thinking is dangerous — it leads to ideas
Member of the Board of the Polish Linux Users Group. Digital-era human rights hacktivist, Free Software advocate, privacy and anonymity evangelist; expert volunteer to the Panoptykon Foundation; co-organizer of SocHack social hackathons; charter member of the Warsaw Hackerspace; Telecomix co-operator; biker, sailor.
Formerly President of the Board of the Polish Free and Open Source Software Foundation, CTO of the BRAMA Mobile Technologies Laboratory at the Warsaw University of Technology, and a student at the Institute of Philosophy of the University of Warsaw.
I find that in most situations where any mishap is involved, especially with any large institutions in the picture, Hanlon's razor tends to apply, and is a good working model to base assumptions on.
This has been the case with most Internet censorship debates in Poland, for instance. Assuming malice really wasn't helping to get our point across.
This is why I am flabbergasted by the NSA's (and the rest of the gang's) insistence on gathering as much data as they can. Sure, to most regular Jacks or Jills, "you need the haystack to find the needle" might sound about right. A more observant person, however, might do a double-take: "wait, what?" When I'm searching for a needle, the last thing I want or need is an ever-larger haystack. Something's fishy.
Then, they might go the extra mile and dig a bit, finding out that NSA's data has no real impact on anti-terrorism efforts. Maybe they'll even dig out a 2007 Stratfor report on the "obstacles to the capture of Osama", pointing out things like:
[T]he Taliban and al Qaeda so far have used their home-field advantage to establish better intelligence networks in the area than the Americans.
One big problem with this, according to sources, was that most of these case officers were young, inexperienced and ill-suited to the mission.
Or this gem:
This lack of seasoned, savvy and gritty case officers is complicated by the fact that, operationally, al Qaeda practices better security than do the Americans.
And while one of the sections of the report is indeed entitled "Needle in a Haystack", it doesn't exactly support the "we need the whole haystack" narrative of the NSA and its ilk. Because this narrative simply makes no sense. Why? Because math.
When we're talking about searching large datasets for something, we need to account for false positives and false negatives. The larger the dataset, the larger a problem they become. But don't take my word for it, Floyd Rudmin has written a great analysis of this back in 2006:
Suppose that NSA’s system is really, really, really good, really, really good, with an accuracy rate of .90, and a misidentification rate of .00001, which means that only 3,000 innocent people are misidentified as terrorists. With these suppositions, then the probability that people are terrorists given that NSA’s system of surveillance identifies them as terrorists is only p=0.2308, which is far from one and well below flipping a coin. NSA’s domestic monitoring of everyone’s email and phone calls is useless for finding terrorists.
That's right. Even if we assume amazingly good accuracy, the agency has a better chance of catching a terrorist by flipping a coin than by actually using the data they gather.
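Rudmin's number drops out of straightforward Bayesian arithmetic. Here is a minimal sketch; the population and terrorist counts are my assumptions, chosen to be of the order Rudmin used (they also reproduce his figure of roughly 3,000 misidentified innocents):

```python
# Base-rate arithmetic behind Rudmin's p = 0.2308.
# Assumed inputs: the population and terrorist counts below are
# illustrative, not taken verbatim from the original article.
population = 300_000_000   # rough US population
terrorists = 1_000         # assumed number of actual terrorists
accuracy = 0.90            # P(flagged | terrorist)
false_positive = 0.00001   # P(flagged | innocent)

true_hits = accuracy * terrorists                        # 900 real hits
false_hits = false_positive * (population - terrorists)  # ~3,000 innocents

p = true_hits / (true_hits + false_hits)
print(round(p, 4))  # → 0.2308
```

In other words, fewer than one in four people flagged by such a system would actually be a terrorist, and a larger haystack only makes the ratio worse.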
That's exactly why I am flabbergasted: usually that would be the point where I'd call upon Hanlon's razor. But we have just assumed that NSA is really, really competent in what they're doing, and what they're doing is, in no small part, math.
So either they are very, very competent and understand that mass surveillance cannot work the way the NSA claims it is supposed to; or they are not competent enough to know this, in which case they lack the most basic skills needed to work with the datasets they have. They can't have it both ways!
The scary possibility is that NSA knows this full well, and yet they still gather the data. Why would they do this? Well, while it might not be all that useful to catching terrorists, it might be a game-changer in areas where the numbers are different. Again, Floyd Rudmin puts it best:
Also, mass surveillance of the entire population is logically plausible if NSA’s domestic spying is not looking for terrorists, but looking for something else, something that is not so rare as terrorists. For example, the May 19 Fox News opinion poll of 900 registered voters found that 30% dislike the Bush administration so much they want him impeached. If NSA were monitoring email and phone calls to identify pro-impeachment people, and if the accuracy rate were .90 and the error rate were .01, then the probability that people are pro-impeachment given that NSA surveillance system identified them as such, would be p=.98, which is coming close to certainty (p ≈ 1.00).
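This is the same Bayes formula as before, only with a base rate of 30% instead of a few in a million; plugging in the quoted figures gives roughly Rudmin's near-certainty:

```python
# Same formula, different base rate: dissenters are common,
# terrorists are not. Figures are the ones quoted above.
base_rate = 0.30  # share of pro-impeachment voters in the poll
accuracy = 0.90   # P(flagged | pro-impeachment)
error = 0.01      # P(flagged | everyone else)

p = (accuracy * base_rate) / (accuracy * base_rate + error * (1 - base_rate))
print(round(p, 2))  # → 0.97, near certainty (vs. ~0.23 for terrorists)
```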
So are the NSA and other security agencies too incompetent to understand mass surveillance is useless for its stated purpose, or are they competent enough to understand it and the real purpose is just a bit different?
Neither possibility makes me feel safer. Or be safer, for that matter.
David Cameron's bright idea to ban encryption that is not backdoored for UK law enforcement, backed, of course, by Barack Obama, is not exactly popular among the geeks and the technically savvy.
The main argument against the ban goes: if an encryption system has a master key, the "bad guys" can get hold of it or discover it, too. The whole encryption scheme, then, is critically flawed.
Apart from that, the prevailing view among the geeks and hackers can be summarized as "good luck banning it, I'm going to use it anyway and what are they going to do about that? They're not going to put us all in jail!"
Problem is, the ban is not about banning encryption. It's about criminalizing its use and flagging those who use it.
Hence, the whole technical community (hackers, activists, IT specialists, etc.) discussing the technical merits of the proposal, and the technical means of getting around it once introduced, misses the point completely. Technical issues are not relevant to the British PM and his ilk.
Right now, John McDoe using an HTTPS-protected website or a TLS-protected IMAP server uses basically the same crypto that a Tor-using privacy activist does. AES, Diffie-Hellman key exchange, public-key crypto: they are all there. These are tried and true, based on solid math, ingeniously used.
If any of these elements gets compromised, it's compromised for everybody. The security of your bank's HTTPS-protected website is directly connected to the security of Tor or GnuPG.
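To see why there is only one crypto, consider the Diffie-Hellman key exchange mentioned above. A toy sketch (with deliberately tiny, insecure parameters; real TLS and Tor use large, vetted groups) shows the shared-secret trick that banks and activists alike rely on:

```python
# Textbook Diffie-Hellman with toy parameters.
# WARNING: p = 23 is for illustration only; real deployments use
# primes of 2048 bits or more, and randomly generated private keys.
p, g = 23, 5        # public: a small prime and a generator
a, b = 6, 15        # private keys of the two parties

A = pow(g, a, p)    # first party sends g^a mod p
B = pow(g, b, p)    # second party sends g^b mod p

# Each side combines the other's public value with its own secret;
# both arrive at the same shared key without ever transmitting it.
shared_a = pow(B, a, p)
shared_b = pow(A, b, p)
assert shared_a == shared_b
print(shared_a)  # → 2
```

Weaken this exchange for one group of users, say by mandating a backdoored key-agreement step, and it is weakened for every user of the same protocol.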
And of course, this is as deplorable to those doing the listening as it is obvious to the techies.
Making strong, non-backdoored crypto illegal is a neat "solution" to this "problem".
Banks and large corporations will bend over, because being prosecuted for non-compliance with "legislation critical to national security" is not good for business. Besides, they're patriots, right?
Anything used or offered officially by any company in the UK or the US will have to be backdoored. This will "solve the problem" of commercially-available secure platforms, offering good security and privacy for non technically-savvy users. You either pay for backdoored encryption, or are on your own using (unwieldy at times) FLOSS tools.
Of course, the tech-savvy can still use the encryption tools, and help the less technically fluent to do so too. However, when they do, they become criminals. The Government does not have to show that you did anything illegal other than the simple fact that you used non-backdoored encryption services or software.
The very fact of wanting to stay secure and keep your privacy will become a criminal offence.
How can they prove you used non-backdoored encryption tools? Simply by saying so, provided that you used any encryption at all. This also means that even if you do use a backdoored encryption platform, the Government can always claim that this particular platform has not been backdoored, and therefore you still broke the law. You have no way of proving otherwise. Can we guess how that plays out?
Nobody's going to be putting non-backdoored encryption users in jail by the dozen, no doubt. But as soon as the Government wants you, they can have you. By the balls or behind bars.
Free as in Freedom,
not free as in beer
Richard M. Stallman's quote, well known to free software advocates, brings clarity to an ambiguous term — "free" can refer to freedom, or can mean "gratis"; both can be on-topic as far as software is concerned. It has also become, in a way, the motto of the free software movement.
Many initiatives draw inspiration from the free software philosophy: the libre culture movement, Wikipedia, open educational resources, and many others build on ideas floated by and tested within free and open source software projects. The "free as in freedom, not free as in beer" thought is also present outside the freedom-loving software developers' world.
Usually it's the first part of the quote that gets the most attention and focus. It is about freedom, after all, and not about whether or not something is available gratis. This focus was (and is) required to clearly demarcate software, culture or educational resources that give and preserve freedoms of their users from those that are just available cost-free (allowing for access, yet denying the rest of the Four Freedoms); the priceless from the zero-priced.
We might need to shift that accent, however. Software developers, artists and creators of educational resources, libre or not, have to eat, too.
Richard Stallman introduced a simple yet effective criterion of whether a given piece of software (or any other resource, for that matter) is freedom-preserving: its license has to guarantee the Four Freedoms, that is, the freedom to run the program for any purpose; to study how it works and adapt it to one's needs; to redistribute copies; and to distribute modified versions.
To make extending the set of libre software easier, the first free software license, the GNU GPL, employed one more trick: copyleft, the requirement that all software based on GPL-licensed software must also be distributed under the same terms.
The copyleft clause has since become a point of contention within the free/libre/open-source software community. The debate between its detractors and proponents is as vivid today as it was 30 years ago.
The MIT/BSD crowd argues that copyleft denies developers of derivative works (in this case, software based on a GNU GPL-licensed project) the freedom to close their project or change the license.
The GNU GPL side points out that even if that particular freedom is denied in such a case, it's for the greater good — others, including the users of the derivative work, have their four freedoms preserved.
The debate, then, concerns the freedom of the derivative work's author to close that work, versus the four freedoms of all users, ever. And of course, this is relevant not only to software.
Within the software development world and outside of it, the copyleft clause tends to be considered "bad for business". Authors of derivative works would like to be able to close their works regardless of the licensing of the originals, so as to earn a living from them; after all, how can one make money on something that anyone is free to copy at will?
The answer lies with new business models, compatible with the culture of sharing (and the sharing of culture). Crowdfunding, voluntary payment-based models, making money on merchandise (like band t-shirts) or concerts, and (in the case of software) selling services like feature implementation, support, or deployment allow creators to thrive and earn a living even though (or, as is often the case, precisely because) fans share their works.
These models are not obvious and may seem uncertain, and yet more and more often they finance productions large and small. On the other hand, the "tried and tested" ways of making money on creative work are no guarantee of profit either, all the more so with the market saturated by huge companies.
Preference for non-copyleft licenses might stem from a lack of trust in the new models: "I might want to sell a closed product based on this; what then?" However, if I can close something, others can, too. We're all worse off.
The Heartbleed debacle illustrates this well. A trivial software bug in a popular free software library used on the Net by big and small alike to provide secure transmission had huge consequences for the whole FLOSS ecosystem, and broader: for the whole Internet. It also remained undiscovered for years.
The software involved, the OpenSSL library, is available under a non-copyleft license. It is used by many companies, including most of the heavyweights (Google, Facebook, and Amazon among them), in their products and services.
They use it, but do not really help develop this crucial piece of software. OpenSSL developers did not have the funds for regular code audits that would have discovered the bug long before it caused any harm.
Large companies also do not share their modifications; OpenSSL's license does not require it, so why would they? It turns out Facebook had modified their version of OpenSSL in a way that (probably inadvertently) made it not susceptible to the bug.
Had OpenSSL used a copyleft license, requiring sharing modified code with the community, Heartbleed might have been discovered much earlier, causing much less harm.
The development of free software, libre culture and open educational resources has its cost. Thousands donate their time and expertise, and share the effects of their work. This is often overlooked, usually when the "it's gratis" argument is being used to advocate FLOSS.
It is not. Time to start properly valuing the work put into those initiatives. And to support them, also financially.
Copyleft, it turns out, can help here too: if nobody can close my work, I can also use everybody else's enhancements. We're all better off.
This is my GPG key transition statement. I am transitioning off of my old key:
To a new key:
The old key has not been compromised. The main reason for transition is this weak subkey:
I have generated a new, much stronger key. And I have done so in a way that (to an extent) protects me from ugly consequences of a possible private key loss (think: stolen laptop, with keys). I used these three great howtos:
With their help I have generated a master keypair, stowed away in a safe place; and a laptop keypair that I use day-to-day.
The master keypair has never touched my laptop or any device associated with me: it was generated on an air-gapped, randomly picked loaner laptop at the Warsaw Hackerspace (every hackerspace has a few of these), running a copy of TAILS.
From it, the laptop keypair was also generated on the air-gapped loaner laptop. Then the master keypair was transferred to its storage medium, and the laptop pair to my laptop; both were safely wiped from the loaner afterwards (besides, everything was happening on a ramdisk anyway).
The minor inconvenience of this setup is that I can only sign other people's keys with my master keypair, i.e. when I am not travelling.
Below you'll find my key transition statement. You can also download this statement signed by both the old and the new key.
GPG Key Transition Statement
Date: 30th December, 2014
For a number of reasons, I've recently set up a new OpenPGP key, and will be transitioning away from my old one.
The old key will continue to be valid for some time, but I prefer all future correspondence to come to the new one. I would also like this new key to be re-integrated into the web of trust. This message is signed by both keys to certify the transition.
The old key was:
pub 4096R/0x5337E3B760DEC17F 2011-09-28 [expires: 2014-12-30]
Key fingerprint = 07FD 0DA1 72D3 FC66 B910 341C 5337 E3B7 60DE C17F
And the new key is:
pub 4096R/0xEAA4EC8179652B2E 2014-10-14 [expires: 2020-10-12]
Key fingerprint = D0E9 E1E3 D80A 098A 0D0D 7EC4 EAA4 EC81 7965 2B2E
To fetch the full key from a public key server, you can simply do:
gpg --keyserver keys.riseup.net --recv-key 'D0E9 E1E3 D80A 098A 0D0D 7EC4 EAA4 EC81 7965 2B2E'
If you already know my old key, you can now verify that the new key is signed by the old one:
gpg --check-sigs 'D0E9 E1E3 D80A 098A 0D0D 7EC4 EAA4 EC81 7965 2B2E'
If you don't already know my old key, or you just want to be double extra paranoid, you can check the fingerprint against the one above:
gpg --fingerprint 'D0E9 E1E3 D80A 098A 0D0D 7EC4 EAA4 EC81 7965 2B2E'
If you are satisfied that you've got the right key, and the UIDs match what you expect, I'd appreciate it if you would sign my key. You can do that by issuing the following command:
NOTE: if you have previously signed my key but did a local-only signature (lsign), you will not want to issue the following; instead, you will want to use --lsign-key, and not send the signatures to the keyserver.
gpg --sign-key 'D0E9 E1E3 D80A 098A 0D0D 7EC4 EAA4 EC81 7965 2B2E'
I'd like to receive your signatures on my key. You can either send me an e-mail with the new signatures (if you have a functional MTA on your system):
gpg --export 'D0E9 E1E3 D80A 098A 0D0D 7EC4 EAA4 EC81 7965 2B2E' \
| gpg --encrypt -r 'D0E9 E1E3 D80A 098A 0D0D 7EC4 EAA4 EC81 7965 2B2E' \
--armor | mail -s 'OpenPGP Signatures' email@example.com
Additionally, I highly recommend that you implement a mechanism to keep your key material up-to-date so that you obtain the latest revocations and other updates in a timely manner. You can do regular key updates by using parcimonie to refresh your keyring. Parcimonie is a daemon that slowly refreshes your keyring from a keyserver over Tor. It uses a randomized sleep, and fresh Tor circuits for each key. The purpose is to make it hard for an attacker to correlate the key updates with your keyring.
I also highly recommend checking out the excellent Riseup GPG best practices doc, from which I stole most of the text for this transition message ;-)
Please let me know if you have any questions, or problems, and sorry for the inconvenience.
Michał "rysiek" Woźniak
Can't leave parliamentarians alone for 3 days, can you?
Today, the Administration and Digitization Commission of the Sejm (the lower chamber of the Polish Parliament) approved for further proceedings a draft "Resolution concerning actions to limit children's access to pornography on the Internet", which previously "call[ed] upon the Minister of Administration and Digitization to guarantee parents a right to porn-free Internet". The final draft is still not available on the Sejm website, but it should soon be available here.
In comparison with the original project the new text is... better, although that does not mean it's any good. Here it is for your reading pleasure (please note: the translation is mine and unofficial, and I omit the rather unimportant "whereas..." part):
By Sejm of the Republic of Poland
Concerning actions to limit children's access to pornography on the Internet
1. Sejm of the Republic of Poland moves for the Minister of Administration and Digitization to prepare solutions which will guarantee parents a right to access the Internet network free from pornography.
2. These solutions should follow these guidelines:
a. Any person should have the possibility to block transmission of any pornographic materials;
b. An internet service provider should provide tools that would allow blocking transmission of pornographic materials;
c. An internet service provider is required to provide tools that would allow blocking transmission of pornographic materials free of charge;
d. An internet service provider can disable access to pornographic materials. An agreement with a customer should reflect this.
3. Minister of Administration and Digitization shall present a proposal of such solutions within 18 months from the date of adoption of this resolution.
Yep. The Commission convened on this issue a mere week after the previous session, not giving enough time to properly prepare and have a serious discussion. At least the text has been changed in a way that makes it not entirely absurd (only a bit absurd, depending on who is reading it).
One could read the text of the resolution in a way that would give the Ministry the possibility to simply reply:
There are parental filters available, free of charge, for any software platform, KTHXBAI.
...or, in a way that would require an answer along those lines:
ISPs are required to "voluntarily" censor the Net on the level of their core infrastructure, opt-in or opt-out.
Basically, we need to make sure that (providing that the resolution clears Sejm) the Ministry will not go in the direction of a solution that would introduce central filtering of the Internet.
The only sane solution I see is filtering on end-user devices (including home routers). During last year's consultations on this very topic, this is exactly the solution we suggested the Ministry should go with. Time to take it off the shelf, I guess.
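For illustration, opt-in blocking on a home router can be as simple as a DNS rule, for instance in dnsmasq, a DNS forwarder commonly shipped on home routers (the domain below is a placeholder, not a real blocklist entry):

```conf
# Hypothetical dnsmasq rule on a home router: resolve a blocked
# domain to an unroutable address. The filter stays on the user's
# own device, opt-in, and fully under the user's control.
address=/blocked.example/0.0.0.0
```

The point is architectural: the effect parents are asking for can live at the network's edge, without any ISP-level or central infrastructure touching everyone's traffic.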
Now the Sejm has to decide, and this will happen during the next few weeks. Unfortunately, the modified draft apparently has the support of the ruling coalition, so I'd like to invite Poles to write to their representatives. In the meantime, I'm gearing up for an 18-month fight to keep any filtering, be it obligatory or "voluntary" (as in the UK), limited to end-user devices and away from any central level.
This means a lot of work; if you feel it's important or valuable — support Panoptykon.