thinking is dangerous — it leads to ideas
President of the Board of the Polish Free and Open Source Software Foundation. Human rights in the digital era hacktivist, Free Software advocate, privacy and anonymity evangelist; expert volunteer to the Panoptykon Foundation; co-organizer of SocHack social hackathons; charter member of the Warsaw Hackerspace; Telecomix co-operator; biker, sailor.
Yesterday Bloomberg broke the news that the NSA is said to have known about Heartbleed (http://en.wikipedia.org/wiki/Heartbleed) for months, if not years, without telling anybody — and the wheels of the media and blogosphere have started churning out reactions ranging from surprised through shocked to outraged.
Frankly, I am most surprised by the fact that anybody is surprised. After Snowden's revelations we should all have gotten used to the fact that what once was crazy tin-foil-hat paranoia is today entirely within the realm of the possible.
Even less surprisingly, a quick denial has been issued on behalf of the NSA. Regular smoke and mirrors, as anybody could have expected, but with one very peculiar — and telling — paragraph (emphasis mine):
In response to the recommendations of the President’s Review Group on Intelligence and Communications Technologies, the White House has reviewed its policies in this area and reinvigorated an interagency process for deciding when to share vulnerabilities. This process is called the Vulnerabilities Equities Process. Unless there is a clear national security or law enforcement need, this process is biased toward responsibly disclosing such vulnerabilities.
What this means is that when a bug is found by a "security" agency, it might not get responsibly disclosed. If "there is a clear national security or law enforcement need", it might be used in a weaponized form instead.
With the "America under attack" mentality and the ongoing "War on Terror" waged across the globe, we can safely assume that there is "a clear national security need", at least in the minds of those making these decisions.
And we need to remember that if there is a bug and somebody has found it (but not disclosed it), somebody else will eventually find it too. It might be Neel Mehta or Marek Zibrow, who then discloses it responsibly; or it might be Joe Cracker, who exploits it or sells it to some shady organisation.
And because we all use the same encryption mechanisms, the same protocols and often the same implementations, it will then be used against us all.
Now, it is crucial to understand that this is not just about the NSA and Heartbleed. It's about all "security" agencies and any software bugs. By not responsibly disclosing discovered bugs, "security" agencies make us all considerably less secure.
Regardless of whether the NSA knew about Heartbleed or not, such a non-disclosure policy is simply irresponsible — and unacceptable.
A few months ago Jim Farley, a Ford representative, blurted out during a panel at CES that:
We know everyone who breaks the law, we know when you're doing it. We have GPS in your car, so we know what you're doing. By the way, we don't supply that data to anyone.
The comments were not very positive, to say the least, and both Mr Farley and Ford's PR manager retracted the statement immediately — underlining that the gathered data would only be used after anonymisation, or only with the driver's explicit consent. In other words, "this is no surveillance".
Of course, once the data reaches Ford's servers, the only thing keeping Ford from giving it away is their promise. That seems pretty thin to me — especially with the money insurance providers can throw at this (not to mention law enforcement).
Damian Szymański, Gazeta.pl: What is Ecologic's idea and how can it help us all lower costs of using cars?
Emil Żak, Robert Bastrzyk: Today nobody keeps track of the costs of using their car. It turns out that annually these can add up to more than the value of the car itself. Tires, petrol, insurance, repairs, etc. It all costs. Our device analyses every action of the driver. It signals what we have done wrong and suggests what we could change to lower the cost of petrol, for example. Moreover, we have access to this data 24 hours a day.
Not at all. The question is how the driver drives their car. Ecologic is a mobile app, an online portal and a device that you connect to your car. Thanks to that we can gather all sorts of data, for example about fuel consumption...
What kinds of data are collected? Ecologic's website claims that the device is "equipped with the motion sensor, accelerometer, SIM card, cellular modem and GPS", and that:
The system immediately begins recording operating data of the vehicle, the GPS position and driving techniques in real-time.
So the idea is to collect data like GPS position, acceleration and braking, vehicle utilization, and driving technique, and send it all off to Ecologic's servers. It doesn't seem to differ wildly from what Ford has in store, with the (apparently) nice addition of the driver being able to check their own data and stats. Sounds great!
However, a question arises: what happens with the data? Even if Ford's "promise" not to share with anybody seems thin, Ecologic doesn't even try to hide that the real money is in selling access to gathered data.
In the "For Who" (sic) section of their website we can find the real target group (emphasis mine):
Private users — keep an eye on the young driver in the family
Small business — fast and easy management of vehicles
Fleets — keep the fleet under control & save costs
Leasing Companies — lower the accident rate and track miles
Insurance — give discounts on no-claims & safe driving
Of course one very important group is missing from that list: I am sure law enforcement will be quick to understand the utility of requiring that any and all cars have the device installed, doing away with costly traffic enforcement cameras without losing the ability to issue speeding tickets. After all, would Ecologic deny law enforcement access to the data?
Ah, but Ecologic cares about drivers' impression of being surveilled:
Your driver after work can switch off live tracking to feel conftable without impression that he is "spied". A button on the mobile app allows the driver to indicate that the current trip is personal and help you to track private km. (sic!)
So the driver can "switch off live tracking", but the system will nonetheless help you (i.e. the employer) track "private km"? So these data also have to land on Ecologic's servers, eh? Apart from the employer, who else will have access to this "private trip" data? Insurance companies? Law enforcement goes without saying, of course.
In the interview, Ecologic claims that:
It's all about motivation and healthy competition. We need to change the way we think. Instead of a stick, we want to give people two carrots.
It's a pity that for the drivers themselves this translates into three sticks — employer, insurance provider and law enforcement.
This is my NetMundial content proposal, with some typos fixed and minor edits.
ICANN and IANA decentralisation efforts mark an important milestone in the evolution of the Internet: there is finally widespread recognition of the fact that centrally controlled bodies pose a threat to the free and open nature of the Internet. ICANN and IANA are, however, but a small part of a much larger problem.
More and more, communication platforms and methods are being secondarily centralized; that is, in a network decentralized at lower protocol levels, services are run that are centralized at higher levels. Closed services run on top of a network based on open standards, and are then used by other entities as the base for their own services.
In other words, some private services — offering, for example, user authentication methods — are being used as a de facto infrastructure by large numbers of other entities.
If we recognize the dangers of a centrally-controlled domain name system, we should surely recognize the danger of this phenomenon as well.
It is of great value that the importance of decoupling IP address management and domain name system management from a single state actor has been recognized, and that there is currently a strong push towards multistakeholderism in this area.
There is, however, a secondary, emergent centralization happening on the Internet that can potentially pose a comparable, or even greater, threat to the interconnected, open and independent nature of this global network.
This centralization is harder to perceive as dangerous, as it is not being actively supported by any state actor; hence, it falls under the radar of many Internet activists and technologists who would react immediately had a similar process been facilitated by a government. It does, however, have the potential to bring negative effects similar to a state-sponsored centralization of infrastructure.
Another reason this process goes unnoticed, or its possible negative effects get downplayed, is that it is fluid and emergent, arising from the behaviour of many actors and enforced by the network effect.
This process is most visibly exemplified by Facebook, which has gathered over a billion users by providing a centrally-controlled walled garden, while at the same time offering an API to developers willing to tap into this vast resource, for example to use it as an authentication service. Now, many if not most Internet services requiring log-in offer Facebook log-in as one of the options. Some (a growing number) offer Facebook as the only option. Many offer a commenting system devised by Facebook that does not allow anonymous comments — a user has to have a Facebook account to be able to take part in the discussion.
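The coupling is easy to see in code: every relying service builds its login flow around the provider's authorization endpoint, so if that endpoint changes or disappears, every such service breaks at once. Below is a minimal sketch of the redirect-building step of a generic OAuth 2.0-style "log in with X" flow; the endpoint URL, scope name and client details are illustrative assumptions, not any specific vendor's API:

```python
from urllib.parse import urlencode

# Hypothetical identity provider's authorization endpoint. In practice this
# would be Facebook's (or another provider's) OAuth URL -- a single address
# that thousands of unrelated services depend on.
AUTH_ENDPOINT = "https://provider.example/oauth/authorize"

def build_login_url(client_id, redirect_uri, state):
    """Build the URL a relying service sends its users to for third-party login."""
    params = {
        "response_type": "code",    # ask the provider for an authorization code
        "client_id": client_id,     # the relying service's app ID at the provider
        "redirect_uri": redirect_uri,  # where the provider sends the user back
        "scope": "basic_profile",   # assumed scope name; varies per provider
        "state": state,             # anti-CSRF token generated by the service
    }
    return AUTH_ENDPOINT + "?" + urlencode(params)

url = build_login_url("my-app-id", "https://myservice.example/callback", "xyz123")
print(url)
```

If the provider rejects the app, changes its terms, or simply goes down, this URL, and with it the relying service's entire login path, stops working: that is the dependency described above.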
Similarly, Google is forcing Google+ on YouTube users; to a lesser extent, Google Search is used by a swath of Internet services as their default internal search engine (that is, to search their own website or service). GMail is also by far the most popular e-mail and XMPP service, which gives Google immense power over both protocols.
These are two examples of services offered by private entities (in this case, Google and Facebook) that have become de facto public infrastructure, meaning that an immense number of other services rely on them and require them to work.
If we recognize the danger of a single state actor controlling ICANN or IANA, we can surely recognize the danger of a single actor (regardless of whether it is a state actor or not) controlling such an important part of Internet infrastructure.
Regardless of why this situation emerged (users' lack of tech-savviness, service operators wanting the easiest and cheapest solutions to implement and integrate, etc.), it causes several problems for the free and open Internet:
If such a large part of services and actors depends on a single service (like Facebook or GMail), this in and of itself introduces a single point of failure. It is not entirely in the realm of the impossible for those companies to fail — who would then provide the service? We have also seen both of them (like any other large tech company) suffer large-scale downtime events, taking down the services based on them as well.
In the most basic sense, any user of a service based on these de facto infrastructures has to comply with and agree to the Terms of Service of the underlying service (i.e. Facebook, Google). If many or most Internet services have that requirement, users and service operators alike lose independence over what they accept.
Operators of such de facto infrastructures are not obliged to provide their services in an open and standard manner — running mostly in the application layer, these services usually block any attempts at interoperation. Examples include Twitter changing its API TOS to shut off certain types of applications, Google announcing the planned shut-off of XMPP server-to-server communication, and Facebook using XMPP for its internal chat service with server-to-server communication disabled.
With such immense and binary ("either use it, or lose it") control over users' and other service providers' data, de facto infrastructure operators do not have any incentives to share information on what is happening with the data they gather. They also have no incentives to be transparent and open about their future plans or protocols used in their services. There is no accountability other than the binary decision to "use it or lose it", which is always heavily influenced by the network effect and the huge numbers of users of these services.
With no transparency, no accountability and a lack of standardization, such de facto infrastructure operators can act in ways that maximize their profits, which in turn can be highly unpredictable and not in line with the best interests of users or the global Internet ecosystem. Twitter's changing of its API TOS is a good example here.
Such de facto infrastructure operators are strongly incentivised to shut off any interoperability attempts. The larger the number of users of their service, the stronger the network effect, the more other services use their service, and the bigger the influence they can have on the rest of the Internet ecosystem. Social networks are a good example here — a Twitter user cannot communicate with a Facebook user, unless they also have an account on the other network.
This is obviously not the case with e-mail (I can run my own e-mail server), at least not yet. The more people use a single provider here (i.e. GMail), the stronger that provider becomes, and the easier it would be for its operator to shut off interoperability with other providers. This is exactly what Google is doing with XMPP.
Lack of predictability, openness and independence obviously also hurts innovation. What used to be a free and open area of innovation is more and more becoming a set of closed-off walled-gardens controlled by a small number of powerful actors.
It is also worth noting that centralized infrastructure on any level (including the level of de facto infrastructure discussed herein) creates additional problems at the human rights level: centralized infrastructure is easy to surveil and censor.
Hence, the first question to be asked is this: when does a private service become de facto public infrastructure?
At this point the question remains unanswered, and there is not a single Internet Governance body, or indeed any actor, able to answer it authoritatively. Nevertheless, we are all in dire need of an answer, and I deem it a challenge for Internet Governance and an important topic that should be included in any Internet Governance Forum, now and in the future.
The second question that ever more urgently requires an answer, if we are to defend an open and non-balkanized Internet, is: what should be done about private services that have become de facto public infrastructure?
This question is also as yet unanswered, but several proposals can be made, including treating such situations as monopolies and breaking them up (thus handling them outside Internet Governance), requiring a public, interoperable API to be available to other implementers, etc. This is perhaps not exactly within the purview of Internet Governance; it is, however, crucial for the Internet as a whole, and I propose it be treated as a challenge to be at least considered at IGFs henceforth.
Usually when I write about public consultations on government ideas, there's not much good I can say. Well, for once this is not the case.
The Ministry of Administration and Digitization is working on its position for the upcoming NetMundial Internet stakeholders meeting in São Paulo. To prepare, the Ministry has announced a call for comments on a document prepared by the European Commission about Internet governance, and has invited several organisations and companies to weigh in on the topic at a multistakeholder meeting in meatspace.
The topic is immensely important, and I hope to elaborate on it soon. In the meantime, however, I'd just like to say that for some time now, NGOs that are interested and competent in this area no longer have to knock on the Ministries' doors. Instead, we're invited alongside ISPs, telcos and large Internet companies, and can freely voice our opinions. Sometimes we even get listened to.
Even better, this time one of the NGOs invited to comment and to attend the meeting was the Warsaw Hackerspace.
So we got @hackerspace.pl addresses into official ministerial communication, and two hackers into ministerial corridors. Expecting the media to go crazy about it in 3... 2... 1...
Some of you might have already noticed (for example via my Diaspora profile) my infatuation with RetroShare. A very interesting communication and file-sharing tool that deserves a proper, full review — for which I, unfortunately, do not have the time.
There are some good things (full peer-to-peer decentralisation, full encryption) and some less good things (the use of SHA1 and the daunting GUI). But today RetroShare really shone, and in an area that is a constant chore for free software...
Now, I know there are many free software projects trying to do VoIP, but none seems to be "there" yet. SIP is hard to set up; Jitsi works within a single server, but for some reason I have never been able to get a working VoIP call via Jitsi with a contact from a different server. The one project that came closest to being usable was QuteCom... "was", as there hasn't been a single new release in two years now.
Just download the software, install it and have the keys generated (that happens automagically), then download the VoIP plugin if it isn't already included (chances are it is; if not, on Linux the retroshare-voip-plugin package is your friend, and users of other OSes can look here).
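On Debian-based systems, the whole setup above reduces to installing two packages; the package names come from the text and from Debian's repositories, but naming and availability may differ on other distros:

```shell
# Install RetroShare and its VoIP plugin (Debian/Ubuntu-style package names;
# other distros may package these differently, or not at all).
sudo apt-get install retroshare retroshare-voip-plugin
```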
Now add a friend, start a chat and voilà — VoIP works. No account on any server needed, no trusting a third party, and it works behind NATs (tested!). And it is already encrypted, so no one can listen in on your communication.
The amazing part? During testing my laptop suspended to RAM. After waking up a few minutes later, the call worked as if nothing had happened.