Songs on the Security of Networks
a blog by Michał "rysiek" Woźniak

Hacker in the Digital Affairs Council

This is an ancient post, published more than 4 years ago.

As such, it might not anymore reflect the views of the author or the state of the world. It is provided as historical record.

It’s official – I have been confirmed as a member of the Digital Affairs Council to the Minister of Administration and Digital Affairs. I was recommended by Internet Society Poland and Polish Linux Users Group.

What is the Digital Affairs Council anyway?

The Council is the “minister’s advisory and consultative body” (as described in art. 17 of the informatisation law). That means that, on the one hand, it doesn’t get to make direct decisions; on the other, the Council’s recommendations will carry a certain weight (at least, that’s the theory).

The Council is an evolution of the Informatisation Council, which had been operating since 2005. Several members of the current Council were involved in that previous incarnation.

According to the law, the Council will propose and give opinions on draft legislation (among others, by the Council of Ministers), documents, development strategies, draft programmes and reports in the areas of informatisation, communications and information society development, the rules governing public registers, the rules and progress of introducing ICT systems in public administration, and even Polish ICT terminology. And…

The Council can initiate activities related to informatisation, ICT market development, and development of information society.

The Council today has 20 members, representing administration, NGOs, technical organisations and business. What recommendations will the Council produce, and which direction will it lean? What will the practicalities of its operation look like? Hard to say today. But the possibilities seem quite interesting.

Who’s in the Council?

I have had the pleasure of meeting several members of the Council on different occasions; not all of them, unfortunately. The ones I know paint an interesting picture.

  • Igor Ostrowski – Council Chairman; lawyer, Vice-Minister of Administration and Digital Affairs during anti-ACTA protests, before that a member of the Prime Minister’s Strategic Advisors Team; such a choice can only please, especially all “opennists” and privacy advocates out there.
  • Joanna Berdzik – Vice-Minister of Education, engaged in the Digital School project (including the Open Textbooks programme).
  • Dominik Skoczek – lawyer, represents the Polish Film-makers’ Association; during anti-ACTA protests he was the head of the Intellectual Property and Media Department in the Ministry of Culture and National Heritage, and responsible for the ACTA process; copyright maximalist, claiming that copyright reform proponents are only in it for “gratis access for users”.
  • Anna Streżyńska – well-known in Poland for her activities while presiding over the Office of Electronic Communications and successful fight against the Polish telco monopolist.
  • Katarzyna Szymielewicz – President and co-founder of the Panoptykon Foundation, unrelenting activist for privacy, freedom and personal autonomy in the times of pervasive surveillance.
  • Alek Tarkowski – “opennist”, Polish Creative Commons chapter co-ordinator, director of the Digital Centre; previously, with Igor Ostrowski, a member of the Prime Minister’s Strategic Advisors Team.
  • Elżbieta Traple – law professor, copyright law expert; during the post-ACTA Ministry of Administration and Digital Affairs workshops she proposed changes to Polish copyright law reaffirming fair use in the digital domain.
  • Jarosław Tworóg – Vice-President of the Board of the National Chamber of Electronics and Telecommunication; I’ve had the pleasure of taking part in several public consultation meetings along with Mr. Tworóg; expert in the area of electronics and telecommunication.
  • Agata Wacławik-Wejman – co-founder and Member of the Board of the Institute of Law and Society, policy counsel at Google.
  • Piotr VaGla Waglowski – operator of prawo.vagla.pl website, lawyer, activist, member of the Council of Panoptykon Foundation, co-initiator of organising Public Domain Day celebrations.

Hence we have openness and privacy activists on one hand, copyright maximalists and representatives of big IT companies on the other. What will come of this – we’ll see.

Public consultations and anonymity

The problem of anonymity – and the connected issue of representativeness – in public consultations (and, more widely, in public debate generally) seems to be a Gordian knot. On the one hand, anonymity is indicated as necessary for a truly independent discourse; on the other, it invites behaviour that is far from desirable.

We tried to tackle this issue (both in the panels and during the workshops) at the Nowe perspektywy dialogu (“New perspectives of dialogue”) conference, held within the framework of the W Dialogu (“In Dialogue”) project – in which the FOSS Foundation cooperates with the Institute of Sociology at the University of Warsaw.

The Problem

Anonymity in a discussion has some advantages:

  • higher comfort of voicing opinions – the participants don’t have to consider what their spouse, boss or priest thinks of what they have to say; nor do they have to be concerned with potential government retribution for opinions that are not in line with the “party line”;
  • higher capacity to change opinions – as one of the attendees noted, anonymous participants are more likely and willing to admit error and change their opinion based on facts and subject matter arguments;
  • reasoning instead of personal connections – anonymity allows the discussion to move beyond personal connections, relations and animosities, and focus more on subject matter arguments and facts.

Obviously, there are also important drawbacks:

  • trolling – likely to be present in any exchange of ideas, trolls are especially drawn to on-line discussions, and anonymity is a strong contributing factor;
  • mandate – it is hard to ascertain that every participant in an anonymous public debate has a mandate to partake in it (consider a participatory budgeting debate in a local community: non-residents shouldn’t be able to influence the decision);
  • lack of transparency – participants can voice their own opinion, but they can also work in the interest of particular companies or interest groups; while this is fine, transparency is crucial in a democratic society: information on how a given interest group lobbied might be important for the final decision, and it is non-trivial to provide accountability and transparency in an anonymous decision-making process;
  • sock-puppets – with anonymous participation, what is to stop certain participants, companies or interest groups from using multiple artificial identities to sway the decision?

Would it be possible to have the anonymous cookie and eat it too, though?

Shades of anonymity

First of all, it is worth remembering that there are several shades of anonymity, depending on:

  • what data is anonymized (e.g. affiliation, full name, address, gender, etc.);
  • with regard to whom is it anonymized (e.g. other participants to a given discussion, discussion organizers, observers, public institutions, media, etc.);
  • at what stage of the discussion the data is anonymized (e.g. only during the discussion but available after it ends, entirely and with regard to the whole discussion and all of its effects, only after the discussion has concluded, etc.).

Additionally, statements in a discussion can be:

  • not signed at all, allowing for full anonymity – this way participants can’t even know whether any two statements were made by the same person or by different persons;
  • signed with a discussion-specific identifier (e.g. a random number), hiding the identity of authors, but making it possible to see which statements in a given discussion (but not beyond) are made by the same person;
  • signed with a global identifier in all discussions on a given platform (again: for example a random number or UUID), making it possible to check all statements a given person made in all discussions, but still not divulging their identity.

The first of these makes it impossible to follow a conversation (there is no way to be sure whether we’re answering the same person or some other participant). The second one allows for better structuring of a given discussion, and makes it easier to follow the exchange of ideas. The last one doesn’t really differ from pseudonymity (apart from the fact that the identifier is chosen by the system instead of by the participants themselves), hence it makes it possible for participants to build identities of sorts within a given platform.
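
To make the difference between these schemes concrete, here is a minimal sketch of how a platform could derive such identifiers – my own illustration, not something proposed at the conference; the keyed-hash (HMAC) approach and all names are assumptions:

```python
# Minimal sketch of the three signing schemes described above.
# All names are hypothetical; a real platform would also need proper
# key management and a way to verify participants' mandate at sign-up.
import hashlib
import hmac
import secrets

SERVER_SECRET = secrets.token_bytes(32)  # known only to the platform


def unsigned() -> None:
    """Scheme 1: no identifier at all -- nobody can tell whether two
    statements come from the same person."""
    return None


def discussion_pseudonym(user_id: str, discussion_id: str) -> str:
    """Scheme 2: stable within one discussion, unlinkable across
    discussions, because the HMAC mixes in the discussion id."""
    msg = f"{discussion_id}:{user_id}".encode()
    return hmac.new(SERVER_SECRET, msg, hashlib.sha256).hexdigest()[:12]


def global_pseudonym(user_id: str) -> str:
    """Scheme 3: stable across the whole platform -- in effect,
    system-assigned pseudonymity."""
    digest = hmac.new(SERVER_SECRET, user_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:12]
```

Because the platform derives the pseudonym itself, it can verify a participant’s mandate at registration time while other participants see only the identifier – one possible way to reconcile the mandate problem with anonymity.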

Different tools, different aims

Anonymity is a tool that can help us achieve certain goals, if we use it with care. How?

The Polish Data Protection Supervisor, Dr Wiewiórowski, made a simple yet powerful distinction: anonymity makes sense and is very useful in general, high-level consultation processes. As soon as we start consulting particular documents and discussing specifics, commas and numbers, transparency and accountability become much more important – as this is where particular interests really come into play, and in a democratic society we need ways to follow these very closely.

This was further supplemented by a thesis that a fully anonymous public consultation process needs to be evaluated with regard to subject matter by the consultation organisers, and its result should be treated as a guideline rather than a definite decision. If a given process is to be completely binding, it needs to be completely transparent.

Hence on one axis we have a whole spectrum of anonymity of public consultation processes, on the other – a spectrum of how general or particular a given process is and how binding it should be. We also know that there is a strong correlation between the two axes: the more detailed and binding a given consultation process is, the more transparency and accountability is needed, hence less anonymity for its participants.

This correlation, I would say, is extremely powerful in organizing the discussion around anonymity in public consultations. It also means that it is impossible to make a decision about anonymity in a given consultation process without deciding first what kind of a process it is supposed to be. This is also crucial to all attempts at creating tools aiming to support such processes.

It’s worth noting we already have examples of quasi-consultation processes from both ends of the spectrum:

  • general elections are partially anonymous (participants are identified to ascertain their mandate, but the vote itself is secret, so that it is impossible to attribute a given ballot to a given voter), while at the same time being very general, high-level and not really binding with regard to particular decisions to be made by representatives (as anybody who voted for a politician just to see them back-pedal on their election-time promises knows full well);
  • consensus meetings around a particular issue are meant to be non-anonymous, fully transparent and accountable (every participant is required to give their name and affiliation), because they are to a large degree binding and concrete.

Another interesting example is the Chatham House Rule:

When a meeting, or part thereof, is held under the Chatham House Rule, participants are free to use the information received, but neither the identity nor the affiliation of the speaker(s), nor that of any other participant, may be revealed.

Hence, during a meeting governed by the Rule participants are not anonymous to each other (which solves the problem of representativeness, helps structure the discussion better, etc.), but after the meeting all participants can expect full anonymity with regard to who said what (which in turn helps make the discussion more open, honest and not tied to the particular interests of participants’ affiliations).

Why being a pirate is not worth it

I have lately been asked to write a short text on “why being a pirate is not worth it”. To be honest, I wasn’t entirely sure how to approach it, so we ended up changing the topic. However, challenges are there to be accepted, hence I decided to make an attempt in my free time and without deadlines. And no, even though my love for the Polish Pirate Party is well-known, this is really not about them.

Undoubtedly, pirates have a very positive public image nowadays, and have had for some time. This fascination has to be romanticism’s illegitimate child: the pirates’ uneven, solitary struggle against the unforgiving elements, and their resistance towards the social norms of their day. Resistance that banished them from society for good.

It’s hard to tell, though, which came first: was the resistance a reaction to rejection, or the other way around? Each pirate would have their own story to tell, and their own reasons.

What we will definitely find in piracy – the idealized version, that is – is admiration of the cold and brutal, yet beautiful, nature; fascination with times long past (with their aesthetic and peculiar ethos); and a tragic yet determined strife for personal freedoms, against all odds and against “the system” (feudal, with some rudimentary capitalism). That strife is what resonates so well today.

Problem is: this image is so idealized, it’s almost unrecognisable. It’s a Hollywood version, simplified and painted pretty, but bearing little relation to historical fact.

Pirates were excluded from society and constantly struggling with the merciless elements – that’s undisputed. However, they were far from being as “anti-system” as we’d like to think: they often had a mandate from one of the sea powers, and operated in a manner we would today call “freelancing”. So much for the romantic ideal of a freedom fighter.

Sailing ship crews, especially pirate crews, were controlled by the iron will (and fist) of the captain; the death toll was always high, and the cruel sea was as much a reason for this as were the brutal and inhumane punishments, administered with the conviction (not that far from the truth) that only fear can keep a crew of bandits in check. Full-blown feudalism, only at sea and drowned in blood.

Of course, pirates’ blood was not the only blood being spilt: crews of captured merchant ships were rarely spared – after all, who’s to feed and guard tens of prisoners in hard conditions at sea?

A pirate’s life was the cruel life of a bandit on an uncompromising sea, threatened from every side: by the elements, the captain, fellow crew members, the crews they attacked, and finally the navy ships trying to keep control over trading routes.

Not a life to envy.


Those of you who expected something about copyright law and copying on the Internet: might I remind you that “piracy” is not downloading music from the Web. I’d like to suggest familiarizing yourself with this helpful infographic.

On Mozilla, DRM and irrelevance

A sad day has come – Mozilla has announced they are bringing DRM (in the form of EME) to Firefox, fearing that without it its users would not be able to access some content, and would hence turn from Firefox towards other browsers.

And while Mozilla goes to great lengths to band-aid this situation as much as possible, a spoonful of sugar won’t make this medicine go down.

Defective by design

First of all, let’s state the obvious: DRM never really works. It can’t – it’s like trying to show something to a user without showing it to the user. The very idea is absurd, and in the digital world unworkable. What DRM does do well is create problems: for paying users, for the free software community, and beyond.

Mozilla knows this; they’re techies. Hence all the effort to make it seem as “detached” from Firefox itself as possible.

Mozilla Chromium

What they do not appreciate, apparently, is that for a long time there have been fewer and fewer reasons to use Firefox instead of, say, Chromium (or other similar browsers). From the end-user’s perspective, Chromium is faster and leaner, and it has many of the same extensions available… And between the interface changes (copycatting Google and bringing grief to those of us who took the time to customize their Firefox experience) and the versioning scheme changes (copycatting Google and bringing grief to those of us who have to support it in our infrastructure), Firefox is becoming more and more a Chromium look-and-feel-alike, instead of the groundbreaking web browser it used to be.

Why use the copy if you can get the real deal?

A question of trust

For me, that reason was always freedom and trust. I trusted Mozilla to protect and defend my freedoms on-line. And I supported this, as many like me did, for instance by using Firefox and installing it for my family members and friends. For some time now, Mozilla has been making moves that strain that trust. And for me personally, introducing DRM to Firefox might just be the straw that breaks the camel’s back.

That means that as soon as I find a fitting, freedom-preserving replacement, I might start installing that for my friends and family.

Will they complain that some websites do not work, that some videos do not play? Yes, they will. But we’ve been through that already – years ago, when Mozilla was taking back the web. Back when Mozilla was about making the web more open, fighting the walled gardens of content, upholding the principles of the open web.

And back then, we’ve won.

The value of Mozilla

Mozilla never seemed to be about the numbers, the accounts, the zeroes. Mozilla was about values, even when that meant some content was harder to access in Firefox. Those values – not user counts, and not sponsorship deals! – were what made Mozilla relevant.

Mozilla and W3C have got things backwards, it was Hollywood that needed to worry about being irrelevant. – Will Hill’s comment in a Diaspora thread

A decade ago we were able to make a change by promoting an open and standards-compliant browser in a world where the whole Internet seemed written for a closed, non-standard blue E. We were able to do that by standing up for Mozilla each time we noticed a website that didn’t work.

Today, Mozilla is not standing up for us in a world where choice and control are at risk.

And in the grand scheme of things, what the free and open Internet needs most is not yet another mobile operating system, but a browser that respects and protects the values and ideas that are at the very heart of the open web.

Mozilla needs to stand on principle, or it will have no standing at all.

Not-quite-good-enough-Mundial

I was invited to NETmundial, held a couple of weeks ago in São Paulo. It was an interesting learning experience for – well, I guess for all involved parties (a.k.a. “multiple stakeholders”; “multistakeholderism” was the buzzword du jour). Sadly, not much more than that.

When the most resounding message in statements made from the stage by organisers and high-profile guests is that the outcome document has to be “good enough”, that sends a strong signal that mediocrity is to be expected.

In that regard, nobody was disappointed.

Now, I do not have as black an opinion of NETmundial as La Quadrature du Net; I even feel that Smári McCarthy’s view that the entire conference was a waste of time goes a wee bit too far.

Still I am far from the optimism expressed by the Polish Ministry of Administration and Digital Affairs (among others). Here’s why.

Background

It would be hard not to notice two prevailing contention issues in, of and about the Internet during the last year or so: privacy and net neutrality.

Both governments and corporations are highly interested in each of these; in each, different governments and corporate entities have (or claim to have) different interests. And – most importantly – both are inseparably connected to human rights in the digital era, and to the future of the Internet as a whole.

Discussion of these issues, especially privacy, gained much steam after Edward Snowden’s revelations about overreaching mass surveillance programmes run by the US National Security Agency.

In response to, among others, these revelations, in late 2013 Brazilian president Dilma Rousseff announced plans to host a global Internet governance meeting, which came to be The Global Multistakeholder Meeting on the Future of Internet Governance, a.k.a. NETmundial.

At the same time, for the last few years a debate had been happening in Brazil around the Marco Civil da Internet. The debate hinged on the very same two crucial issues – privacy and network neutrality. A few weeks before NETmundial the bill cleared the Brazilian congress, and it was passed into law on the first day of NETmundial, April 23rd. The bill contains strong protections of network neutrality and privacy on the Internet.

A few weeks before NETmundial, the European Parliament also voted for a bill that would (among other things) protect network neutrality in the EU.

Process

The process seemed thought-through and geared towards multistakeholderism. The idea was to gather as many people, institutions, NGOs and governments interested in Internet governance as possible, get their input, and prepare a single document outlining the principles and the roadmap for Internet governance.

RFC

Discussion had started long before the April conference. First, a call for submissions was made (around 180 submissions were received, including mine). Each had to refer either to principles or to the roadmap.

Then a first draft of the outcome document was published and opened for comments. Hundreds of these flowed in, and the call for comments ended directly before the conference itself.

Plenary

Finally, the conference was organised as a single-track, massive (more than 800 people in attendance) plenary. After the usual official statements (made by – among others – Mrs. Rousseff, Sir Tim Berners-Lee, Vint Cerf, Nenna Nwakanma, and representatives of several governments, including the Polish Minister of Administration and Digital Affairs), a call for comments – this time submitted in person, via microphones – was opened, and it continued for the better part of the two days.

There were 4 microphones: one for civil society, one for governments, one for academia and the technical community, and one for business. There were about 200 representatives of each of these groups in the room, and each group was represented more or less equally in the composition of the group of people on stage running the event. Microphones were called upon sequentially, and each speaker had 2 minutes (later reduced to 1.5 minutes) to voice their comment.

Interestingly, remote hub participants were also offered the floor (via an audio-video link) after each microphone call sequence, and there were quite a few quality “remote” remarks that added real value to the proceedings.

Each comment, each word, was transcribed and directly shown on-screen. All transcripts are also available on-line, which is a boon for transparency and accountability.

“It’s who counts the votes”

After the plenary ended for the day, all the comments were processed and merged with the outcome document draft by the High-Level Multistakeholder Committee. Sadly, while in the plenary every group had the same power and the same amount of time to voice their concerns, things changed in the Committee: there were 3 representatives each from civil society, the technical community, academia, business, and (surprisingly) “international organisations” (like the… European Commission!). However, there were 12 representatives of governments.

And the Committee meeting was neither recorded nor transcribed. Any NETmundial participant could be in the room where the Committee was working, but they did not have a voice.

There goes multistakeholderism, accountability and transparency, out the window.

Content

In the comments, especially those voiced in the plenary, both net neutrality and privacy/mass surveillance issues were not only present, but – I would say – prevalent. While most comments in support of enshrining network neutrality and including strong wording against mass surveillance in the outcome document came (unsurprisingly) from the civil society, there were such voices also from governments, academia, technical community and business, including this great tidbit by Mr. James Seng:

businesses should also be protected from being coerced by their government or any other legal authorities into mass surveillance.

…and this great comment, coming (surprisingly) not from civil society but from the government side – by Mr Groń, representing the Polish Ministry of Administration and Digital Affairs (which does seem to get it as far as the Internet is concerned):

Text in the current form may suggest that there might be mass surveillance interception and collection programs which are consistent with human rights and democratic values. By definition, mass surveillance is not consistent with human rights and democratic values.

The rule of law and democratic values states that surveillance must respect specific and strict rules. There must be specific legislation setting limits of powers of surveillance authorities and providing necessary protection for citizens’ rights. Use of surveillance mechanisms must be under supervision of court. Such mechanisms may be used only in a case of reasonable suspicion of committing a crime and only against specific person or persons.

Mechanism used must be proportional and may be used only for specific time period.

Many comments called for explicit acknowledgement of Edward Snowden’s role in the conception of NETmundial. Many others for outright calling access to the Internet a human right. Several about the need to connect developing nations.

I also took to the microphone to underline the issue of walled-gardens and consequent growing balkanisation of the Internet.

Of course, voices advocating stronger protection of imaginary property were also there, but (and again, this is my subjective take on it) there were far fewer of them than one would have expected.

And of course there were pro-censorship statements, thinly veiled behind the usual “think of the children” (Tunisia) and “the right of the government to decide what is best for the people” (China).

Outcome

As good as the comments were, the outcome document is sadly very disappointing. There was a strong urge to build a consensus around the document, which obviously meant that certain things were hard to introduce – but during the work of the committees merging comments with the draft documents, several positive changes were introduced, including strong language against mass surveillance both in the Roadmap and in the Principles. The latter was the most clear-cut:

Mass surveillance is not compatible with the right to privacy or the principle of proportionality.

Then the draft document, merged and polished by the respective Principles and Roadmap committees, went to the High-Level Multistakeholder Committee for consideration. And that’s where things got cut and mangled. The strong anti-mass-surveillance language disappeared, leaving only a watered-down version that can be read as suggesting that mass surveillance can be carried out in a way that is compatible with human rights law.

Make no mistake – this is due to vehement opposition to such strong condemnation of mass surveillance, voiced by none other than the United States. The US representative went as far as to state that, in the view of the US (compare and contrast with the Polish statement above):

Mass surveillance not always a violation of privacy.

For the same reason there is no acknowledgement of Edward Snowden in the document, of course. And, of course, these positions were voiced unequivocally only at the HLMC meeting – the one that was neither recorded nor transcribed.

Net neutrality got the boot, and was only included as a “point to be further discussed beyond NETmundial” (along with roles of stakeholders, jurisdiction issues and benchmarking).

Finally, intermediary liability only got a weak acknowledgement, anchored in “economic growth, innovation, creativity and free flow of information”, instead of human rights (like freedom of expression or privacy):

Intermediary liability limitations should be implemented in a way that respects and promotes economic growth, innovation, creativity and free flow of information.

Little wonder, then, that civil society organisations decided to voice their disappointment with the outcome document in a common statement; its last sentence seems a fitting summary:

We feel that this document has not sufficiently moved us beyond the status quo in terms of the protection of fundamental rights, and the balancing of power and influence of different stakeholder groups.

Conclusions

The document is far from satisfactory, especially in the context of the very reasons NETmundial was conceived (mass dragnet surveillance by the US) and the legislative work being done around network neutrality (including the Marco Civil and the European Parliament vote). And as far as privacy and mass surveillance are concerned, we have known of their rising importance for more than a decade. Time to up our game.

With the FCC proposing watered-down and meaningless net neutrality rules during the NETmundial proceedings, US agencies blatantly advocating more surveillance, and a smartphone remote “kill-switch” law being passed in California, NETmundial could have sent a strong, unambiguous signal about the need to protect human rights in the digital domain as well.

Instead, due to political pressure to find a compromise, however mediocre and meaningless (quipped to be an “overwhelmingly rough consensus”), the outcome document doesn’t really introduce any new quality to the debate.

To some extent, though, it’s the journey that counts.

NETmundial was as much an Internet governance meet-up as an experiment in multistakeholderism. And even though it was slanted (due to, among other things, the HLMC having a large over-representation of governments), even though it was far from perfect, even though the process could have been better designed, it is still an experiment we can learn a lot from.

I feel that somewhere along the road NETmundial organisers missed the fact that:

Multistakeholderism is a framework and means of engagement, it is not a means of legitimization. – via Wikipedia

With eyes on the prize of a consensual outcome document, there was a vague feeling that civil society had been invited to the table to legitimize the process and the outcome, and that there were few concessions that would not be considered in order to keep all parties at the table.

It eventually turned out a bit better, and I consider the fact that the US had to unequivocally advocate mass surveillance to be one of the positive outcomes of this meeting. The king had to acknowledge his own lack of clothing.

While it is hard to disagree with Jérémie Zimmermann, writing for La Quadrature du Net:

Governments must consider the Internet as our common good, and protect it as such, with no compromise.

…we can, and should, learn from NETmundial. As Human Rights Watch put it:

What was evident throughout the two days of discussions in São Paulo is that a “multistakeholder” approach to Internet governance – however vague a term, or however difficult a concept to implement – is a far more inclusive and transparent approach than any process where only governments have a seat at the table.

I think I’ll finish this off with a question raised by Smári McCarthy:

We’re going to need to do something better. The people running OurNETmundial were doing a fairly good job of drawing attention to the real issues. Perhaps OurNETmundial should become an event. But where? When? By whom? And how do we avoid cooption?

Irresponsible non-disclosure

Yesterday Bloomberg broke the news that the NSA is said to have known about Heartbleed for months, or even years, without telling anybody – and the wheels of the media and the blogosphere started churning out reactions ranging from surprised through shocked to outraged.

Frankly, I am most surprised by the fact that anybody is surprised. After Snowden’s revelations we should all have gotten used to the fact that what once was crazy tin-foil-hat paranoia is today entirely within the realm of the possible.

Even less surprisingly, a quick denial was issued on behalf of the NSA. The regular smoke and mirrors, as anybody could have expected, but with one very peculiar – and telling – paragraph (emphasis mine):

In response to the recommendations of the President’s Review Group on Intelligence and Communications Technologies, the White House has reviewed its policies in this area and reinvigorated an interagency process for deciding when to share vulnerabilities. This process is called the Vulnerabilities Equities Process. Unless there is a clear national security or law enforcement need, this process is biased toward responsibly disclosing such vulnerabilities.

What this means is that when a bug is found by a “security” agency, it might not get responsibly disclosed. If “there is a clear national security or law enforcement need”, it might be used in a weaponized form instead.

With the “America under attack” mentality and the ongoing “War on Terror” waged across the globe, we can safely assume that there is “a clear national security need”, at least in the minds of those making these decisions.

And we need to remember that if there is a bug, and somebody has found it (but not disclosed it), somebody else will find it, eventually. It might be Neel Mehta or Marek Zibrow, who then discloses it responsibly; or it might be Joe Cracker, who exploits it or sells it to other shady organisations.

And because we all use the same encryption mechanisms, the same protocols and often the same implementations, it then will be used against us all.

Now, it is crucial to understand that this is not about the NSA and Heartbleed. It’s about all “security” agencies and any software bug. By not responsibly disclosing the bugs they discover, “security” agencies make us all considerably less secure.

Regardless of whether the NSA did or did not know about Heartbleed, such a non-disclosure policy is simply irresponsible – and unacceptable.

Ecologic, Ford and surveillance

A few months ago Jim Farley, a Ford representative, blurted out during a panel at CES that:

We know everyone who breaks the law, we know when you’re doing it. We have GPS in your car, so we know what you’re doing. By the way, we don’t supply that data to anyone.

Comments were not very positive, to say the least, and both Mr Farley and Ford’s PR manager retracted the statement immediately – underlining that gathered data would only be used after anonymisation, or only after explicit consent by the driver. In other words: “this is no surveillance”.

Of course, once the data reaches Ford’s servers the only thing keeping Ford from giving them away is their promise. Seems pretty thin to me – especially with the money insurance providers can throw at this (not to mention law enforcement).

Ford isn’t the only company that strives to “help” drivers by gathering data on them. A Polish startup, Ecologic (winners of the Warsaw Startup Fest), had this to say (emphasis mine):

Damian Szymański, Gazeta.pl: What is Ecologic’s idea and how can it help us all lower costs of using cars?

Emil Żak, Robert Bastrzyk: Today nobody keeps track of the costs of using their cars. It turns out that annually these can add up to more than the value of the car itself. Tires, petrol, insurance, repairs, etc. It all costs. Our device analyses every action of the driver. It signals what we have done wrong and suggests what we can change to lower the cost of petrol, for example. Moreover, we have access to this data 24/7.

Total surveillance?

Not at all. The question is how the driver drives their car. Ecologic is a mobile app, online portal and a device that you connect in your car. Thanks to that we can have all sorts of data, for example about combustion…

What kinds of data are collected? Ecologic’s website claims that the device is “equipped with the motion sensor, accelerometer, SIM card, cellular modem and GPS”, and that:

The system immediately begins recording operating data of the vehicle, the GPS position and driving techniques in real-time.

So the idea is to collect data like GPS position, acceleration and braking, vehicle utilization and driving technique, and send these off to Ecologic’s servers. It doesn’t seem to differ wildly from what Ford has in stock, with the (apparently) nice addition of the driver being able to check their own data and stats. Sounds great!

However, a question arises: what happens with the data? Even if Ford’s “promise” not to share with anybody seems thin, Ecologic doesn’t even try to hide that the real money is in selling access to gathered data.

In the “For Who” (sic) section of their website we can find the real target group (emphasis mine):

  • Private users – keep an eye on the young driver in the family
  • Small business – fast and easy management of vehicles
  • Fleets – keep the fleet under control & save costs
  • Leasing Companies – lower the accident rate and track miles
  • Insurance – give discounts on no-claims & safe driving

Of course, one very important group is missing from that list: I am sure law enforcement will be quick to understand the utility of requiring that any and all cars install the device – no more need to deal with costly traffic enforcement cameras, without losing the ability to issue speeding tickets. After all, would Ecologic deny law enforcement access to the data?

Ah, but Ecologic cares about drivers’ impression of being surveilled:

Your driver after work can switch off live tracking to feel conftable without impression that he is “spied”. A button on the mobile app allows the driver to indicate that the current trip is personal and help you to track private km. (sic!)

So the driver can “switch off live tracking”, but the system will nonetheless help you (i.e. the employer) track “private km”? So these data also have to land on Ecologic’s servers, eh? Apart from the employer, who else will have access to this “private trip” data? Insurance companies? Law enforcement goes without saying, of course.
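
A purely hypothetical sketch of such a telemetry record – the field names are my invention, not Ecologic’s documented protocol – makes the concern concrete: flagging a trip as “private” plausibly just flips a field on a record that is transmitted either way.

```python
# Hypothetical telemetry record such a device might upload.
# Field names are invented for illustration; this is not Ecologic's API.
from dataclasses import asdict, dataclass
import json
import time


@dataclass
class TelemetryRecord:
    device_id: str      # ties every record to one specific car
    timestamp: float    # when the sample was taken
    lat: float          # GPS position, precise enough to
    lon: float          # reconstruct every trip after the fact
    speed_kmh: float
    accel_ms2: float    # harsh acceleration / braking events
    private_trip: bool  # "private" is a flag, not an off switch


record = TelemetryRecord("dev-42", time.time(), 52.2297, 21.0122,
                         87.5, -3.2, private_trip=True)
print(json.dumps(asdict(record)))  # off to the vendor's servers it goes
```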

In the interview, Ecologic claims that:

It’s all about motivation and healthy competition. We need to change the way we think. Instead of a stick, we want to give people two carrots.

It’s a pity that for the drivers themselves this translates into three sticks – employer, insurance provider and law enforcement.

Blurry line between private service and public infrastructure

This is my NetMundial content proposal, with some typos fixed and minor edits.

Abstract

ICANN and IANA decentralisation efforts mark an important milestone in the evolution of the Internet: there is finally widespread recognition of the fact that centrally controlled bodies pose a threat to the free and open nature of the Internet. ICANN and IANA are, however, but a small part of a much larger problem.

More and more, communication platforms and methods are secondarily centralized; that is, in a network decentralized at the lower protocol levels, services are being run that are centralized at higher levels. Running on a network based on open standards are closed services, which are then used by other entities as a base for their own services.

In other words, some private services – offering, for example, user authentication methods – are being used as a de facto infrastructure by large numbers of other entities.

If we recognize the dangers of a centrally-controlled domain name system, we should surely recognize the danger of this phenomenon as well.

Document

It is of great value that the importance of decoupling IP address management and domain name system management from a single state actor has been recognized, and that there is currently a strong push towards multistakeholderism in this area.

There is, however, a secondary, emergent centralization happening on the Internet that can potentially pose a comparable, or even bigger, threat to the interconnected, open and independent nature of this global network.

This centralization is harder to perceive as dangerous, as it is not being actively supported by any state actor; hence, it flies under the radar of many Internet activists and technologists who would react immediately had a similar process been facilitated by a government. It does, however, have the potential to bring negative effects similar to a state-sponsored centralization of infrastructure.

Another reason this process happens unnoticed, and its possible negative effects are underestimated, is that it is fluid, emergent from the behaviour of many actors, and enforced by the network effect.

This process is most visibly exemplified by Facebook gathering over 1 billion users by providing a centrally-controlled walled garden, while at the same time offering an API to developers willing to tap into this vast resource – for example, to use it as an authentication service. Now, many if not most Internet services that require logging in offer Facebook log-in as one of their options. Some (a growing number) offer Facebook as the only option. Many use the commenting system devised by Facebook, which does not allow anonymous comments – a user has to have a Facebook account to be able to partake in the discussion.
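
To illustrate the shape of this dependency, here is a minimal, purely illustrative sketch of a service delegating its logins to a single external identity provider; every URL and name below is hypothetical – this is the general OAuth-style pattern, not Facebook’s actual API:

```python
# Hypothetical sketch: a small service that has outsourced all logins
# to one external identity provider. Nothing here is a real API.
import urllib.parse

PROVIDER_AUTH_URL = "https://idp.example.com/oauth/authorize"  # hypothetical
CLIENT_ID = "tiny-service"  # issued by the provider, revocable at will


def login_redirect_url(callback_url: str) -> str:
    """Build the URL users are sent to in order to log in. The service
    cannot log anyone in without the provider's continued cooperation."""
    params = urllib.parse.urlencode({
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": callback_url,
    })
    return f"{PROVIDER_AUTH_URL}?{params}"


print(login_redirect_url("https://tiny-service.example/callback"))
```

Multiply this pattern across thousands of services and the provider becomes de facto infrastructure: if it goes down, changes its terms, or revokes a client id, every dependent login breaks at once.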

Similarly, Google is forcing Google+ on YouTube users; to a lesser extent, Google Search is being used by a swath of Internet services as their default internal search engine (that is, used to search their own website or service). GMail is also by far the most popular e-mail and XMPP service, which gives Google immense power over both.

These are two examples of services offered by private entities (in this case, Google and Facebook) that have become de facto public infrastructure, meaning that an immense number of other services rely on them and require them to work.

If we recognize the danger of a single state actor controlling ICANN or IANA, we can surely recognize the danger of a single actor (regardless of whether it is a state actor or not) controlling such an important part of Internet infrastructure.

Regardless of the reasons why this situation emerged (users’ lack of tech-savviness, service operators’ preference for solutions that are easiest and cheapest to implement and integrate, etc.), it causes several problems for the free and open Internet:

  • it hurts resilience

If such a large part of services and actors depends on a single service (like Facebook or GMail), this in and of itself introduces a single point of failure. It is not entirely in the realm of the impossible for these companies to fail – who will, then, provide the service? We have also seen both of them (like any other large tech company) suffer large-scale downtime events, taking the services based on them down as well.

  • it hurts independence

In the most basic sense, any user of a service built on these de facto infrastructures has to comply with and agree to the underlying service’s (i.e. Facebook’s, Google’s) Terms of Service. If many or most Internet services have that requirement, users and service operators alike lose independence in deciding what they accept.

  • it hurts openness

Operators of such de facto infrastructures are not obliged to provide their services in an open and standard manner – running mostly in the application layer, these services usually block any attempts at interoperation. Examples include Twitter changing their API TOS to shut off certain types of applications, Google announcing the planned shut-off of XMPP server-to-server communication, and Facebook using XMPP for its internal chat service with server-to-server communication shut off.

  • it hurts accountability and transparency

With such immense and binary (“either use it, or lose it”) control over users’ and other service providers’ data, de facto infrastructure operators do not have any incentives to share information on what is happening with the data they gather. They also have no incentives to be transparent and open about their future plans or protocols used in their services. There is no accountability other than the binary decision to “use it or lose it”, which is always heavily influenced by the network effect and the huge numbers of users of these services.

  • it hurts predictability

With no transparency, no accountability and a lack of standardization, such de facto infrastructure operators can act in ways that maximize their profits, which in turn can be highly unpredictable, and not in line with users’ or the global Internet ecosystem’s best interests. Twitter’s changing of its API TOS is a good example here.

  • it hurts interoperability

Such de facto infrastructure operators are strongly incentivised to shut off any interoperability attempts. The larger the number of users of their service, the stronger the network effect, the more other services use their service, and the bigger the influence they can have on the rest of the Internet ecosystem. Social networks are a good example here – a Twitter user cannot communicate with a Facebook user, unless they also have an account on the other network.

This is obviously not the case with e-mail (I can run my own e-mail server), at least not yet. But the more people use a single provider (i.e. GMail), the stronger that provider becomes, and the easier it would be for its operator to shut off interoperability with other providers. This is exactly what Google is doing with XMPP.

  • it hurts innovation

Lack of predictability, openness and independence obviously also hurts innovation. What used to be a free and open area of innovation is more and more becoming a set of closed-off walled-gardens controlled by a small number of powerful actors.

It is also worth noting that centralized infrastructure at any level (including the level of de facto infrastructure discussed herein) creates additional problems at the human rights level: centralized infrastructure is easy to surveil and censor.


Hence, the first question to be asked is this: when does a private service become de facto public infrastructure?

At this point this question remains unanswered, and there is not a single Internet Governance body – or indeed any actor – able to answer it authoritatively. Nevertheless, we are all in dire need of an answer to this question, and I deem it a challenge for Internet Governance and an important topic that should be included in any Internet Governance Forums now and in the future.


The second question that ever more urgently requires an answer if we are to defend the open and not balkanized Internet is: what should be done about private services that have become de facto public infrastructure?

This question is also as yet unanswered, but there are several possible proposals that can be made, including treating such situations as monopolies and breaking them up (i.e. handling them outside Internet Governance), requiring a public, interoperable API available to other implementers, etc. This is perhaps not exactly in the purview of Internet Governance; it is, however, crucial for the Internet as a whole, and I propose it be treated as a challenge to be at least considered at IGFs henceforth.

IM IN UR MINISTRY, CONSULTING UR INTERNETZ

Usually when I rant – ahem, write – about public consultations of government ideas, there’s not much good I can say. Well, for once this is not the case.

The Ministry of Administration and Digitization is working on its position for the upcoming NetMundial Internet stakeholders meeting in São Paulo. To prepare for that, the Ministry has announced a call for comments on a document prepared by the European Commission about Internet governance, and has invited several organisations and companies to weigh in on the topic at a multistakeholder meeting in meatspace.

The topic is immensely important, and I hope to elaborate on it soon. In the meantime, however, I’d just like to say that for some time now, NGOs that are interested and competent in this area no longer have to knock on ministries’ doors. Instead, we’re invited alongside ISPs, telcos and large Internet companies, and can freely voice our opinions. Sometimes we even get listened to.

Even better, this time one of the NGOs invited to comment and to attend the meeting was the Warsaw Hackerspace.

So we got @hackerspace.pl addresses into official ministerial communication, and two hackers into ministerial corridors. Expecting the media to go crazy about it in 3… 2… 1…

Encrypted VoIP that works

Some of you might have already noticed (for example via my Diaspora profile) my infatuation with RetroShare. A very interesting communication and file-sharing tool that does deserve a proper, full review – for which I do not, unfortunately, have time.

There are some good things (full peer-to-peer decentralisation, full encryption), and there are some less good things (the use of SHA1, and the daunting GUI). But today RetroShare really shone, and in an area that is constantly a chore for free software…

VoIP

Now, I know there are many free software projects trying to do VoIP, but none seems to be “there” yet. SIP is hard to set up; Jitsi works within a single server, but for some reason I have never been able to get a working VoIP call via Jitsi with a contact from a different server. The one project that came closest to being usable was QuteCom… “was”, as there hasn’t been a single new release for 2 years now.

Enter RetroShare.

Just download the software, install it and have the keys generated (that happens automagically), then download the VoIP plugin if you don’t have it already included (chances are you have; if not, on Linux the retroshare-voip-plugin package is your friend, and users of other OSes can look here).

Now add a friend, start a chat and voilà – VoIP works. No account on any server needed, no trusting a third party, and it works behind NATs (tested!). It is also encrypted, so no one can listen in on your communication.

The amazing part? During testing, my laptop suspended to RAM. After it woke up a few minutes later, the call continued as if nothing had happened.