
Songs on the Security of Networks
a blog by Michał "rysiek" Woźniak

Even with EME, Mozilla will become "the browser that can't"

This is an ancient post, published more than 4 years ago.
As such, it might not anymore reflect the views of the author or the state of the world. It is provided as historical record.

In the weeks since Mozilla’s DRM-embracing decision to include EME, there have been quite a few voices defending Mozilla’s decision. Most of the serious defence basically boils down to: we had to; without EME/DRM support Firefox would be the browser that can’t play video, and users would turn to other browsers, which would jeopardize our work for freedom and the open Internet in other areas.

As I have written before, users already have little reason to stay with Firefox, and the strongest selling point for many of the users still on Firefox has for the longest time been: that’s the freedom-preserving browser. With EME/DRM in Firefox, this reason is moot.

What’s tragic is that even with EME/DRM inside – which has already cost Firefox some users from the freedom crowd (and inspired at least one fork, of course) – Firefox is bound to also lose ground with the less freedom-centred crowd.

Think about it for just a short while. The whole basic idea of DRM is flawed beyond repair – software that has to make some content available to a user (to be viewed, for example), and simultaneously make the same content not available to the same user (so that it’s not possible to copy it).

This scheme has serious problems working even in closed-source, black-box software (sometimes even fails hilariously). How is it supposed to work in an open-source browser?

Let’s ponder a scenario, shall we?

1. Mozilla implements EME in Firefox

…and has DRM solution suppliers (like Adobe) write DRM plugins for Firefox. That’s where we’re at today.

2. Firefox now has to have some sort of protection of the decoded media stream

…so that it’s only available to the browser itself (to display to the user), but not to extensions – otherwise, get ready for an extension that grabs the decoded media stream and saves it to disk (completely side-stepping any DRM) in 3… 2… 1…

3. May the forks be with you!

Say, how about a fork that removes this very protection of the decoded media stream, but leaves in-place the rest of the EME/DRM infrastructure? Somebody is bound to do it. At this point DRM/EME is completely side-stepped in this Firefox fork.

One of the defenders of Mozilla’s EME/DRM decision, Ben Moskovitz, remarks:

enabling users to do more is a feature.

Being able to save the media stream to disk sounds to me like enabling users to do more. Let us guess, then, which Firefox version will now become more and more popular, eh?

4. Hollywood and DRM providers get wind of the fork

Is there anything Mozilla can do to plug this hole? Not as long as the code is open and free-as-in-freedom! Ah, well, Hollywood won’t have any of that hippie bullshit, so they push DRM providers to remove support for Firefox (and its forks).

5. Game over

Mozilla lands with EME infrastructure and no DRM providers willing to write a plugin using it (as it would jeopardize their relationship with Hollywood), freesofties have already long moved to some more freedom-preserving browser, and regular users move to any browser that has DRM plugins for its EME infrastructure. You know, the closed-source ones.

After all, why would they stay on “a browser that can’t”?

EuroDIG 2014


Another day, another conference on Internet governance, this time close enough to go there on my own dime. Besides, Berlin is always a treat.

As was to be expected of a conference organised in ministerial halls, for the most part, when it wasn’t objectionable, it was mind-bogglingly dull. And yes, the WiFi was as good as it gets at such events.

I have a strong policy of going to conferences mainly for the hallway/coffee chit-chat and making new acquaintances, and it was a winner this time around too.

Off for a “good” start

Starting off with welcoming addresses by the powers that be, including Neelie Kroes, who deemed the conference so important that she made a video appearance (how about we agree on a rule that a politician wanting a slot in a conference agenda can either come in person or pass entirely; no pre-recorded videos, please!), the conference gave no hope for anything of significance happening within the confines of the programme.

Thankfully, you can always count on activists to bring the gravitas along. And while having Edward Snowden on the panel (or as a keynote speaker) would have been the right thing to do, several Edward Snowdens in the audience were the next best thing.

Multistakeholderism meets gender equality

The first panel focused on lessons learned from NETmundial, and made a good first impression with no chair available for the only female panelist. Were there any civil society participants on the panel? Of course not. Questions from the floor about that fact (asked by the undersigned) and about the glaring gender disproportion on the panel (asked by Mrs. O’Loughlin of the Council of Europe) were waved off as “off-topic”.

A representative of the organisers also remarked on how hard it was to find women for the panel. They tried; they just couldn’t find any in the right positions.

Let’s ponder this for just a moment, even though I don’t even know where to start.

I could say, for instance, that equality (gender, and otherwise) was a big issue at NETmundial, as evidenced in the opening address by Nnenna Nwakanma. I could refer you, Dear Reader, to concepts like the glass ceiling, and note how this is no excuse for not including women in the panel on equal standing. I could, as I did in my question, note the irony of a panel about lessons from NETmundial (y’know, the multistakeholder conference on Internet governance) composed almost entirely of men, and with no representative of the third sector.

Or I could point out, that including civil society in the panel might have made it easier for the organisers to find female panelists, as while the glass ceiling is indubitably also sadly present in civil society, it doesn’t seem to be as prevalent as in government and business sectors.

Here’s a simple exercise: make your own suggestions of female panelists. I have my own shortlist, of course.

Sudden outbreak of relevancy

However, there is always hope. The “When the public sphere became private” workshop proved to be both inspiring and interesting, and the exchange of ideas relevant and much deeper than I would have expected.

It did help that the topic sounded eerily familiar, but the discussion went far and wide, touching on a number of related issues.

Private vs. privately-owned

There was an important distinction that had to be made, as became apparent in the course of the discussion, between two meanings of the word “private”, in the context of communication infrastructures.

The first meaning is “pertaining to or supportive of privacy”. Here, a private communication medium would mean a medium that ensures the privacy of the communication between the communicating parties.

The second one is, of course, “privately-owned”, with private communication medium meaning a medium owned by a private entity.

Obviously, a similar distinction has to be made for the word “public” in the same context.

With this in mind it’s easy to see how crucial misunderstandings can arise when using these terms without making clear which of the particular meanings we have in mind. Specifically, privately-owned infrastructure can be (and often is) hostile towards privacy of the communicating parties.

Public sphere in private infrastructure

When the whole infrastructure is privately-owned, privacy is not the only problem. The public sphere is crucial to democratic processes, but today it is increasingly being replaced by privately-owned and controlled fora. Public discourse should not, however, be contingent on rules made unilaterally by private entities. Or, as one of the workshop panelists neatly put it:

Public agora cannot underlie a business model based on surveillance

As always, the first step is admitting that we do have a problem, and I take it we are getting ready for such an admission. Finally. But what’s really interesting is the next step – what should we do about it? There is, unfortunately, no clear answer, but several ideas have been floated.

One of these is open standards, or requiring the operators of such privately-owned fora to at least supply APIs allowing full interoperability between different providers (think Facebook interoperating with Google+). Another (crazy, I’ll give you that!) idea – floated by a friend of mine some time ago – is to have the source code of all software available at least for inspection, just like the ingredients listing on packaged food.

Yet another would be mandating privacy impact assessment on all lawmaking activities, and on infrastructural decisions made (for instance) on governmental levels.

Finally, there was this gem:

Governments need to pass human rights as technical requirements

That’s something that really got my attention, as for some time now I have been pondering that we – the technical community, geeks, free-softies, etc. – should start making software with the assumption that if some abuse is possible, it is inevitable. And start designing our software for privacy just as we design it for security. I’ll elaborate on that in a separate post.

All of these need further thought and consideration; some might turn out workable, some might turn out impossible, and some combination of them might be the right way to proceed.

But the right questions are apparently finally being asked. I’m not holding my breath, but maybe next time we’ll even be able to find some less locked-down solution instead of a Twitter wall to bring in the remote participation…

Hacker in the Digital Affairs Council


It’s official – I have been confirmed as a member of the Digital Affairs Council to the Minister of Administration and Digital Affairs. I was recommended by Internet Society Poland and Polish Linux Users Group.

What is the Digital Affairs Council anyway?

The Council is the “minister’s advisory and consultative body” (as described in art. 17 of the informatisation law). That means that on one hand it doesn’t really get to make direct decisions; on the other, however, the Council’s recommendations will carry certain weight (at least, that’s the theory).

The Council is an evolution of the Informatisation Council, operating since 2005. Several members of the current Council had been involved in that previous installment.

According to the law, the Council will propose, and give opinions on, draft official positions (among others, of the Council of Ministers), documents, development strategies, programme drafts and reports in the areas of informatisation, communications, information society development and the rules regarding the functioning of public registers, the rules and state of introducing ICT systems in public administration, and even Polish ICT terminology. And…

The Council can initiate activities related to informatisation, ICT market development, and development of information society.

The Council today has 20 members, representing administration, NGOs, technical organisations and business. What recommendations will the Council produce, and which direction will it lean? What will the practicalities of its operation look like? Hard to say today. But the possibilities seem quite interesting.

Who’s in the Council?

I have had the pleasure of meeting several members of the Council on different occasions; not all of them, unfortunately. The ones I know paint an interesting picture.

  • Igor Ostrowski – Council Chairman; lawyer, Vice-Minister of Administration and Digital Affairs during anti-ACTA protests, before that a member of the Prime Minister’s Strategic Advisors Team; such a choice can only please, especially all “opennists” and privacy advocates out there.
  • Joanna Berdzik – Vice-Minister of Education, engaged in the Digital School project (including the Open Textbooks programme).
  • Dominik Skoczek – lawyer, represents the Polish Film-makers’ Association; during anti-ACTA protests he was the head of the Intellectual Property and Media Department in the Ministry of Culture and National Heritage, and responsible for the ACTA process; copyright maximalist, claiming that copyright reform proponents are only in it for “gratis access for users”.
  • Anna Streżyńska – well-known in Poland for her activities while presiding over the Office of Electronic Communications and successful fight against the Polish telco monopolist.
  • Katarzyna Szymielewicz – President and co-founder of the Panoptykon Foundation, unrelenting activist for privacy, freedom and personal autonomy in the times of pervasive surveillance.
  • Alek Tarkowski – “opennist”, Polish Creative Commons chapter co-ordinator, director of the Digital Centre; previously, with Igor Ostrowski, a member of the Prime Minister’s Strategic Advisors Team.
  • Elżbieta Traple – law professor, copyright law expert; during the post-ACTA Ministry of Administration and Digital Affairs workshops she proposed changes to Polish copyright law reaffirming fair use in the digital domain.
  • Jarosław Tworóg – Vice-President of the Board of the National Chamber of Electronics and Telecommunication; I’ve had the pleasure of taking part in several public consultation meetings along with Mr. Tworóg; expert in the area of electronics and telecommunication.
  • Agata Wacławik-Wejman – co-founder and Member of the Board of the Institute of Law and Society, policy counsel at Google.
  • Piotr VaGla Waglowski – operator of prawo.vagla.pl website, lawyer, activist, member of the Council of Panoptykon Foundation, co-initiator of organising Public Domain Day celebrations.

Hence we have openness and privacy activists on one hand, copyright maximalists and representatives of big IT companies on the other. What will come of this – we’ll see.

Public consultations and anonymity


The problem of anonymity – and the connected issue of representativeness – in public consultations (and wider: in public debate generally) seems to be a Gordian knot. On one hand, anonymity is indicated as necessary for a truly independent discourse; on the other, it invites behaviour that is far from desirable.

We tried to tackle this issue (both in the panels and during the workshops) at the Nowe perspektywy dialogu (“New perspectives of dialogue”) conference, held within the framework of the W Dialogu (“In Dialogue”) project – in which the FOSS Foundation cooperates with the Institute of Sociology at the University of Warsaw.

The Problem

Anonymity in a discussion has some advantages:

  • higher comfort of voicing opinions – the participants don’t have to consider what their spouse, boss or priest thinks of what they have to say; nor do they have to be concerned with potential government retribution for opinions that are not in-line with the “party line”;
  • higher capacity to change opinions – as one of the attendees noted, anonymous participants are more likely and willing to admit error and change their opinion based on facts and subject matter arguments;
  • reasoning instead of personal connections – anonymity allows the discussion to move beyond personal connections, relations and animosities, and focus more on subject matter arguments and facts.

Obviously, there are also important drawbacks:

  • trolling – likely to be present in any exchange of ideas, trolls are especially drawn to on-line discussions, and anonymity is a strong contributing factor;
  • mandate – it is hard to ascertain that every participant in an anonymous public debate has a mandate to partake in it (consider a participatory budgeting debate in a local community: non-residents shouldn’t be able to influence the decision);
  • lack of transparency – participants can voice their own opinion, but can work in the interest of particular companies or interest groups as well; while this is fine, transparency is crucial in a democratic society: information on how a given interest group lobbied might be important for the final decision, and it is non-trivial to provide accountability and transparency in an anonymous decision-making process;
  • sock-puppets – with anonymous participation, what is to stop certain participants, companies or interests groups from using multiple artificial identities to sway the decision?

Would it be possible to have the anonymous cookie and eat it too, though?

Shades of anonymity

First of all, it is worth remembering that there are several shades of anonymity, depending on:

  • what data is anonymized (e.g. affiliation, full name, address, gender, etc.);
  • with regard to whom it is anonymized (e.g. other participants in a given discussion, discussion organizers, observers, public institutions, media, etc.);
  • at what stage of the discussion the data is anonymized (e.g. only during the discussion but available after it ends, entirely and with regard to the whole discussion and all of its effects, only after the discussion has concluded, etc.).

Additionally, statements in a discussion can be:

  • not signed at all, allowing for full anonymity – this way participants can’t even know whether any two statements were made by the same person or by different people;
  • signed with a discussion-specific identifier (e.g. a random number), hiding the identity of authors, but making it possible to see which statements in a given discussion (but not beyond) are made by the same person;
  • signed with a global identifier in all discussions on a given platform (again: for example a random number or UUID), making it possible to check all statements a given person made in all discussions, but still not divulging their identity.

The first of these makes it impossible to follow a conversation (there’s no way to be sure if we’re answering the same person or some other participant). The second one allows for a better structuring of a given discussion, and makes it easier to follow the exchange of ideas. The last one doesn’t really differ from pseudonymity (apart from the fact that the identifier is chosen by the system instead of by the participants themselves), hence it makes it possible for participants to build identities of sorts within a given platform.
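The second and third signing schemes can be implemented with a keyed hash, so the platform never needs to store a pseudonym lookup table at all. A minimal sketch, assuming a hypothetical server-side secret key; the function names and the HMAC construction here are my own illustration, not a description of any existing consultation platform:

```python
import hashlib
import hmac

# Hypothetical server-side secret; in practice this would be generated
# and stored securely by the platform operator.
SECRET_KEY = b"platform-secret-key"

def discussion_pseudonym(user_id: str, discussion_id: str) -> str:
    """Discussion-specific identifier: stable within one discussion,
    unlinkable across different discussions."""
    message = f"{discussion_id}:{user_id}".encode()
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()[:12]

def global_pseudonym(user_id: str) -> str:
    """Platform-wide identifier: stable across all discussions,
    but still hiding the participant's real identity."""
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()[:12]

# The same participant gets a stable identifier within one discussion...
a = discussion_pseudonym("alice", "budget-2014")
b = discussion_pseudonym("alice", "budget-2014")
# ...but an unrelated one in another discussion:
c = discussion_pseudonym("alice", "zoning-2014")
```

Without knowing the secret key, observers cannot link the per-discussion identifiers back to a person, or to each other; with the global variant, statements become linkable across the whole platform while the identity itself stays hidden.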

Different tools, different aims

Anonymity is a certain tool that can help us achieve certain goals, if we use it with care. How?

The Polish Data Protection Supervisor, Dr Wiewiórowski, made a simple yet powerful distinction: anonymity makes sense and is very useful in general, high-level consultation processes. As soon as we start consulting on particular documents and discussing specifics, commas and numbers, transparency and accountability become much more important – as this is where particular interests really come into play, and we need ways to follow these very closely in a democratic society.

This was further supplemented by a thesis that a fully anonymous public consultation process needs to be evaluated with regard to subject matter by the consultation organisers, and its result should be treated as a guideline rather than a definite decision. If a given process is to be completely binding, it needs to be completely transparent.

Hence on one axis we have a whole spectrum of anonymity of public consultation processes, on the other – a spectrum of how general or particular a given process is and how binding it should be. We also know that there is a strong correlation between the two axes: the more detailed and binding a given consultation process is, the more transparency and accountability is needed, hence less anonymity for its participants.

This correlation, I would say, is extremely powerful in organizing the discussion around anonymity in public consultations. It also means that it is impossible to make a decision about anonymity in a given consultation process without deciding first what kind of a process it is supposed to be. This is also crucial to all attempts at creating tools aiming to support such processes.

It’s worth noting we already have examples of quasi-consultation processes from both ends of the spectrum:

  • general elections are partially anonymous (participants are identified to ascertain their mandate, but the vote itself is secret, so that it is impossible to attribute a given ballot to a given voter), while at the same time being very general, high-level and not really binding with regard to particular decisions to be made by representatives (as anybody who ever voted for a politician just to see them back-pedal on their election-time promises knows full well);
  • consensus meetings around a particular issue are meant to be non-anonymous, fully transparent and accountable (every participant is required to give their name and affiliation), because they are to a large degree binding and concrete.
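The partial anonymity of general elections can be made concrete with a toy model that keeps two deliberately unlinked records: a public roll of who voted, and a pile of ballots with no voter identity attached. This is a simplified illustration of the principle, not of any real voting system:

```python
import random

eligible = {"alice", "bob", "carol"}   # hypothetical voter roll
voted = set()                          # public record: who has cast a ballot
ballots = []                           # ballots, stored with no voter identity

def cast_vote(voter: str, choice: str) -> bool:
    """Verify the mandate first, then store the ballot unlinked from the voter."""
    if voter not in eligible or voter in voted:
        return False                # no mandate, or an attempted double vote
    voted.add(voter)                # the identity goes only into the roll...
    ballots.append(choice)          # ...never next to the choice itself
    random.shuffle(ballots)         # even insertion order reveals nothing
    return True

cast_vote("alice", "yes")           # accepted
cast_vote("mallory", "no")          # rejected: not on the roll
cast_vote("alice", "no")            # rejected: double vote
```

Anyone can verify that alice voted and that exactly one ballot exists, yet nothing links her to its content – the mandate check and the vote itself live in separate records.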

Another interesting example is the Chatham House Rule:

When a meeting, or part thereof, is held under the Chatham House Rule, participants are free to use the information received, but neither the identity nor the affiliation of the speaker(s), nor that of any other participant, may be revealed.

Hence, during a meeting governed by the Rule participants are not anonymous to each other (which solves the problem of representativeness, helps structure the discussion better, etc.), but after the meeting all participants can expect full anonymity with regard to who said what (which in turn helps make the discussion more open, honest and not tied to the particular interests of participants’ affiliations).

Why being a pirate is not worth it


I have lately been asked to write a short text on “why being a pirate is not worth it”. To be honest, I wasn’t entirely sure how to approach it, so we ended up changing the topic. However, challenges are there to be accepted, hence I decided to make an attempt in my free time and without deadlines. And no, even though my love towards the Polish Pirate Party is well-known, this is really not about them.

Undoubtedly, pirates have a very positive public image, and have had one for some time now. This has to be romanticism’s illegitimate child: this fascination with pirates’ uneven, solitary struggle against the unforgiving elements, and their resistance towards the social norms of their day. Resistance that banished them from society for good.

It’s hard to tell, though, which goes first: was the resistance a reaction to rejection, or the other way around? Each pirate would have their own story to tell, and their own reasons.

What we will definitely find in piracy – the idealized version, that is – is admiration of cold, brutal, yet beautiful nature, a fascination with times long past (with their aesthetics and peculiar ethos), and a tragic yet determined strife for personal freedoms, against all odds and against “the system” (feudal, with some rudimentary capitalism). That strife is what resonates so well today.

The problem is: this image is so idealized, it’s almost unrecognisable. It’s a Hollywood version, simplified and painted pretty, but without much relation to historical facts.

Pirates were excluded from society, and constantly struggling with merciless elements – that’s undisputed. However, they were far from being as “anti-system” as we’d like to think: they often had a mandate from one of the sea powers, and operated in a manner we would today call “freelancing”. So much for the romantic ideal of a freedom fighter.

Sailing ship crews, especially pirate crews, were controlled by the iron will (and fist) of the captain; the death toll was always high, and the cruel sea was as much a reason for this as the brutal and inhumane punishments administered with the conviction (not that far from the truth) that only fear could keep a crew of bandits in check. Full-blown feudalism, only at sea and drowned in blood.

Of course, pirates’ blood was not the only blood being spilt: the crews of captured merchant ships were rarely spared – after all, who was going to feed and guard dozens of prisoners in the hard conditions at sea?

A pirate’s life was the cruel life of a bandit on an uncompromising sea, threatened from every side: by the elements, the captain, fellow crew members, attacked crews and, finally, navy ships trying to keep control over trading routes.

Not a life to envy.


Those of you who expected something about copyright law and copying on the Internet – may I remind you that “piracy” is not downloading music from the Web. I’d like to suggest familiarizing oneself with this helpful infographic.

On Mozilla, DRM and irrelevance


A sad day has come – Mozilla has announced they are bringing DRM, via EME, to Firefox, due to fears that without it its users will not be able to access some content, and will hence turn from Firefox towards other browsers.

And while Mozilla goes to great lengths to band-aid this situation as much as possible, a spoonful of sugar won’t make this medicine go down.

Defective by design

First of all, let’s state the obvious: DRM never really works. It can’t – it’s like trying to show something to a user without showing it to the user. The very idea is absurd, and in the digital world unworkable. What DRM does do well is create problems for paying users, for the free software community, and beyond.

Mozilla knows this; they’re techies. Hence all the effort to make it seem as “detached” from Firefox itself as possible.

Mozilla Chromium

What they do not appreciate, apparently, is that for a long time there have been fewer and fewer reasons to use Firefox instead of, say, Chromium (or other similar browsers). From the end-users’ perspective, Chromium is faster and leaner, and it has many of the same extensions available… And between the interface changes (copying Google and bringing grief to those of us who took the time to customize their Firefox experience) and the versioning scheme changes (copying Google and bringing grief to those of us who have to support it in their infrastructure), Firefox is becoming more and more a Chromium look-and-feel-alike, instead of the groundbreaking web browser it used to be.

Why use the copy if you can get the real deal?

A question of trust

For me, that reason was always freedom and trust. I trusted Mozilla to protect and defend my freedoms on-line. And I supported this, like many others, for instance by using Firefox and installing it for my family members and friends. For some time now, Mozilla has been making moves that strain that trust. And for me personally, introducing DRM to Firefox might just be the straw that breaks the camel’s back.

That means that as soon as I find a fitting, freedom-preserving replacement, I might start installing that for my friends and family.

Will they complain that some websites do not work, that some videos do not play? Yes they will. But we’ve been through that already – years ago, when Mozilla was taking back the web. Back when Mozilla was about making the web more open, fighting walled-gardens of content, upholding the principles of open web.

And back then, we won.

The value of Mozilla

Mozilla never seemed to be about the numbers – the accounts, the zeroes. Mozilla was about values, even when that meant some content was harder to access on Firefox. Those values – not the number of users, and not sponsorship deals! – were what made Mozilla relevant.

Mozilla and W3C have got things backwards, it was Hollywood that needed to worry about being irrelevant. – Will Hill’s comment in a Diaspora thread

A decade ago we were able to make a change by promoting an open and standards-compliant browser in a world where the whole Internet seemed written for a closed, non-standard blue E. We were able to do that by standing up for Mozilla each time we noticed a website that didn’t work.

Today, Mozilla is not standing up for us in a world where choice and control are at risk.

And in the grand scheme of things, what the free and open Internet really needs is not yet another mobile operating system, but a browser that respects and protects the values and ideas that are at the very heart of the open web.

Mozilla needs to stand on principle, or it will have no standing at all.

Not-quite-good-enough-Mundial


I had been invited to join NETmundial, held a couple of weeks ago in São Paulo. It’s been an interesting learning experience for – well, I guess for all involved parties (a.k.a. “multiple stakeholders”; “multistakeholderism” was the buzzword du jour). Sadly, not much more than that, though.

When the most resounding message in statements made from the stage by organisers and high-profile guests is that the outcome document has to be “good enough”, that sends a strong signal that mediocrity is to be expected.

In that regard, nobody was disappointed.

Now, I do not have as black an opinion of NETmundial as La Quadrature du Net; I even feel that Smári McCarthy’s view that “the entire conference was a waste of time” goes a wee bit too far.

Still I am far from the optimism expressed by the Polish Ministry of Administration and Digital Affairs (among others). Here’s why.

Background

It would be hard not to notice two prevailing contention issues in, of and about the Internet during the last year or so: privacy and net neutrality.

Both governments and corporations are highly interested in each; in each, different governments and corporate entities have (or claim to have) different interests. And – most importantly – both are inseparably connected to human rights in the digital era, and to the future of the Internet as a whole.

Discussion of these issues, especially privacy, gained much steam after Edward Snowden’s revelations about overreaching mass surveillance programmes run by the US National Security Agency.

In response to, among others, these revelations, in late 2013 Brazilian president Dilma Rousseff announced plans to host a global Internet governance meeting, which came to be The Global Multistakeholder Meeting on the Future of Internet Governance, a.k.a. NETmundial.

At the same time, for the last few years a debate had been happening in Brazil around the Marco Civil da Internet. The debate hinged on the very same two crucial issues – privacy and network neutrality. A few weeks before NETmundial the bill cleared the Brazilian congress, and it was passed into law on the first day of NETmundial, April 23rd. The bill contains strong protections of network neutrality and privacy on the Internet.

A few weeks before NETmundial, the European Parliament voted for a bill that would (among other things) protect network neutrality in the EU.

Process

The process seemed thought-through and geared towards multistakeholderism. The idea was to gather as many people, institutions, NGOs and governments interested in Internet governance as possible, get their input, and prepare a single document outlining the principles and the roadmap for Internet governance.

RFC

Discussion had started long before the April conference. First, a call for submissions was made (around 180 submissions were received, including mine). Each had to refer either to principles, or to the roadmap.

Then, a first draft version of the outcome document was published and opened for comments. Hundreds of these flowed in, and the call for comments ended directly before the conference itself.

Plenary

Finally, the conference was organised as a single-track, massive (more than 800 people in attendance) plenary. After the usual official statements (made by – among others – Mrs. Rousseff, Sir Tim Berners-Lee, Vint Cerf, Nenna Nwakanma, and representatives of several governments, including the Polish Minister of Administration and Digital Affairs), a call for comments – this time submitted in person, via microphones – was opened and continued for the better part of the 2 days.

There were 4 microphones: one for civil society, one for governments, one for academia and the technical community, and one for business. There were about 200 representatives of each of these groups in the room, and each group was represented more-or-less equally in the composition of the group of people on stage, running the event. Microphones were called upon sequentially, and each speaker had 2 minutes (later reduced to 1.5 minutes) to voice their comment.

Interestingly, remote hub participants were also offered the floor (via an audio-video link) after each microphone call sequence, and there were quite a few quality “remote” remarks that added real value to the proceedings.

Each comment, each word, was transcribed and directly shown on-screen. All transcripts are also available on-line, which is a boon for transparency and accountability.

“It’s who counts the votes”

After the plenary ended for the day, all the comments were processed and merged with the outcome document draft by the High-Level Multistakeholder Committee. Sadly, while in the plenary every group had the same power and the same amount of time to voice their concerns, things changed in the Committee: there were 3 representatives each from civil society, the technical community, academia, business, and (surprisingly) “international organisations” (like the… European Commission!). However, there were 12 representatives of governments.

And the Committee meeting was neither recorded nor transcribed. Every NETmundial participant could be in the room the Committee was working in, but they didn’t have a voice.

There goes multistakeholderism, accountability and transparency, out the window.

Content

In the comments, especially those voiced in the plenary, both net neutrality and privacy/mass surveillance issues were not only present, but – I would say – prevalent. While most comments in support of enshrining network neutrality and including strong wording against mass surveillance in the outcome document came (unsurprisingly) from the civil society, there were such voices also from governments, academia, technical community and business, including this great tidbit by Mr. James Seng:

businesses should also be protected from being coerced by their government or any other legal authorities into mass surveillance.

…and this great comment, coming (surprisingly) not from civil society, but from the government side – by Mr Groń, representing Polish Ministry of Administration and Digital Affairs (which does seem to get it as far as Internet is concerned):

Text in the current form may suggest that there might be mass surveillance interception and collection programs which are consistent with human rights and democratic values. By definition, mass surveillance is not consistent with human rights and democratic values.

The rule of law and democratic values states that surveillance must respect specific and strict rules. There must be specific legislation setting limits of powers of surveillance authorities and providing necessary protection for citizens’ rights. Use of surveillance mechanisms must be under supervision of court. Such mechanisms may be used only in a case of reasonable suspicion of committing a crime and only against specific person or persons.

Mechanism used must be proportional and may be used only for specific time period.

Many comments called for explicit acknowledgement of Edward Snowden’s role in the conception of NETmundial. Many others for outright calling access to the Internet a human right. Several about the need to connect developing nations.

I also took to the microphone to underline the issue of walled-gardens and consequent growing balkanisation of the Internet.

Of course, voices advocating stronger protection of imaginary property were also there, but (and again, this is my subjective take on it) there were far fewer of them than one would have expected.

And of course there were pro-censorship statements, thinly veiled behind the usual “think of the children” (Tunisia) and “the right of the government to decide what is best for the people” (China).

Outcome

As good as the comments were, the outcome document is sadly very disappointing. There was a strong urge to build a consensus around the document, which obviously meant that certain things were hard to introduce – but during the work of the committees merging comments with draft documents there were several positive changes introduced, including strong language against mass surveillance both in the Roadmap, and in the Principles. The latter being most clear-cut:

Mass surveillance is not compatible with the right to privacy or the principle of proportionality.

Then the draft document, merged and polished by the respective Principles and Roadmap committees, went under consideration by the High-Level Multistakeholder Committee. And that’s where things got cut and mangled. The strong anti-mass-surveillance language disappeared, leaving only a watered-down version that can be read as suggesting mass surveillance can be carried out in a way that is compatible with human rights law.

Make no mistake – this is due to vehement opposition to such strong condemnation of mass surveillance, voiced by none other than the United States. The US representative went as far as to state that in the view of the US (compare and contrast with the Polish statement above):

Mass surveillance not always a violation of privacy.

For the same reason there is no acknowledgement of Edward Snowden in the document, of course. And, of course, these positions were voiced unequivocally only at the HLMC meeting, which was neither recorded nor transcribed.

Net neutrality got the boot and was only included as a “point to be further discussed beyond NETmundial” (along with roles of stakeholders, jurisdiction issues and benchmarking).

Finally, intermediary liability only got a weak acknowledgement, anchored in “economic growth, innovation, creativity and free flow of information”, instead of human rights (like freedom of expression or privacy):

Intermediary liability limitations should be implemented in a way that respects and promotes economic growth, innovation, creativity and free flow of information.

Little wonder, then, that civil society organisations decided to voice their disappointment with the outcome document in a common statement; its last sentence seems a fitting summary:

We feel that this document has not sufficiently moved us beyond the status quo in terms of the protection of fundamental rights, and the balancing of power and influence of different stakeholder groups.

Conclusions

The document is far from satisfactory, especially in the context of the very reasons NETmundial was conceived (mass dragnet surveillance by the US), and the legislative work being done around network neutrality (including Marco Civil and the Europarliament vote). And as far as privacy and mass surveillance are concerned, we have known of their rising importance for more than a decade. Time to up our game.

With the FCC proposing watered-down and meaningless net neutrality rules during the NETmundial proceedings, US agencies blatantly advocating more surveillance, and a smartphone remote “kill-switch” law being passed in California, NETmundial could have sent a strong, unambiguous signal about the need to protect human rights in the digital domain as well.

Instead, due to political pressure to find a compromise, however mediocre and meaningless (dubbed an “overwhelmingly rough consensus”), the outcome document doesn’t really introduce any new quality to the debate.

To some extent, though, it’s the journey that counts.

NETmundial was as much an Internet governance meet-up, as an experiment in multistakeholderism. And even though it was slanted (due to, among others, the HLMC having a large over-representation of governments), even though it was far from perfect, even though the process could have been better designed, it is still an experiment we can learn a lot from.

I feel that somewhere along the road NETmundial organisers missed the fact that:

Multistakeholderism is a framework and means of engagement, it is not a means of legitimization. – via Wikipedia

With eyes on the prize of a consensual outcome document, there was a vague feeling that civil society had been invited to the table to legitimize the process and the outcome, and that there were few concessions that would not be made to keep all parties at the table.

It eventually turned out a bit better, and I consider the fact that the US had to unequivocally advocate mass surveillance to be one of the positive outcomes of this meeting. The king had to acknowledge his lack of clothing.

While it is hard to disagree with Jeremié Zimmerman, writing for La Quadrature du Net:

Governments must consider the Internet as our common good, and protect it as such, with no compromise.

…we can, and should, learn from NETmundial. As Human Rights Watch put it:

What was evident throughout the two days of discussions in São Paulo is that a “multistakeholder” approach to Internet governance – however vague a term, or however difficult a concept to implement – is a far more inclusive and transparent approach than any process where only governments have a seat at the table

I think I’ll finish this off with a question raised by Smári McCarthy:

We’re going to need to do something better. The people running OurNETmundial were doing a fairly good job of drawing attention to the real issues. Perhaps OurNETmundial should become an event. But where? When? By whom? And how do we avoid cooption?

Irresponsible non-disclosure

This is an ancient post, published more than 4 years ago.
As such, it might not anymore reflect the views of the author or the state of the world. It is provided as historical record.

Yesterday Bloomberg broke the news that the NSA is said to have known about Heartbleed for months or years, without telling anybody – and the wheels of the media and blogosphere have started to churn out reactions ranging from surprised through shocked to outraged.

Frankly, I am most surprised by the fact that anybody is surprised. After Snowden’s revelations we all should have gotten used to the fact that what once was crazy tin-foil-hat paranoia is today entirely within the realm of the possible.

Even less surprisingly, a quick denial was issued on behalf of the NSA. Regular smoke and mirrors, as anybody could have expected, but with one very peculiar – and telling – paragraph (emphasis mine):

In response to the recommendations of the President’s Review Group on Intelligence and Communications Technologies, the White House has reviewed its policies in this area and reinvigorated an interagency process for deciding when to share vulnerabilities. This process is called the Vulnerabilities Equities Process. Unless there is a clear national security or law enforcement need, this process is biased toward responsibly disclosing such vulnerabilities.

What this means is that when a bug is found by a “security” agency, it might not get responsibly disclosed. If “there is a clear national security or law enforcement need”, it might be used in a weaponized form instead.
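It is worth appreciating how little it takes to create such a weaponizable flaw. Heartbleed boiled down to the OpenSSL heartbeat handler trusting a client-supplied length and reading past the end of the actual payload, leaking adjacent memory. Here is a deliberately simplified Python model of that class of bug – not actual OpenSSL code, just an illustration of a trust-the-attacker bounds error:

```python
# Simplified model of a Heartbleed-style over-read (NOT actual OpenSSL code).
# "Server memory" holds the heartbeat payload right next to unrelated secrets.

def handle_heartbeat(memory: bytes, payload_offset: int, claimed_len: int) -> bytes:
    # Vulnerable: trusts the length claimed by the client instead of
    # checking it against the real payload size before echoing it back.
    return memory[payload_offset:payload_offset + claimed_len]

memory = b"PING" + b"secret-session-key"   # 4-byte payload, secrets adjacent
honest = handle_heartbeat(memory, 0, 4)    # an honest client asks for 4 bytes
attack = handle_heartbeat(memory, 0, 22)   # an attacker claims a bigger length

print(honest)  # b'PING'
print(attack)  # b'PINGsecret-session-key' -- the over-read leaks the secret
```

The fix is a single bounds check; the damage from not disclosing it is measured in years of exposed private keys.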

With the “America under attack” mentality and the ongoing “War on Terror” waged across the globe, we can safely assume that there is “a clear national security need”, at least in the minds of those making these decisions.

And we need to remember that if there is a bug, and somebody has found it (but not disclosed it), somebody else will find it, eventually. It might be Neel Mehta or Marek Zibrow, who then discloses it responsibly; or it might be Joe Cracker, who exploits it or sells it to other shady organisations.

And because we all use the same encryption mechanisms, the same protocols and often the same implementations, it then will be used against us all.

Now, it is crucial to understand that it’s not about NSA and Heartbleed. It’s about all “security” agencies and any software bugs. By not responsibly disclosing discovered bugs “security” agencies make us all considerably less secure.

Regardless of whether NSA has or hasn’t known about Heartbleed, such a non-disclosure policy is simply irresponsible – and unacceptable.

Ecologic, Ford and surveillance

This is an ancient post, published more than 4 years ago.
As such, it might not anymore reflect the views of the author or the state of the world. It is provided as historical record.

A few months ago Jim Farley, a Ford representative, blurted out in a panel at CES that:

We know everyone who breaks the law, we know when you’re doing it. We have GPS in your car, so we know what you’re doing. By the way, we don’t supply that data to anyone.

Comments were not very positive, to say the least, and both Mr Farley and Ford’s PR manager retracted the statement immediately – underlining that gathered data would only be used after anonymisation, or only with the driver’s explicit consent. In other words: “this is no surveillance”.

Of course, once the data reaches Ford’s servers the only thing keeping Ford from giving them away is their promise. Seems pretty thin to me – especially with the money insurance providers can throw at this (not to mention law enforcement).

Ford isn’t the only company that strives to “help” drivers by gathering data on them. A Polish startup, Ecologic (winners of the Warsaw Startup Fest), had this to say (emphasis mine):

Damian Szymański, Gazeta.pl: What is Ecologic’s idea and how can it help us all lower costs of using cars?

Emil Żak, Robert Bastrzyk: Today nobody keeps track of the costs of using their cars. Turns out that annually it can add up to more than the value of the car itself. Tires, petrol, insurance, repairs, etc. It all costs. Our device analyses every action of the driver. It signals what we have done wrong and suggests what we can change to lower the cost of petrol, for example. Moreover, we have access to this data 24h.

Total surveillance?

Not at all. The question is how the driver drives their car. Ecologic is a mobile app, online portal and a device that you connect in your car. Thanks to that we can have all sorts of data, for example about combustion…

What kinds of data are collected? Ecologic’s website claims that the device is “equipped with the motion sensor, accelerometer, SIM card, cellular modem and GPS”, and that:

The system immediately begins recording operating data of the vehicle, the GPS position and driving techniques in real-time.

So the idea is to collect data like GPS position, acceleration and braking, vehicle utilization and driving technique, and send these off to Ecologic’s servers. Seems that it doesn’t differ wildly from what Ford has in stock, with an (apparently) nice addition of the driver being able to check on their data and stats. Sounds great!
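To make the privacy stakes concrete, here is a sketch of the kind of record such a device could be transmitting on every trip. The field names and values are my illustrative guesses based on the sensors Ecologic lists, not the vendor’s actual protocol:

```python
# Hypothetical telemetry record for an Ecologic-style in-car device.
# Field names are illustrative guesses, NOT the vendor's actual protocol.
from dataclasses import dataclass, asdict
import json
import time

@dataclass
class TripSample:
    device_id: str     # tied to a specific car, hence a specific driver
    timestamp: float
    lat: float         # GPS position...
    lon: float         # ...enough to reconstruct every trip, stop by stop
    speed_kmh: float
    accel_g: float     # harsh braking / acceleration events
    fuel_lph: float    # fuel consumption ("combustion" in the interview)

sample = TripSample("ECO-0042", time.time(), 52.2297, 21.0122, 87.0, -0.4, 6.1)
payload = json.dumps(asdict(sample))  # what leaves the car over cellular
print(payload)
```

Every such sample ties a position and a driving event to a single identifiable car. Once a stream of these payloads sits on a third party’s servers, the only thing standing between it and an insurer or law enforcement is policy.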

However, a question arises: what happens with the data? Even if Ford’s “promise” not to share with anybody seems thin, Ecologic doesn’t even try to hide that the real money is in selling access to gathered data.

In the “For Who” (sic) section of their website we can find the real target group (emphasis mine):

  • Private users – keep an eye on the young driver in the family
  • Small business – fast and easy management of vehicles
  • Fleets – keep the fleet under control & save costs
  • Leasing Companies – lower the accident rate and track miles
  • Insurance – give discounts on no-claims & safe driving

Of course one very important group is missing from that list: I am sure law enforcement will be quick to understand the utility of requiring that any and all cars install the device – no more costly traffic enforcement cameras to deal with, without losing the ability to issue speeding tickets. After all, would Ecologic deny law enforcement access to the data?

Ah, but Ecologic cares about drivers’ impression of being surveilled:

Your driver after work can switch off live tracking to feel conftable without impression that he is “spied”. A button on the mobile app allows the driver to indicate that the current trip is personal and help you to track private km. (sic!)

So the driver can “switch off live tracking”, but the system will nonetheless help you (i.e. the employer) track “private km”? So these data also have to land in Ecologic’s servers, eh? Apart from the employer, who else will have access to this “private trip” data? Insurance companies? Law enforcement goes without saying, of course.

In the interview, Ecologic claims that:

It’s all about motivation and healthy competition. We need to change the way we think. Instead of a stick, we want to give people two carrots.

It’s a pity that for the drivers themselves this translates into three sticks – employer, insurance provider and law enforcement.

Blurry line between private service and public infrastructure

This is an ancient post, published more than 4 years ago.
As such, it might not anymore reflect the views of the author or the state of the world. It is provided as historical record.

This is my NetMundial content proposal, with some typos fixed and minor edits.

Abstract

ICANN and IANA decentralisation efforts mark an important milestone in the evolution of the Internet: there is finally widespread recognition of the fact that centrally controlled bodies pose a threat to the free and open nature of the Internet. ICANN and IANA are, however, but a small part of a much larger problem.

More and more, communication platforms and methods are secondarily centralized; that is, in a network decentralized on lower protocol levels there are services being run that are centralized on higher levels. Running on a network based on open standards are closed services that are then used by other entities as the base for their own services.

In other words, some private services – offering, for example, user authentication methods – are being used as a de facto infrastructure by large numbers of other entities.

If we recognize the dangers of a centrally-controlled domain name system, we should surely recognize the danger of this phenomenon as well.

Document

It is of great value that the importance of decoupling IP address management and domain name system management from a single state actor has been recognized, and that there is currently a strong push towards multistakeholderism in this area.

There is, however, a secondary emergent centralization happening on the Internet that can potentially pose a comparable, or even bigger, threat to the interconnected, open and independent nature of this global network.

This centralization is harder to perceive as dangerous, as it is not being actively supported by any state actor; hence, it falls under the radar for many Internet activists and technologists who would react immediately had a similar process been facilitated by a government. It does, however, have the potential to bring negative effects similar to a state-sponsored centralization of infrastructure.

Another reason this process happens unnoticed, and its possible negative effects get downplayed, is that it is fluid and emergent from the behaviour of many actors, enforced by the network effect.

This process is most visibly exemplified by Facebook gathering over 1 billion users by providing a centrally-controlled walled garden, while at the same time offering an API to developers willing to tap into this vast resource, for example to use it as an authentication service. Now, many if not most Internet services requiring log-in offer Facebook log-in as one of their options. Some (a growing number) offer Facebook as the only option. Many offer a commenting system devised by Facebook that does not allow anonymous comments – a user has to have a Facebook account to be able to partake in the discussion.
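The dependency is easy to see in code. A minimal sketch of a hypothetical service (not any real site’s implementation) whose only identity provider is Facebook:

```python
# Hypothetical sketch: a service whose only identity provider is Facebook.
# If that single provider bans the user, changes its API terms, or goes
# down, every dependent service loses its entire log-in path with it.
from typing import Optional

PROVIDERS = {"facebook"}  # the only supported option

def login(provider: str, provider_token: Optional[str]) -> str:
    if provider not in PROVIDERS:
        # Anonymous, e-mail, or federated users are simply locked out.
        raise ValueError(f"unsupported identity provider: {provider}")
    if provider_token is None:
        # e.g. a provider outage, or an account banned by the provider
        raise RuntimeError("cannot log in: the sole provider is unavailable")
    return f"session-for-{provider}-user"

print(login("facebook", "tok123"))  # works only as long as Facebook does
```

The single-element `PROVIDERS` set is the whole problem in miniature: the service has outsourced the decision of who may participate to one external company.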

Similarly, Google is forcing Google+ on YouTube users; to a lesser extent, Google Search is used by a swath of Internet services as their default internal search engine (that is, to search their own website or service). GMail is also by far the most popular e-mail and XMPP service, which gives Google immense power over both.

These are two examples of services offered by private entities (in this case, Google and Facebook) that have become de facto public infrastructure, meaning that an immense number of other services rely on them and require them to work.

If we recognize the danger of a single state actor controlling ICANN or IANA, we can surely recognize the danger of a single actor (regardless of whether it is a state actor or not) controlling such an important part of Internet infrastructure.

Regardless of the reasons why this situation emerged (users’ lack of tech-savviness, service operators’ preference for the easiest and cheapest solutions to implement and integrate, etc.), it causes several problems for the free and open Internet:

  • it hurts resilience

If such a large part of services and actors depend on a single service (like Facebook or GMail), this in and of itself introduces a single point of failure. It is not entirely in the realm of the impossible for those companies to fail – who will, then, provide the service? We have also seen both of them (like any other large tech company) have large-scale downtime events, taking down services based on them as well.

  • it hurts independence

In the most basic sense, any user of a service based on these de facto infrastructures has to comply with and agree to the underlying service (i.e. Facebook, Google) Terms of Service. If many or most of Internet services have that requirement, users and service operators alike lose independence over what they accept.

  • it hurts openness

Operators of such de facto infrastructures are not obliged to provide their services in an open and standard manner – running mostly in the application layer, these services usually block any attempts at interoperation. Examples include Twitter changing their API TOS to shut off certain types of applications, Google announcing the planned shut-off of XMPP server-to-server communication, and Facebook using XMPP for its internal chat service with server-to-server communication disabled.

  • it hurts accountability and transparency

With such immense and binary (“either use it, or lose it”) control over users’ and other service providers’ data, de facto infrastructure operators do not have any incentives to share information on what is happening with the data they gather. They also have no incentives to be transparent and open about their future plans or protocols used in their services. There is no accountability other than the binary decision to “use it or lose it”, which is always heavily influenced by the network effect and the huge numbers of users of these services.

  • it hurts predictability

With no transparency, no accountability, and a lack of standardization, such de facto infrastructure operators can act in ways that maximize their profits, which in turn can be highly unpredictable, and not in line with users’ or the global Internet ecosystem’s best interests. Twitter’s changing of the API TOS is a good example here.

  • it hurts interoperability

Such de facto infrastructure operators are strongly incentivised to shut off any interoperability attempts. The larger the number of users of their service, the stronger the network effect, the more other services use their service, and the bigger the influence they can have on the rest of the Internet ecosystem. Social networks are a good example here – a Twitter user cannot communicate with a Facebook user, unless they also have an account on the other network.

This is obviously not the case with e-mail (I can run my own e-mail server), at least not yet. The more people use a single provider here (i.e. GMail), the stronger that provider becomes, and the easier it would be for its operator to shut off interoperability with other providers. This is exactly what Google is doing with XMPP.

  • it hurts innovation

Lack of predictability, openness and independence obviously also hurts innovation. What used to be a free and open area of innovation is more and more becoming a set of closed-off walled-gardens controlled by a small number of powerful actors.

It is also worth noting that centralized infrastructure on any level (including the level of de facto infrastructure discussed herein) creates additional problems on human rights level: centralized infrastructure is easy to surveil and censor.


Hence, the first question to be asked is this: when does a private service become de facto public infrastructure?

At this point this question remains unanswered, and there is not a single Internet Governance body, or indeed any actor, able to reply to it authoritatively. Nevertheless, we are all in dire need of an answer to this question, and I deem it a challenge for Internet Governance and an important topic that should be included in any Internet Governance Forum, now and in the future.


The second question that ever more urgently requires an answer if we are to defend the open and not balkanized Internet is: what should be done about private services that have become de facto public infrastructure?

This question is also as of yet unanswered, but there are several possible proposals, including treating such situations as monopolies and breaking them up (thus handling them outside Internet Governance), requiring a public interoperable API to be made available to other implementers, etc. This is perhaps not exactly in the purview of Internet Governance; it is, however, crucial for the Internet as a whole, and I propose it be treated as a challenge to be at least considered at IGFs henceforth.