
Songs on the Security of Networks
a blog by Michał "rysiek" Woźniak

On Mozilla, DRM and irrelevance

This is an ancient post, published more than 4 years ago.
As such, it might no longer reflect the views of the author or the state of the world. It is provided as a historical record.

A sad day has come – Mozilla has announced they are bringing DRM to Firefox, via the W3C’s EME (Encrypted Media Extensions), due to fears that without it users will not be able to access some content, and hence will turn from Firefox to other browsers.

And while Mozilla goes to great lengths to band-aid this situation as much as possible, a spoonful of sugar won’t make this medicine go down.

Defective by design

First of all, let’s state the obvious: DRM never really works. It can’t – it’s like trying to show something to a user without showing it to the user. The very idea is absurd, and in the digital world unworkable. What DRM does do well is create problems – for paying users, for the free software community, and beyond.

Mozilla knows this, they’re techies. Hence all the effort to make it seem as “detached” from Firefox itself as possible.

Mozilla Chromium

What they do not appreciate, apparently, is that for a long time there have been fewer and fewer reasons to use Firefox instead of, say, Chromium (or other similar browsers). From the end-user’s perspective, Chromium is faster and leaner, and it has many of the same extensions available… And between the interface changes (copying Google and bringing grief to those of us who took the time to customize our Firefox experience) and the versioning scheme changes (copying Google and bringing grief to those of us who have to support it in our infrastructure), Firefox is becoming more and more a Chromium look-and-feel-alike, instead of the groundbreaking web browser it used to be.

Why use the copy if you can get the real deal?

A question of trust

For me, that reason was always freedom and trust. I trusted Mozilla to protect and defend my freedoms online. And, like many others, I supported this, for instance by using Firefox and installing it for my family members and friends. For some time now, Mozilla has been making moves that strain that trust. And for me personally, introducing DRM to Firefox might just be the straw that breaks the camel’s back.

That means that as soon as I find a fitting, freedom-preserving replacement, I might start installing that for my friends and family.

Will they complain that some websites do not work, that some videos do not play? Yes, they will. But we have been through that already – years ago, when Mozilla was taking back the web. Back when Mozilla was about making the web more open, fighting the walled-gardens of content, upholding the principles of the open web.

And back then, we won.

The value of Mozilla

Mozilla never seemed to be about the numbers – the accounts, the zeroes. Mozilla was about values, even when that meant some content was harder to access in Firefox. Those values – not user numbers, and not sponsorship deals! – were what made Mozilla relevant.

Mozilla and W3C have got things backwards, it was Hollywood that needed to worry about being irrelevant. – Will Hill’s comment in a Diaspora thread

A decade ago we were able to make a change by promoting an open and standards-compliant browser in a world where the whole Internet seemed written for a closed, non-standard blue E. We were able to do that by standing up for Mozilla each time we noticed a website that didn’t work.

Today, Mozilla is not standing up for us in a world where choice and control are at risk.

And in the grand scheme of things, what the free and open Internet really needs is not yet another mobile operating system, but a browser that respects and protects the values and ideas at the very heart of the open web.

Mozilla needs to stand on principle, or it will have no standing at all.

Not-quite-good-enough-Mundial

This is an ancient post, published more than 4 years ago.
As such, it might no longer reflect the views of the author or the state of the world. It is provided as a historical record.

I was invited to join NETmundial, which took place a couple of weeks ago in São Paulo. It was an interesting learning experience for – well, I guess for all involved parties (a.k.a. “multiple stakeholders”; “multistakeholderism” was the buzzword du jour). Sadly, not much more than that.

When the most resounding message in statements made from the stage by organisers and high-profile guests is that the outcome document has to be “good enough”, that sends a strong signal that mediocrity is to be expected.

In that regard, nobody got disappointed.

Now, I do not have as black an opinion of NETmundial as La Quadrature du Net; I even feel that Smári McCarthy’s view that the entire conference was a waste of time goes a wee bit too far.

Still, I am far from the optimism expressed by the Polish Ministry of Administration and Digital Affairs (among others). Here’s why.

Background

It would be hard not to notice the two prevailing contentious issues in, of and about the Internet during the last year or so: privacy and net neutrality.

In both, governments and corporations are highly interested; in both, different governments and corporate entities have (or claim to have) different interests. And – most importantly – both are inseparably connected to human rights in the digital era, and to the future of the Internet as a whole.

Discussion of these issues, especially privacy, gained much steam after Edward Snowden’s revelations about overreaching mass surveillance programmes run by the US National Security Agency.

In response to, among other things, these revelations, in late 2013 Brazilian president Dilma Rousseff announced plans to host a global Internet governance meeting, which came to be The Global Multistakeholder Meeting on the Future of Internet Governance, a.k.a. NETmundial.

At the same time, for the last few years a debate had been happening in Brazil around Marco Civil da Internet. The debate hinged on the very same two crucial issues – privacy and network neutrality. A few weeks before NETmundial the bill cleared the Brazilian congress, and it was signed into law on the first day of NETmundial, April 23rd. The bill contains strong protections for network neutrality and privacy on the Internet.

Also a few weeks before NETmundial, the European Parliament voted for a bill that would (among other things) protect network neutrality in the EU.

Process

The process seemed well thought-through and geared towards multistakeholderism. The idea was to gather as many people, institutions, NGOs and governments interested in Internet governance as possible, get their input, and prepare a single document outlining the principles and the roadmap for Internet governance.

RFC

Discussion had started long before the April conference. First, a call for submissions was made (around 180 submissions were received, including mine). Each had to address either the principles or the roadmap.

Then a first draft of the outcome document was published and opened for comments. Hundreds of these flowed in, and the call for comments ended directly before the conference itself.

Plenary

Finally, the conference was organised as a single-track, massive (more than 800 people in attendance) plenary. After the usual official statements (made by – among others – Mrs. Rousseff, Sir Tim Berners-Lee, Vint Cerf, Nenna Nwakanma, and representatives of several governments, including the Polish Minister of Administration and Digital Affairs), a call for comments – this time submitted in person, via microphones – was opened, and it continued for the better part of the two days.

There were four microphones: one for civil society, one for governments, one for academia and the technical community, and one for business. There were about 200 representatives of each of these groups in the room, and each group was represented more or less equally in the group of people on stage running the event. Microphones were called upon sequentially, and each speaker had 2 minutes (later reduced to 1.5 minutes) to voice their comment.

Interestingly, remote hub participants were also offered the floor (via an audio-video link) after each microphone call sequence, and there were quite a few quality “remote” remarks that added real value to the proceedings.

Each comment, each word, was transcribed and directly shown on-screen. All transcripts are also available on-line, which is a boon for transparency and accountability.

“It’s who counts the votes”

After the plenary ended for the day, all the comments were processed and merged into the outcome document draft by the High-Level Multistakeholder Committee. Sadly, while in the plenary every group had the same power and the same amount of time to voice their concerns, things changed in the Committee: there were 3 representatives each from civil society, the technical community, academia, business, and (surprisingly) “international organisations” (like the… European Commission!). However, there were 12 representatives of governments.

And the Committee meeting was neither recorded nor transcribed. Any NETmundial participant could be in the room the Committee was working in, but they had no voice.

There goes multistakeholderism, accountability and transparency, out the window.

Content

In the comments, especially those voiced in the plenary, both net neutrality and privacy/mass surveillance issues were not only present but – I would say – prevalent. While most comments in support of enshrining network neutrality and including strong wording against mass surveillance in the outcome document came (unsurprisingly) from civil society, there were such voices also from governments, academia, the technical community and business, including this great tidbit by Mr. James Seng:

businesses should also be protected from being coerced by their government or any other legal authorities into mass surveillance.

…and this great comment, coming (surprisingly) not from civil society but from the government side – from Mr Groń, representing the Polish Ministry of Administration and Digital Affairs (which does seem to get it as far as the Internet is concerned):

Text in the current form may suggest that there might be mass surveillance interception and collection programs which are consistent with human rights and democratic values. By definition, mass surveillance is not consistent with human rights and democratic values.

The rule of law and democratic values states that surveillance must respect specific and strict rules. There must be specific legislation setting limits of powers of surveillance authorities and providing necessary protection for citizens’ rights. Use of surveillance mechanisms must be under supervision of court. Such mechanisms may be used only in a case of reasonable suspicion of committing a crime and only against specific person or persons.

Mechanism used must be proportional and may be used only for specific time period.

Many comments called for explicit acknowledgement of Edward Snowden’s role in the conception of NETmundial. Many others for outright calling access to the Internet a human right. Several about the need to connect developing nations.

I also took to the microphone to underline the issue of walled-gardens and consequent growing balkanisation of the Internet.

Of course, voices advocating stronger protection of imaginary property were also there, but (and again, this is my subjective take on it) there were far fewer of them than one would have expected.

And of course there were pro-censorship statements, thinly veiled behind the usual “think of the children” (Tunisia) and “the right of the government to decide what is best for the people” (China).

Outcome

As good as the comments were, the outcome document is sadly very disappointing. There was a strong urge to build consensus around the document, which obviously meant that certain things were hard to introduce – but during the work of the committees merging comments into the draft documents, several positive changes were introduced, including strong language against mass surveillance both in the Roadmap and in the Principles. The latter was the most clear-cut:

Mass surveillance is not compatible with the right to privacy or the principle of proportionality.

Then the draft document, merged and polished by the respective Principles and Roadmap committees, came under the consideration of the High-Level Multistakeholder Committee. And that’s where things got cut and mangled. The strong anti-mass-surveillance language disappeared, leaving only a watered-down version that can be read as suggesting mass surveillance can be carried out in a way that is compatible with human rights law.

Make no mistake – this is due to vehement opposition to such strong condemnation of mass surveillance, voiced by none other than the United States. The US representative went as far as to state that, in the view of the US (compare and contrast with the Polish statement above):

Mass surveillance not always a violation of privacy.

For the same reason there is no acknowledgement of Edward Snowden in the document, of course. And, of course, these positions were voiced unequivocally only at the unrecorded, untranscribed HLMC meeting.

Net neutrality got the boot and was only included as a “point to be further discussed beyond NETmundial” (along with roles of stakeholders, jurisdiction issues and benchmarking).

Finally, intermediary liability only got a weak acknowledgement, anchored in “economic growth, innovation, creativity and free flow of information”, instead of human rights (like freedom of expression or privacy):

Intermediary liability limitations should be implemented in a way that respects and promotes economic growth, innovation, creativity and free flow of information.

Little wonder, then, that civil society organisations decided to voice their disappointment with the outcome document in a common statement; its last sentence seems a fitting summary:

We feel that this document has not sufficiently moved us beyond the status quo in terms of the protection of fundamental rights, and the balancing of power and influence of different stakeholder groups.

Conclusions

The document is far from satisfactory, especially in the context of the very reasons NETmundial was conceived (mass dragnet surveillance by the US) and the legislative work being done around network neutrality (including Marco Civil and the European Parliament vote). And as far as privacy and mass surveillance are concerned, we have known of their rising importance for more than a decade. Time to up our game.

With the FCC proposing watered-down and meaningless net neutrality rules during the NETmundial proceedings, US agencies blatantly advocating more surveillance, and a smartphone remote “kill-switch” law being passed in California, NETmundial could have sent a strong, unambiguous signal about the need to protect human rights in the digital domain as well.

Instead, due to political pressure to find a compromise, however mediocre and meaningless (quipped to be an “overwhelmingly rough consensus”), the outcome document doesn’t really introduce any new quality to the debate.

To some extent, though, it’s the journey that counts.

NETmundial was as much an Internet governance meet-up as an experiment in multistakeholderism. And even though it was slanted (due to, among other things, the HLMC’s large over-representation of governments), even though it was far from perfect, even though the process could have been better designed, it is still an experiment we can learn a lot from.

I feel that somewhere along the road NETmundial organisers missed the fact that:

Multistakeholderism is a framework and means of engagement, it is not a means of legitimization. – via Wikipedia

With eyes on the prize of a consensual outcome document, there was a vague feeling that civil society had been invited to the table to legitimize the process and the outcome, and that there were few, if any, concessions that would not be made to keep all parties at the table.

It eventually turned out a bit better, and I count the fact that the US had to unequivocally advocate mass surveillance among the positive outcomes of this meeting. The king had to acknowledge his lack of clothing.

While it is hard to disagree with Jérémie Zimmermann, writing for La Quadrature du Net:

Governments must consider the Internet as our common good, and protect it as such, with no compromise.

…we can, and should, learn from NETmundial. As Human Rights Watch put it:

What was evident throughout the two days of discussions in São Paulo is that a “multistakeholder” approach to Internet governance – however vague a term, or however difficult a concept to implement – is a far more inclusive and transparent approach than any process where only governments have a seat at the table

I think I’ll finish this off with a question raised by Smári McCarthy:

We’re going to need to do something better. The people running OurNETmundial were doing a fairly good job of drawing attention to the real issues. Perhaps OurNETmundial should become an event. But where? When? By whom? And how do we avoid cooption?

Irresponsible non-disclosure

This is an ancient post, published more than 4 years ago.
As such, it might no longer reflect the views of the author or the state of the world. It is provided as a historical record.

Yesterday Bloomberg broke the news that the NSA is said to have known about Heartbleed for months or years without telling anybody – and the wheels of the media and the blogosphere started churning out reactions ranging from surprised through shocked to outraged.

Frankly, I am most surprised by the fact that anybody is surprised. After Snowden’s revelations we should all have gotten used to the fact that what was once crazy tin-foil-hat paranoia is today entirely within the realm of the possible.

Even less surprisingly, a quick denial was issued on behalf of the NSA. The usual smoke and mirrors, as anybody could have expected, but with one very peculiar – and telling – paragraph (emphasis mine):

In response to the recommendations of the President’s Review Group on Intelligence and Communications Technologies, the White House has reviewed its policies in this area and reinvigorated an interagency process for deciding when to share vulnerabilities. This process is called the Vulnerabilities Equities Process. Unless there is a clear national security or law enforcement need, this process is biased toward responsibly disclosing such vulnerabilities.

What this means is that when a bug is found by a “security” agency, it might not get responsibly disclosed. If “there is a clear national security or law enforcement need”, it might be used in a weaponized form instead.

With the “America under attack” mentality and the ongoing “War on Terror” waged across the globe, we can safely assume that there is “a clear national security need”, at least in the minds of those making these decisions.

And we need to remember that if there is a bug and somebody has found it (but not disclosed it), somebody else will find it eventually. It might be Neel Mehta or Marek Zibrow, who then discloses it responsibly; or it might be Joe Cracker, who exploits it or sells it to other shady organisations.

And because we all use the same encryption mechanisms, the same protocols and often the same implementations, it then will be used against us all.

Now, it is crucial to understand that this is not just about the NSA and Heartbleed. It’s about all “security” agencies and any software bugs. By not responsibly disclosing the bugs they discover, “security” agencies make us all considerably less secure.

Regardless of whether the NSA knew about Heartbleed or not, such a non-disclosure policy is simply irresponsible – and unacceptable.

Ecologic, Ford and surveillance

This is an ancient post, published more than 4 years ago.
As such, it might no longer reflect the views of the author or the state of the world. It is provided as a historical record.

A few months ago Jim Farley, a Ford representative, blurted out during a panel at CES that:

We know everyone who breaks the law, we know when you’re doing it. We have GPS in your car, so we know what you’re doing. By the way, we don’t supply that data to anyone.

The comments were not very positive, to say the least, and both Mr Farley and Ford’s PR manager retracted the statement immediately – underlining that gathered data would only be used after anonymisation, or only with the driver’s explicit consent. In other words, “this is no surveillance”.

Of course, once the data reaches Ford’s servers, the only thing keeping Ford from giving it away is their promise. That seems pretty thin to me – especially with the money insurance providers can throw at this (not to mention law enforcement).

Ford isn’t the only company why strives to “help” drivers by gathering data on them. A Polish startup, Ecologic (winners of the Warsaw Startup Fest), had this to say (emphasis mine):

Damian Szymański, Gazeta.pl: What is Ecologic’s idea and how can it help us all lower costs of using cars?

Emil Żak, Robert Bastrzyk: Today nobody keeps track of the costs of using their car. It turns out that annually they can add up to more than the value of the car itself. Tires, petrol, insurance, repairs, etc. It all costs. Our device analyses every action of the driver. It signals what we have done wrong and suggests what we can change to lower the cost of petrol, for example. Moreover, we have access to this data 24 hours a day.

Total surveillance?

Not at all. The question is how the driver drives their car. Ecologic is a mobile app, an online portal and a device that you connect in your car. Thanks to that we can have all sorts of data, for example about fuel consumption…

What kinds of data are collected? Ecologic’s website claims that the device is “equipped with the motion sensor, accelerometer, SIM card, cellular modem and GPS”, and that:

The system immediately begins recording operating data of the vehicle, the GPS position and driving techniques in real-time.

So the idea is to collect data like GPS position, acceleration and braking, vehicle utilisation, and driving technique, and to send these off to Ecologic’s servers. It doesn’t seem to differ wildly from what Ford has in stock, with the (apparently) nice addition of the driver being able to check their own data and stats. Sounds great!
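To make this concrete, here is a rough sketch of the kind of record such a device plausibly uploads. This is purely my own illustration – the field names and the schema are my assumptions, not Ecologic’s actual data model:

    # Illustrative only: an assumed schema for the kind of telemetry record
    # such a device would upload; not Ecologic's actual data model.
    from dataclasses import dataclass

    @dataclass
    class TelemetrySample:
        vehicle_id: str        # identifies the car (and, in practice, the driver)
        timestamp: float       # Unix time of the sample
        latitude: float        # GPS position...
        longitude: float       # ...precise enough to reconstruct every trip
        speed_kmh: float       # enough to issue a speeding ticket after the fact
        acceleration_g: float  # "driving technique": harsh braking, cornering
        fuel_lph: float        # instantaneous fuel consumption

    def upload(sample: TelemetrySample) -> None:
        """Send the sample to the vendor's servers. From this point on, access
        to it is a matter of the vendor's policy, not the driver's choice."""
        ...  # e.g. an HTTPS POST to the vendor's API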

However, a question arises: what happens with the data? Even if Ford’s “promise” not to share with anybody seems thin, Ecologic doesn’t even try to hide that the real money is in selling access to gathered data.

In the “For Who” (sic) section of their website we can find the real target group (emphasis mine):

  • Private users – keep an eye on the young driver in the family
  • Small business – fast and easy management of vehicles
  • Fleets – keep the fleet under control & save costs
  • Leasing Companies – lower the accident rate and track miles
  • Insurance – give discounts on no-claims & safe driving

Of course, one very important group is missing from that list: I am sure law enforcement will be quick to understand the utility of requiring that any and all cars have the device installed – no more dealing with costly traffic enforcement cameras, without losing the ability to issue speeding tickets. After all, would Ecologic deny law enforcement access to the data?

Ah, but Ecologic cares about drivers’ impression of being surveilled:

Your driver after work can switch off live tracking to feel conftable without impression that he is “spied”. A button on the mobile app allows the driver to indicate that the current trip is personal and help you to track private km. (sic!)

So the driver can “switch off live tracking”, but the system will nonetheless help you (i.e. the employer) track “private km”? So these data also have to land on Ecologic’s servers, eh? Apart from the employer, who else will have access to this “private trip” data? Insurance companies? Law enforcement goes without saying, of course.

In the interview, Ecologic claims that:

It’s all about motivation and healthy competition. We need to change the way we think. Instead of a stick, we want to give people two carrots.

It’s a pity that for the drivers themselves this translates into three sticks – employer, insurance provider and law enforcement.

Blurry line between private service and public infrastructure

This is an ancient post, published more than 4 years ago.
As such, it might no longer reflect the views of the author or the state of the world. It is provided as a historical record.

This is my NetMundial content proposal, with some typos fixed and minor edits.

Abstract

ICANN and IANA decentralisation efforts mark an important milestone in the evolution of the Internet: there is finally widespread recognition of the fact that centrally controlled bodies pose a threat to the free and open nature of the Internet. ICANN and IANA are, however, but a small part of a much larger problem.

More and more, communication platforms and methods are being secondarily centralized; that is, in a network decentralized at the lower protocol levels, services are being run that are centralized at the higher levels. Running on a network based on open standards are closed services that are then used by other entities as a base for their own services.

In other words, some private services – offering, for example, user authentication methods – are being used as a de facto infrastructure by large numbers of other entities.

If we recognize the dangers of a centrally-controlled domain name system, we should surely recognize the danger of this phenomenon as well.

Document

It is of great value that the importance of decoupling IP address management and domain name system management from a single state actor has been recognized, and that there is currently a strong push towards multistakeholderism in this area.

There is, however, a secondary, emergent centralization happening on the Internet that can potentially pose a comparable, or even bigger, threat to the interconnected, open and independent nature of this global network.

This centralization is harder to perceive as dangerous, as it is not being actively supported by any state actor; hence, it falls under the radar of many Internet activists and technologists who would react immediately had a similar process been facilitated by a government. It does, however, have the potential to bring negative effects similar to a state-sponsored centralization of infrastructure.

Another reason this process goes unnoticed, or its possible negative effects get downplayed, is that it is fluid and emergent from the behaviour of many actors, reinforced by the network effect.

This process is most visibly exemplified by Facebook gathering over a billion users by providing a centrally-controlled walled-garden, while at the same time offering an API to developers willing to tap into this vast resource, for example to use it as an authentication service. Now many, if not most, Internet services that require logging in offer Facebook log-in as one of the options. Some (a growing number) offer Facebook as the only option. Many offer a commenting system devised by Facebook that does not allow anonymous comments – a user has to have a Facebook account to be able to take part in the discussion.
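To make the dependency concrete, here is a minimal sketch of a service whose only log-in option is a single external identity provider. This is my own illustration, not code from any of the services mentioned, and the endpoint URL and parameter names are simplified placeholders for the real OAuth-style log-in dialog:

    # A minimal sketch of single-provider log-in; the endpoint and parameters
    # are simplified placeholders, not the real Facebook API.
    AUTH_PROVIDERS = {
        # The sole option offered to users: no account here, no access at all.
        "facebook": "https://www.facebook.com/dialog/oauth",
    }

    def login_redirect_url(provider: str, client_id: str, redirect_uri: str) -> str:
        """Build the URL the user is redirected to in order to log in.

        Every service built this way inherits the provider's Terms of Service,
        its outages, and its decisions: the provider becomes de facto
        infrastructure."""
        base = AUTH_PROVIDERS[provider]  # raises KeyError for any other identity
        return (f"{base}?client_id={client_id}"
                f"&redirect_uri={redirect_uri}&response_type=code")

    # Example: the only way in is through Facebook.
    print(login_redirect_url("facebook", "my-app-id", "https://example.com/callback"))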

Similarly, Google is forcing Google+ on YouTube users; to a lesser extent, Google Search is being used by a swath of Internet services as their default internal search engine (that is, used to search their own website or service). GMail is also by far the most popular e-mail and XMPP service, which gives Google immense power over both.

These are two examples of services offered by private entities (in this case, Google and Facebook) that have become de facto public infrastructure, meaning that an immense number of other services rely on them and require them to work.

If we recognize the danger of a single state actor controlling ICANN or IANA, we can surely recognize the danger of a single actor (regardless of whether it is a state actor or not) controlling such an important part of Internet infrastructure.

Regardless of the reasons why this situation emerged (users’ lack of tech-savviness, service operators’ preference for solutions that are easiest and cheapest to implement and integrate, etc.), it causes several problems for the free and open Internet:

  • it hurts resilience

If such a large part of services and actors depends on a single service (like Facebook or GMail), this in and of itself introduces a single point of failure. It is not entirely outside the realm of the possible for those companies to fail – who will, then, provide the service? We have also seen both of them (like any other large tech company) suffer large-scale downtime events, taking down the services based on them as well.

  • it hurts independence

In the most basic sense, any user of a service based on these de facto infrastructures has to comply with and agree to the Terms of Service of the underlying service (i.e. Facebook, Google). If many or most Internet services have that requirement, users and service operators alike lose independence over what they accept.

  • it hurts openness

Operators of such de facto infrastructures are not obliged to provide their services in an open and standard manner – running mostly in the application layer, these services usually block any attempts at interoperation. Examples include Twitter changing its API TOS to shut off certain types of applications, Google announcing the planned shut-off of XMPP server-to-server communication, and Facebook using XMPP for its internal chat service with server-to-server communication shut off.

  • it hurts accountability and transparency

With such immense and binary (“either use it, or lose it”) control over users’ and other service providers’ data, de facto infrastructure operators have no incentive to share information on what is happening with the data they gather. They also have no incentive to be transparent and open about their future plans or the protocols used in their services. There is no accountability other than the binary decision to “use it or lose it”, which is always heavily influenced by the network effect and the huge numbers of users of these services.

  • it hurts predictability

With no transparency, no accountability, and a lack of standardization, such de facto infrastructure operators can act in ways that maximize their profits, which in turn can be highly unpredictable and not in line with users’ or the global Internet ecosystem’s best interests. Twitter’s changing of its API TOS is a good example here.

  • it hurts interoperability

Such de facto infrastructure operators are strongly incentivised to shut off any interoperability attempts. The larger the number of users of their service, the stronger the network effect, the more other services use their service, and the bigger the influence they can have on the rest of the Internet ecosystem. Social networks are a good example here – a Twitter user cannot communicate with a Facebook user, unless they also have an account on the other network.

This is obviously not the case with e-mail (I can run my own e-mail server), at least not yet. But the more people use a single provider (i.e. GMail), the stronger that provider becomes, and the easier it would be for its operator to shut off interoperability with other providers. This is exactly what Google is doing with XMPP.

  • it hurts innovation

Lack of predictability, openness and independence obviously also hurts innovation. What used to be a free and open area of innovation is more and more becoming a set of closed-off walled-gardens controlled by a small number of powerful actors.

It is also worth noting that centralized infrastructure at any level (including the level of de facto infrastructure discussed herein) creates additional problems at the human rights level: centralized infrastructure is easy to surveil and censor.


Hence, the first question to be asked is this: when does a private service become de facto public infrastructure?

At this point this question remains unanswered, and there is not a single Internet Governance body, or indeed any actor, able to answer it authoritatively. Nevertheless, we are all in dire need of an answer to this question, and I deem it a challenge for Internet Governance and an important topic that should be included in any Internet Governance Forum, now and in the future.


The second question that ever more urgently requires an answer if we are to defend the open and not balkanized Internet is: what should be done about private services that have become de facto public infrastructure?

This question is also as yet unanswered, but there are several possible proposals that can be made, including treating such situations as monopolies and breaking them up (thus handling them outside Internet Governance), requiring a public, interoperable API to be available to other implementers, etc. This is perhaps not exactly in the purview of Internet Governance; it is, however, crucial for the Internet as a whole, and I propose it be treated as a challenge to be at least considered at IGFs henceforth.

IM IN UR MINISTRY, CONSULTING UR INTERNETZ

This is an ancient post, published more than 4 years ago.
As such, it might no longer reflect the views of the author or the state of the world. It is provided as a historical record.

Usually when I rant – I mean, write – about public consultations on government ideas, there’s not much good I can say. Well, for once this is not the case.

The Ministry of Administration and Digitization is working on its position for the upcoming NetMundial Internet stakeholders meeting in São Paulo. To prepare, the Ministry has announced a call for comments on a document prepared by the European Commission about Internet governance, and has invited several organisations and companies to weigh in on the topic at a multistakeholder meeting in meatspace.

The topic is immensely important, and I hope to elaborate on it soon. In the meantime, however, I’d just like to say that for some time now, NGOs that are interested and competent in this area no longer have to knock on the Ministry’s doors. Instead, we are invited alongside ISPs, telcos, and large Internet companies, and can freely voice our opinions. Sometimes we even get listened to.

Even better, this time one of the NGOs invited to comment and attend the meeting was the Warsaw Hackerspace.

So we got @hackerspace.pl addresses into official ministerial communication, and two hackers into ministerial corridors. Expecting the media to go crazy about it in 3… 2… 1…

Encrypted VoIP that works

This is an ancient post, published more than 4 years ago.
As such, it might no longer reflect the views of the author or the state of the world. It is provided as a historical record.

Some of you might have already noticed (for example via my Diaspora profile) my infatuation with RetroShare – a very interesting communication and file-sharing tool that deserves a proper, full review, for which I unfortunately do not have the time.

There are some good things (full peer-to-peer decentralisation, full encryption) and some less good things (the use of SHA1 and the daunting GUI). But today RetroShare really shone, and in an area that is constantly a chore for free software…

VoIP

Now, I know there are many free software projects trying to do VoIP, but none seems to be “there” yet. SIP is hard to set up; Jitsi works on a single server, but for some reason I have never been able to get a working VoIP call via Jitsi with a contact from a different server. The project that came closest to being usable was QuteCom… “was”, as there hasn’t been a single new release for 2 years now.

Enter RetroShare.

Just download the software, install it and have the keys generated (that happens automagically), and download the VoIP plugin if you don’t already have it included (chances are you do; if not, on Linux the retroshare-voip-plugin package is your friend, and users of other OSes can look here).

Now add a friend, start a chat, and voilà – VoIP works. No account on any server needed, no trusting a third party, and it works behind NATs (tested!). And it is already encrypted, so no one can listen in on your communication.

The amazing part? During testing my laptop suspended to RAM. After it woke up a few minutes later, the call worked on as if nothing had happened.

So you want to censor the Internet...

This is an ancient post, published more than 4 years ago.
As such, it might no longer reflect the views of the author or the state of the world. It is provided as a historical record.

Internet censorship proposals are tabled with amazing regularity – and are usually completely detached from the reality of how the Internet and digital communication work. To the proponents, censorship seems an “easy and effective solution to a problem”, while in fact technical solutions to social problems simply do not work, and have a tendency to break things. Badly.

In preparation for one of the consultation meetings on this subject (even though the Polish political climate is rather hostile to censorship ideas at the moment, we still get consultation meetings about it from time to time), I have prepared a list of questions that have to be asked and answered regarding any central-level parental-filter Internet censorship proposal (PDF and ODT available; I’d like to thank Mr Adam Haertle for his suggestion on extending question no. 11).

If anybody feels like using this as a base for a checklist, please be my guest! Same goes for additions, suggestions, improvements.

Internet censorship questions

This document attempts to gather all the relevant questions that need to be asked and answered with regard to any proposal to introduce a central-level Internet porn censorship solution, and can be used as a map of the related issues that would also need to be decided on.

The questions herein are for the most part not deeply technical and do not require answers containing any concrete technical solutions. They also do not touch on economy-related issues.

1. What definition of pornography is to be used in the context of the proposed solution? In particular:
   i. Are graphic works and animations not created via image recording techniques to be included in that definition?
   ii. Are textual works describing sexual acts to be included also?
   iii. Are audio materials to be included?
   iv. Are works of art containing or presenting nudity to be included? If not, how are they going to be differentiated?
   v. Are biology and sexual education materials to be included? If not, how are they going to be differentiated?

2. Who is to decide on putting given content on the blocked content list? In particular:
   i. What oversight measures are proposed to combat instances of putting (willfully or by mistake) non-pornographic content on said list?
   ii. Will the blocked content list be public, or secret?
   iii. If the list is to be kept secret, what are the reasons for doing so?

3. How is the content to be blocked going to be identified? In particular:
   i. Is the content identification to be based on textual keywords within the content itself?
   ii. Is it to be based on keywords in the URL leading to the content?
   iii. Is it to be based on an explicit blacklist of URLs?
   iv. Is it to be based on an explicit blacklist of domains?
   v. Is it to be based on an explicit blacklist of IP addresses?
   vi. Is it to be based on image recognition?
   vii. Is it to be based on audio recognition?
   viii. Is it to be based on checksum comparison?
   ix. Is it to be based on a combination of methods? If so, which methods are to be employed?

4. What remedy procedure is considered in case of blocking of content that does not fulfill the definition of pornography? In particular:
   i. Where and to whom are such incidents to be reported?
   ii. What would the confirmation or denial procedure for such reports be?

5. What remedy procedure is considered in case of not blocking content that does fulfill the definition of pornography? In particular:
   i. Where and to whom are such incidents to be reported?
   ii. What would the confirmation or denial procedure for such reports be?

6. Are parents/legal guardians/subscribers to have control over the scope of blocking? In particular:
   i. Will they be able to indicate that given content should be excluded from blocking, even though it does fulfill the definition of pornography?
   ii. Will they be able to indicate that given content should be blocked, even though it does not fulfill the definition of pornography?

7. Is the blocking solution to be opt-in, opt-out, or is the choice to be presented upon first connection? In particular:
   i. Is the choice going to apply to all devices using a given connection?
   ii. Is the choice going to apply only to a particular device on any connection?
   iii. Is the choice going to apply only to a particular device on a particular connection?

8. Is the choice to enable blocking to apply also to institutional subscribers and companies? In particular:
   i. If not, does that mean no blocking, or mandatory blocking?
   ii. Is it to apply to libraries?
   iii. Is it to apply to schools?
   iv. Is it to apply to universities and other higher education institutions?
   v. Is it to apply to public hot-spots run by local communities?
   vi. Is it to apply to public hot-spots run by private service providers?
   vii. Is it to apply to hot-spots provided only for private service providers' customers?
   viii. Is it to apply to hot-spots run by private companies for their employees?

9. Will content explaining how to circumvent blocking also be blocked?

10. How is HTTPS or other SSL/TLS-encrypted traffic to be handled? (See the sketch after this list.) In particular:
   i. Is HTTPS/TLS/SSL traffic to be ignored altogether?
   ii. Is HTTPS/TLS/SSL traffic to be blocked?
   iii. Is HTTPS/TLS/SSL traffic to have its encryption layer broken and its content filtered?

11. How is private communication to be handled? In particular:
   i. Is e-mail and Internet messaging communication to be filtered?
   ii. Are peer-to-peer networks to be filtered?
   iii. Are MMS messages to be filtered?
   iv. Is private audio-video (including VoIP) communication to be filtered?
   v. Is private audio communication via regular and mobile phones to be filtered?

12. How is encrypted private communication to be handled? In particular:
   i. Is such communication to be blocked?
   ii. Is such communication to be ignored?
   iii. Is such communication to have its encryption layer broken and its content filtered?

13. Are solutions regarding HTTPS/TLS/SSL and private and encrypted private communication to be implemented in networks operated by institutional subscribers and companies, as per question 8 above?
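As an aside, here is a quick illustration of why question 10 is the crux of the matter. This is my own sketch with a hypothetical blacklist, not part of the questions document: with plain HTTP a filter on the network path sees full URLs, while with HTTPS it sees at most a hostname (for example from the TLS SNI field), so per-page blocking degrades into all-or-nothing domain blocking unless the encryption layer is broken:

    # Hypothetical blacklists, for illustration only.
    BLOCKED_URLS = {"http://example.com/forbidden/video"}
    BLOCKED_DOMAINS = {"example.com"}

    def filter_http(url: str) -> bool:
        """Plaintext HTTP: the full URL is visible, so per-page blocking works."""
        return url in BLOCKED_URLS

    def filter_https(sni_hostname: str) -> bool:
        """HTTPS: only the hostname is visible, so the filter must either block
        the whole domain (overblocking every page it serves) or pass everything;
        filtering the actual content requires breaking the encryption layer."""
        return sni_hostname in BLOCKED_DOMAINS

    print(filter_http("http://example.com/forbidden/video"))  # True: that page only
    print(filter_http("http://example.com/sex-ed-lesson"))    # False: left alone
    print(filter_https("example.com"))                        # True: lesson blocked too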


I’d love to see some answers to these questions from each and every person who proposes or supports central-level parental-filter Internet censorship.

This is why we can't have nice IRC

This is an ancient post, published more than 4 years ago.
As such, it might no longer reflect the views of the author or the state of the world. It is provided as a historical record.

…or how the FOSS Foundation and the Warsaw Hackerspace got (temporarily) banned from #debian at Freenode.

Asking about Ubuntu on Debian’s IRC channels is not considered nice – and having lurked there for years, I can understand why. These are two different systems, and trying to get Debian people to work on your Ubuntu problem is more often than not a waste of their resources and your time. There are better places to get support for Ubuntu.

Having said that, when somebody makes such a misstep, the right way to proceed is to inform them. Especially when the question is not about Ubuntu itself, but about a tool used by both distros.

I can understand that many people ask such questions in #debian, and that some need a bit more of an incentive to move to the right channel. But we wouldn’t want to ban a whole hackerspace because of one such user, now would we?

“You should know better”

Well, apparently some would. And not only that – every single other person who asked why the whole Warsaw Hackerspace’s network was banned from the channel also got immediately banned, with a dry explanation in the kick message:

you should know better

Because I asked about the situation, while connecting from the Free and Open Source Software Foundation’s infrastructure, the whole FOSSF got affected:

[00:08:35](02.02.14) <rysiek|pl> abrotman: hey, that's a damn good idea to just ban a whole hackerspace because somebody asked about apt-get in #debian
[00:08:55](02.02.14) *** Mode #debian +o abrotman by ChanServ
[00:08:56](02.02.14) *** Mode #debian +b *!*@master.fwioo.pl by abrotman
[00:08:57](02.02.14) <-* abrotman has kicked rysiek|pl from #debian (you should know better)

Now that’s a way to make new friends, abrotman!

“The ban will expire”

Friends or no friends, the whole FOSSF network got banned from #debian. We’re doing a lot on that distro – all our servers are running it, providing stable and safe services for the projects we run. Bottom line: if we’re banned from #debian, spreading Free Software in Poland gets that much harder.

So I started looking around for ways to get in contact with people who might be able to help. I posted on Diaspora, asked in #freenode, and got sent to #debian-ops. There I (and several other people from the Warsaw Hackerspace) tried to reason with the op in question:

[00:33:20](02.02.14) <abrotman> and having you both come in and whine doesn't help
[00:33:23](02.02.14) <q3k> you jus tbanned a community od ~60 people
[00:33:33](02.02.14) <q3k> which is not really excellent.
[00:34:15](02.02.14) <rysiek|pl> abrotman: "come and whine"? I'm sorry, but you just banned a host with many users owned by the organisation I represent
[00:34:34](02.02.14) <rysiek|pl> abrotman: because I asked about your attitude towards a user in #debian
[00:34:35](02.02.14) <abrotman> The ban will expire, folks can ask for a +e

“The ban will expire” was the only real answer we got.

“You had to escalate why?”

It turned out there are many things that “won’t help”:

[01:13:46](02.02.14) <abrotman> Posting on diaspora probably won't help ..
[01:15:17](02.02.14) <rysiek|pl> abrotman: and this is my fault... how?
[01:15:31](02.02.14) <abrotman> You had to escalate why?

I guess the question about escalation is the real question here. Did it have to escalate to banning the whole of nat.hackerspace.pl because somebody asked a question containing the word “Ubuntu”? Did it have to escalate to banning the whole of master.fwioo.pl because I asked why the Warsaw Hackerspace got banned from #debian?

Being excellent to each other

I understand that being an op is a tough job, I really do, especially on very popular channels like #debian. And I understand that people get tired, annoyed and frustrated doing it. I appreciate their work, just as I would like people to appreciate the work I do.

But that is no justification for indiscriminately banning whole networks. As Quinn Norton has said at 30C3, “it is time for us to up our game”. I believe we can do better.

The bans have now been lifted, thanks to some other good soul in the #debian channel, and I hope that once all parties involved get some well-deserved sleep, we’ll be able to draw conclusions and move past this.

Decentralize where your mouth is

This is an ancient post, published more than 4 years ago.
As such, it might no longer reflect the views of the author or the state of the world. It is provided as a historical record.

I recently came across the following comment:

Hi sorry for barging in, but with all of the projects now based around decentralisation, I thought a common place to exchange ideas would be good.

“Oh”, thought I, “this is gonna be good!” A grand idea: let’s create a place to talk about decentralisation. After all, decentralization is so important and all. Where should we create such a place? Diaspora? Friendica? Any other decentralized, federated service?

I have created a subreddit as a place for as many projects to collaborate and share experiences, research and general comments.

Facepalm.

What better place to talk about decentralization than a centralized service, right?..


It’s not technology we need to change, it’s our mentality.