
Songs on the Security of Networks
a blog by Michał "rysiek" Woźniak

Internet is not a problem


This is the text of a speech given at the 2nd FLOSS Congress in Katowice, Poland, on June 7th 2013, and at the CopyCamp conference in Warsaw, Poland, on October 1st, 2013.

Interesting times we live in, aren’t they? Times of a digital revolution that changes the way we think. A breakthrough requiring a change of approach in almost every area of human thought.

It’s hard to overestimate the importance of our ancestors starting to use tools. Today we rightly consider this a turning point in human history. Until recently, however, all the tools we have ever used and created extended our physical capabilities: we used them to throw farther, hit harder, cut stronger materials.

The inventions of the computer and the Internet are the first tools in the history of humankind that extend – so directly – our mental abilities! We can count faster and more precisely, have access to an immeasurable wealth of information and knowledge, and communicate at speeds that just two generations ago were pure fantasy. That’s a change of an era, happening before our very eyes!

In the physical world, moving is the most basic operation. When I give something to somebody, I lose it. If I want to get it back, somebody has to let go of it. In the digital world, the most basic operation is copying. Even when I “move” a file from a hard drive to a pendrive, it in fact gets copied and then deleted.

Copying something in the physical world is arduous and costly, if even possible at all. In the digital world, it’s the most basic operation one can perform. Sharing suddenly is not inextricably connected with loss. This single fact makes a world of difference.

Such ease of sharing gave us the Free Software movement, Wikipedia, and libre culture. It gave us OpenStreetMap and innumerable other wonderful projects that have this core value at their heart: sharing. Sharing of knowledge, of data, of any results of our work.

This means, however, that old business models – built around the physical world’s difficulty of copying – start to be inoperable. Just as the business model of horse-and-buggy makers stopped being operable once cars were invented.

The tragedy of our times is that some people find this mundane reason enough to treat this amazing chance, this one-of-a-kind revolution in our ability to access knowledge, culture, information… as a problem. It is truly tragic that instead of looking for new business models, time and money are spent on finding technical and legislative means of forcing a rule of the physical world onto digital reality: trying to make copying hard again.

I believe that to be the wrong approach. Previous similarly important inventions – writing and the printing press – are universally considered positive developments; the Internet and the digital revolution are just as important, valuable, and crucial.

And there are business opportunities available! Instead of saving buggy manufacturers, why not ponder selling cars? Instead of creating artificial barriers to the free flow of ideas and information, why not find ways of building a new, digital economy for the new, digital era? One built on new assumptions and new rules, rather than badly emulating the rules of the Old World. A new economy that treats the user as a partner, not as a thief.

The Internet isn’t a problem. It’s an opportunity. Let’s seize it!

Libel Culture


While I am not a big fan of libel lawsuits (as they are often used to stifle freedom of speech or science), this one time I am very glad to see one. Alek Tarkowski and Igor Ostrowski are suing Maciej Strzembosz for libel and defamation. It will probably have consequences for the whole copyright reform debate in Poland, and potentially the whole EU.

Why is that important?

Alek and Igor are well-known copyright reform and libre culture activists in Poland. Alek is the Creative Commons Poland coordinator and an active member of any debate related to openness, especially where culture or science is concerned. Igor held the office of under-secretary of state at the Ministry of Administration and Digitization (Polish acronym: MAC) during the anti-ACTA campaign and was instrumental in advancing the anti-ACTA agenda; he also presided over the meeting where the plan of signing ACTA was announced, and was as shocked and appalled about the plan as we were.

Both had also been part of the Group of Strategic Advisors to the Prime Minister before MAC came into being, and actively advocated an open data, open access, open government and open education agenda to the Prime Minister.

Their work indubitably was, and still is, very important in bringing the Polish government and public debate to where we are now on these issues.

Maciej Strzembosz, on the other hand, is a film-maker, chair of the (MPAA-like) Polish National Chamber of Audio-Video Producers, and a long-time vocal opponent of open licensing, Creative Commons, and copyright reform. This is the kind of person that calls file-sharers “thieves” and libre culture activists “Google’s pawns”.

What’s it all about?

Apparently, Mr Strzembosz called the wrong activists “thieves” and “Google’s pawns”. Alek and Igor are suing him for libel for his public statements containing these very epithets in relation to them.

Do you have your popcorn ready?

Why I find -ND unnecessary and harmful


UPDATE: highlighted the harmfulness of incompatibility of “no derivatives” licenses with libre licenses (including other CC licenses); heartfelt thanks to Carlos Solís for the Spanish translation. ¡Gracias!

There are two basic arguments for licensing some works under a “no derivatives” license (e.g. any -ND Creative Commons license, or the GNU Verbatim Copying License):

  • some authors do not wish for their works to be modified, twisted, used in ways they do not approve of;
  • some works (for example, expressing somebody’s opinion) are fundamentally different from other kinds of works and should remain invariant.

I believe both are specious. And I feel “no derivatives” licenses are both ineffective and counter-productive. Here’s why.

“I don’t want my work twisted!”

So you’re an author and you do not wish your work to be twisted or modified to say something you didn’t want to say. There are two ways such modification can happen:

  • somebody takes your work, twists it and publishes it under your name, suggesting you wrote this;
  • somebody modifies your work and publishes it under their own name, as a derivative work.

The first possibility would be illegal regardless of the license! Nobody has the right to claim your authorship of something you did not create; nobody has the right to modify your work and claim it is still your work. -ND licenses are unnecessary for that purpose – it’s already covered by copyright law.

As far as derivative works that build upon the original but change the meaning are concerned (without misrepresentation of authorship), I do not feel we need licensing restrictions for that. It feels too close to censorship for my liking – “thou shalt not use my own words against me”; “I don’t like what you’re trying to say, so I will use copyright law to stifle your speech”.

Besides, creating parodies is fair use, and no amount of “no derivatives” licensing clauses will change that. The same goes for quotes. Your words will be used in works that say something you do not wish to say, whether you like it or not!

In that sense, “no derivatives” licenses are ineffective.

“Some works should be invariant!”

This argument hinges on the assumption that some works (memoirs, documentation, opinion pieces) are fundamentally different from others and hence should be preserved as they are.

First of all, all of what I wrote above applies here. Such works cannot be “modified” anyway: “modification” is in fact the creation of a derivative work, and nobody can (legally) misrepresent authorship or pass the derived work off as the original; such works can also be quoted and parodied, regardless of the license. “No derivatives” is ineffective.

“No derivatives” licensing stops people from doing things most of us would say are desirable: improving upon a work, creating better arguments or updated versions, translating into another language to disseminate knowledge and argumentation. These things are genuinely good, but people who would like to do them will look at the license – and get the message that they cannot proceed…

More importantly, however, this argument assumes that there is only one context in which a given work can be used. E.g. “an essay on free software” as an article to read and get argumentation from. Or a “memoir” as a historical document describing one’s views and fortune.

Thing is, any work can be used in any context, and often is.

Think for a moment about a teacher in an IT class using an essay on free software as educational material, modifying it just a bit so that the class can better understand it or relate to it, or using it as a basis for an in-class discussion. “No derivatives” would not allow for that.

Think about how artists used different kinds of “materials” for their works of art – for example Duchamp’s “Fountain”. A “memoir” or an “opinion piece” could easily be used in an “art” context, for example as a basis of vocabulary for some free-software scripted (as in: done by scripts in interpreted programming languages) poem writing contest. An example of something similar is HaikuLeaks.

I bet we could find similar haikus in GNU documentation, FSF policy papers, Linux documentation. “No derivatives” would not allow anybody to use these in such a context – and I would say that’s a genuine loss.
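
As an illustration of how simple such a script could be – a toy sketch, with a naive vowel-group syllable counter standing in for the better heuristics real tools like HaikuLeaks use – this scans a sentence for the 5-7-5 syllable shape of a haiku:

    import re

    def syllables(word: str) -> int:
        # Naive heuristic: count groups of consecutive vowels.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def as_haiku(sentence: str) -> list[str] | None:
        # Try to split the sentence into lines of 5, 7 and 5 syllables.
        words, lines, i = sentence.split(), [], 0
        for target in (5, 7, 5):
            line, count = [], 0
            while i < len(words) and count < target:
                count += syllables(words[i])
                line.append(words[i])
                i += 1
            if count != target:
                return None
            lines.append(" ".join(line))
        return lines if i == len(words) else None

    # A made-up example sentence that happens to fit the shape:
    print(as_haiku("free code for all we can read and change it freedom for us all"))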

And in that sense “no derivatives” licenses are counter-productive.

Muddying the waters

“No derivatives” licensing is also truly harmful.

They make it harder to explain what libre licensing is. Many people believe that any CC or GNU license is “free as in freedom”, while in fact CC-*-ND and GNU Verbatim licenses cannot be considered as such. This distinction is both crucial and hard to convey.

They also cause segmentation within the group of CC/GNU-licensed works: some such works (often co-existing in a single OS or a single repository) are licensed in a way that makes them incompatible with others. Some (namely, those licensed “no derivatives”) cannot be modified or used in new works, while others can.

This muddies the waters, makes libre licensing this much harder to explain and this much harder to take advantage of.

TL;DR

  • “No derivatives” licensing does not protect against things we want it to protect against (either because these are explicitly allowed for by the copyright law, or because the copyright law already disallows them);
  • and at the same time stops people from doing things we might consider interesting or beneficial;
  • while making it harder to promote libre licensing and libre-licensed works.

One year anniversary of Anti-ACTA


It’s been a year today since the whole of Europe joined Poland in the anti-ACTA protests. Since then we have seen the Polish PM admit his mistake, politicians call ACTA “passé”, some serious lobbying in the European Parliament, and finally the death of ACTA in a beautiful festival of democracy.

The debate over copyright reform is far from over and many years will still pass until it is brought up to speed with current technology and social norms, with how culture is being created, shared, remixed by thousands upon thousands for the joy of millions. The War on Fun will continue; copyright abuse will get worse before it gets better; finally, the hypocrisy of copyright maximalists will be enough to bring them down…

…But in the meantime, rejoice! A year ago the whole of Europe decided to take a stand and assert our rights, our freedoms and our power to decide. And with one voice, citizens of all European states shouted: no to ACTA.


This is a tribute to them (by prof. Edward Lee).

Fighting Black PR around OER


I have already written about the black PR campaign waged by the traditional publishers’ lobby against the Polish open textbooks government programme, and I have given a talk on 29C3 on this topic. Time for a write-up of the arguments used by the lobby – and how to counter them.

As I have written before, it is crucial to understand that what the anti-OER lobby actually cannot swallow is the “open” part – the libre licenses the textbooks are supposed to be published under, as they uproot traditional publishers’ business models.

However, because openness in education is such a good idea, the lobby knows full well they cannot attack it directly. Instead, they argue against the whole programme on other grounds.

I shall present the arguments I have encountered during the last year, and ideas for how to counter them.

Cost

The most often used argument against the programme: creating the textbooks will cost a lot of public money, which could arguably be better spent.

The fact of the matter is, however, that traditional textbooks are not cheap at all. In Poland a set of textbooks for a single child for a given year costs around €150. Taking into account that the average pay in Poland is around €1000 and the net minimum wage is €280, this is not a small sum of money. The whole Polish school textbooks market is currently worth around €250mln.

What’s more, the Polish government subsidizes poorer families to enable them to buy textbooks, to the tune of €32mln annually. The whole e-textbooks pilot programme (creation of 18 textbooks for grades 4-6 of public schools) is to cost €11mln.

Once created, open textbooks can be reused, updated, remixed and improved by anybody. This means that the cost (from the general public’s point of view) of creating them will to a large extent be a one-time investment – which cannot be said of traditional textbooks, which are restrictively copyrighted by the publishers.
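
To put the quoted figures side by side – a back-of-the-envelope sketch using only the numbers from this post (the five-year horizon is my own assumption, for illustration):

    subsidy_per_year_mln = 32   # recurring textbook subsidy for poorer families, EUR mln
    pilot_cost_mln = 11         # one-time cost of creating 18 open textbooks, EUR mln

    years = 5  # illustrative horizon
    print(f"Recurring subsidies over {years} years: EUR {subsidy_per_year_mln * years} mln")
    print(f"One-time open textbooks investment:    EUR {pilot_cost_mln} mln")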

The argument is hence moot.

Equipment

It is very unfortunate that the open textbooks programme in Poland is called “e-textbooks”, as that creates ambiguity as to the role electronic equipment (laptops, tablets, e-book readers) will play in it – ambiguity that is being exploited by the textbook publishers’ lobby, which scares the general public with the costs of equipment purchase (supposedly covered by the parents) and upkeep, and with related problems (charging, theft, malfunctions, etc.).

The crux here is that the programme is not about equipment; equipment has a completely secondary role in it. The main point of the programme is the openness. It is true that electronic versions of the materials will be prepared, but all materials will have print-ready versions, and all materials will be available in open formats, so no particular make of equipment will be required.

Open textbooks will be available to students (and other interested parties) via the Internet, and it will be possible to print them out in schools and libraries. This also has the added benefit that students will not have to carry heavy books with them every day – an argument that might seem superficial, but is raised time and again by parents, teachers and medical practitioners.

The most absurd take on the equipment argument is that “tablets do not create a second-hand market, as traditional textbooks do”. This actually appeared in one of the articles by the publishers’ lobby, and it is wrong on both counts. Tablets do enjoy a thriving second-hand market, while traditional textbooks in Poland – in no small part due to deliberate actions by the publishers, like bundling exercise booklets with textbooks – have a hard time supporting one.

Quality

Traditional textbook publishers claim that only their expertise in textbook creation can guarantee proper quality, and that no “crowdsourced” textbook effort can match it.

First of all, the open textbook programme in Poland is not simply crowdsourcing the creation of textbooks. The programme mandates 4 higher education institutions as subject matter partners and one highly-regarded technological institution as a technological partner. The textbooks are to be prepared by experts in their subjects in cooperation with education theorists and practitioners.

Secondly, openness of the process and the resources can only help their quality: the more people are watching and able to engage with the process, the sooner errors get fixed. This is the model the whole free/libre/open source software community works in, and the adoption of FLOSS (especially in scientific and technical communities) seems to confirm its quality. This is also the model Wikipedia works in, with good results.

Finally, open educational resources projects around the world prove that the crowdsourcing model works and delivers high quality educational materials.

Had the publishers been genuinely concerned with textbook quality, they would have released their textbooks under open licenses, allowing for fast improvement by a large community. They have not, hence it is safe to assume that – unsurprisingly – quality is not their main concern.

Unfair business practices

Textbook publishers claim that the government programme constitutes an unfair business practice, and they even sent a letter threatening legal action against any higher education institution that would consider taking part in the programme.

Legal analysis of said letter clearly shows that the claim is not at all supported. It is preposterous to claim that a government programme can be an act of unfair business practice; besides, the publishers were invited to take part in the programme – they declined.

Regardless of government involvement in this project, however, if offering free and open materials indeed constituted an unfair business practice, we would have to shut down Wikipedia and make FLOSS illegal – they, too, offer free and open materials and solutions; they, too, endanger certain business models.

Finally, the real aim of this letter was to stifle and halt the open textbooks programme – had all the higher education institutions taken it at face value and declined to take part for fear of litigation, the programme could not have continued. This was a scare tactic, itself treading the line of unfair business practice.

Market destruction

This programme, it is claimed, will destroy a market worth €250mln, and cause thousands of people to lose their jobs.

The fact that a given product or service puts a given business model in jeopardy is not an argument against that product or service. It is a clear sign that it is time to seek a new business model. And open textbooks allow for new business models – textbook publishers could, if they only wanted to, build new business models on top of them. For example, they could offer high-quality printing services, or adapt open textbooks to the particular needs of particular profiled schools.

It is additionally claimed that the destruction of this market will harm the whole economy. This is a broken window fallacy – the fact that parents will now spend less money on school textbooks doesn’t mean that this money will not get spent at all.

IT industry will reap the profits

Somebody has to create the infrastructure in schools, somebody has to get equipment support contracts… j’accuse! The whole programme is just a money grab by the IT industry, say the textbook publishers.

In light of the previous “market destruction” argument, it is odd that the lobby uses this argument at all. After all, had it been true, their fight against the IT industry’s market would itself constitute an attempt at “market destruction”.

This argument is all the more peculiar when we remind ourselves that this programme is not about equipment, and that the money goes not to IT companies but to open textbook authors. Once we realise that not a single IT industry lobbyist was present at any of the public consultation meetings regarding the programme, the argument falls squarely into the realm of absurdity.

Centralized education system

This argument is calculated to play on the emotions of people who still remember the socialist state in Poland, by claiming that the programme is a way of introducing a centralised education system.

This is nothing new, however – for the last 25 years all textbooks have had to be vetted by the Ministry of Education. Open textbooks can only loosen the central government’s grip on educational resources, as anybody will now be able to build upon a vetted textbook.

Death of books (and death of culture)

And the final argument: children are already spending too much time in front of computer screens, and it is ever harder to get them to read a book. Making textbooks that are to be read on a computer screen or electronic device will only make things worse and will spell the end of books as we know them.

Believe it or not, this was also claimed: our culture is a culture of the book, and if the books die, our whole culture will die with them.

And of course yet again we have to remind ourselves that the “electronic” part of the programme is not the relevant part, and that all materials will be available in print-ready versions.

Then again, there is the question of means and ends. Access to information, to education, to knowledge is the end; the paper book is just a means. Whether or not it dies remains to be seen, but we already have many other – some arguably better – means of accessing the written word. It seems safe to assume that our culture is not threatened.

HOWTO: effectively argue against Internet censorship ideas


During the last few years I have been involved in arguing against several attempts at introducing Internet censorship in Poland. Some of these were very local and went almost unnoticed outside Poland (like Rejestr Stron i Usług Niedozwolonych – the Register of Unlawful Websites and Services – in 2010); some were part of a larger discussion (like the implementation debate around EU directives that allowed, but did not mandate, introducing child porn filters in EU member states); one made a huge splash around the world (I write about the anti-ACTA campaign efforts here).

At this point I have gathered quite some experience in this. With censorship ideas gaining support even in apparently democratic countries, I have decided it is time to get it all in one place for others to enjoy.

The Ground Rules

There are some very important yet simple things one has to keep in mind when discussing censorship ideas. They can be best summarized by an extended version of Hanlon’s Razor:

Never attribute to malice that which is adequately explained by incompetence, laziness or stupidity.

More often than not the fact that somebody proposes or supports Internet censorship is not a result of malicious intent – however tempting such an assumption might be. Usually such support stems from the fact that people (including policymakers):

  • do not understand how the Internet works,
  • do not see the connection between their idea and censorship,
  • do not grasp the technical problems and the cost of implementing such ideas,
  • do not see nor understand the danger of implementing them.

There are two areas one has to win in order to have a chance of striking down such ideas:

  • logical argumentation based on technical issues;
  • purely emotional public debate.

The former is the easier one, and can provide a good basis for the latter – which is the endgame, the crucial part of winning such arguments.

The Adversaries

There are usually five main groups of people one has to engage in such a debate:

  • politicians;
  • civil servants;
  • law enforcement, uniformed and secret services;
  • genuinely involved (if sometimes misguided) activists;
  • business lobbyists.

There is also a sixth, crucial group that has to be swayed to win: the general public. To communicate with that group, you also need the media.


Politicians are very often the first to call for Internet censorship, and as a rule they are in it for short-term political gain, not long-term social change. The social change bit is just an excuse; the real reason they float such ideas is more often than not politics – gaining popular support or getting their names out in the mainstream media.

Sometimes it’s enough to convince them personally; sometimes what is needed is the one argument a politician always understands – an appeal to the authority of the general public, which needs to be vocal against censorship. It is usually not wise to assume they have malicious intent (i.e. stifling opposition); this only complicates the discussion.

Civil servants usually do not have strong feelings one way or the other, or at least are not allowed to show them; they do what their superiors (the politicians) tell them to do. There is no gain in alienating them – if you get militant or hostile towards them, they might start actively supporting the other side. They are very often not technical people: they might not understand the intricacies of the technology involved, and they might not grasp the civil rights implications.

Law enforcement, uniformed and secret services treat such ideas as a power grab, or at least as a chance to get a new tool for doing their jobs. They usually understand the technical issues, and usually do not care about the civil rights issues involved. They see themselves as the defenders of law and order, and implicitly assume that the end justifies the means – at least in the context of Internet censorship and surveillance. They will not be swayed by any arguments, but they do not usually use emotional rhetoric either.

Pro-censorship activists feel very strongly about some particular social issue (child porn; gambling; porn in general; etc.) and believe very deeply that Internet censorship is a good solution. They have a very concrete agenda and it is very hard to sway them, but it is possible and worth a try. One should not assume malicious intent on their part, they genuinely feel that Internet censorship would bring capital-G Good to the world.

They usually do not understand the technical issues nor costs involved in implementing such ideas, although they might understand the civil rights issues. If they do not grasp them, explaining these to them might be a very good tactic. If they do, they might make a conscious choice of prioritising values (i.e. “one child that does not see porn on the Internet justifies a small infringement of freedom of speech”).

When made aware of the costs of implementation, they will claim that “no price is too big to pay”.

Business lobbyists tend to be present on both sides. The lobbyists for the ISPs will fight Internet censorship, as it means higher costs of doing business for them – however, as soon as there are cash incentives on the table (i.e. public money for implementing the solutions), many will withdraw their opposition.

There are usually not many pro-censorship lobbyists, at least not at public meetings. They are not possible to sway, and will support their position with a lot of “facts”, “fact sheets”, “reports”, etc., that upon closer consideration turn out to be manipulative, to say the least. Taking a close look at their arguments and being prepared to strike them down one by one tends to be an effective tactic, if a resource-intensive one. It might be possible, however, to dispel the first few “facts” they supply and use that as a reason to dismiss the rest of their position.

The general public is easily swayed by emotional arguments – like “think of the children”. However, due to the nature of these arguments and the fact that the general public does not, en masse, understand the technical issues involved, it is not easy to make a case against Internet censorship, especially if the public is not at least somewhat opposed to censorship and surveillance in general.

It is, nevertheless, crucial to have the public on your side, and for that one needs strong emotional arguments, and very strong factual, technical arguments to weaken the emotional pro-censorship arguments.

In order to communicate with the general public you need the media. It is crucial to have high-quality press releases, with all the necessary information provided within (so that it is as easy as possible for the media to run with the material).

It is also very important to remember that the media will distort, cut and twist information and quotes, and take them out of context. This, too, should not usually be attributed to malice, but to the way modern media work and to the lack of technical expertise among journalists. Hence, the language has to be thought through and as clear (and as easy and accessible for the casual reader) as possible. Or more.

Media communiqués should be short, succinct and to the point. This helps them be understood by the general public, makes it easier for the media to run the material, and makes it harder to distort.

When communicating with the media it is also helpful to try to maintain political neutrality, by focusing on the issues rather than on party membership or programmes; and to provide actionable items from time to time – for example open letters with specific and unambiguous questions to the pro-censorship actors regarding legality, costs, technical issues, civil rights doubts, etc., which (if run by the media) the actors will be compelled to answer.


Each of these groups, and often each of the actors involved, needs to be considered separately.

Each may be swayed by different arguments and in different contexts – public meetings, with press and media present, will put pro-censorship politicians in hot water if there is visible public opposition; more private meetings are a better choice when the public is generally pro-censorship but there are politicians or civil servants who oppose it, or who consider opposing it: sometimes all they need is a good argument they can use publicly to support their position.

The Excuses

The reasons – or excuses – for a pro-censorship stance are usually twofold:

  • social;
  • political.

Sometimes the social reasons given (i.e. child pornography or pornography in general, gambling, religion, public order, etc.) can be taken at face value as the real, factual reasons behind an Internet censorship idea. This was the case several times in Poland, and is probably the case in most European censorship debates.

Sometimes, however, they are just an excuse to cover a more insidious, real political agenda (like censoring dissenting speech and opposition, as in China, Iran or North Korea).

The crucial issue here is that it is not easy to tell whether or not there is a political agenda underneath the social argumentation. And while it is counter-productive to assume malice and such a political agenda in every case, it is also prudent to be aware of the real possibility that it is there, especially given the number of different actors involved in such a debate.

Social excuses

There is a number of (often important and pressing) social issues that are brought up as reasons for Internet censorship, including:

  • child pornography (this is by far the most potent argument used by censorship supporters, and it is bound to show up in a discussion sooner or later, even if it starts with a different topic – it is wise to be prepared for its appearance beforehand);
  • pornography in general;
  • gambling;
  • addictions (alcohol, drugs available on the internet, allegedly also to minors);
  • public order (this one is being used in China, among others);
  • religion-related;
  • libel laws;
  • intellectual monopolies;
  • local laws (like Nazi-related speech laws in Germany).

The crucial thing to remember when discussing them is that no technical solution ever directly solved a social problem, and there is no reason to believe that the technical solution of Internet censorship would solve any of the social issues above.

Censorship opponents also have to be prepared for the inevitable addition of new social excuses in the course of the debate. For example, in Poland the Register of Unlawful Websites and Services censorship idea was floated because of anti-gambling laws and foreign gambling sites. During the course of the discussion other excuses were used to justify it, namely child pornography and drug-related sites.

That’s why it is important not only to debate the merits of the excuse, but to show that Internet censorship and surveillance is never justified, regardless of the issue it is supposedly meant to tackle.

It is worth noting, however, that such adding of excuses can backfire on the censorship proponents. If anti-censorship activists make the pro-censorship actors state clearly at the beginning of the discussion (i.e. by using the “slippery slope” argument) that the censorship shall be used for the stated purpose only, the later addition of new excuses can be countered simply by pointing that out – and noting that the proponents are already slipping down this metaphorical slope even before the measures are introduced.

Political reasons

These are fairly straightforward. Being able to surveil and censor all Internet communications (and with each passing day the importance of the Internet as a communication medium rises) is a powerful tool in the hands of politicians. It enables them to make dissent and opposition disappear, to make it hard or impossible for them to communicate, and to easily establish the identities of oppositionists.

As Internet censorship requires deep packet inspection, once such a system is deployed there are no technical issues stopping those in control from modifying communications in transit. That opens the door to an even broader set of possibilities for a willing politician, including false flag operations, sowing discord among the ranks of the opposition, and similar actions.

The Counter-arguments

There are three main groups of arguments that can be used to fight Internet censorship and surveillance ideas:

  • technical and technology-based;
  • economy- and cost-related;
  • philosophical (including those based in human rights, freedom of speech, etc.).

At the end of this section some useful analogies are also provided.

The good news is that, all things considered, there are very strong anti-censorship arguments to be made in all three areas. The bad news, however, is that all three kinds need to be translated into, or used in, emotional arguments to sway the general public at some point.

Again, as a rule neither the general public nor the politicians and civil servants that further the pro-censorship agenda have a decent understanding of the issues involved. Putting the issues in easily-grasped and emotionally loaded examples or metaphors is an extremely potent tactic.

Several counter-arguments (for instance, jeopardising the e-economy, or pushing blocked content into darknets, as discussed below) are related to the Law of Unintended Consequences: we cannot ever predict all possible consequences of an action, especially intrusive actions upon complex systems. Introducing censorship into the Internet is just such a case. Calling upon this fact and this law can itself be a good counter-argument.

It is also well worth making sure (if at all possible in a given local political situation) that the anti-censorship action cannot be manoeuvred into any particular political corner (i.e. labelled a “leftist issue”). Censorship and freedom of speech are issues of interest to people from every side of the political spectrum, and being able to reach out even to groups that would not be willing to agree with you on other issues is crucial.

Technical arguments

Due to the technical make-up of the Internet there are several strong technical arguments to be made against Internet censorship. The main categories these fall into are:

  • it requires far-reaching infrastructural and topological changes to the network;
  • it requires high-end filtering equipment that will likely not be able to handle the load anyway;
  • it does not work: it is easy to circumvent, it does not block everything it is supposed to, and it blocks things that are not supposed to be blocked.

There are several ways content might be blocked/filtered on the Internet, and several levels that censorship can operate at. Each has its strong and weak points, none can guarantee 100% effectiveness, all have problems with over-blocking and under-blocking, all are costly and all require Internet surveillance.

Effectiveness of Internet censorship measures is never complete, as there are multiple ways of circumventing them (depending on the given measure).

Over-blocking occurs when legal content that should not be blocked is accidentally blocked by a given censorship measure. Depending on the particular scheme chosen, this problem may be more or less pronounced, but it is always present and inevitable. It does not refer to situations where the block list intentionally contains content that should not officially be blocked.

Similarly, under-blocking occurs when content that officially should be blocked accidentally is not. This is not content accessed via circumvention, but simply content, accessible without any special techniques, that “slipped through the fingers” of the particular censorship scheme.

Both the resources required (equipment, processing power, bandwidth) and the cost of handling the list of blocked content also vary between censorship schemes and depend on method used.

Whether or not a method employs deep packet inspection (DPI) is indicative of both how intrusive and how resource-intensive it is.

Below is a short summary of possible blocking methods, with information on the above factors. Possible circumvention methods are summarized at the end of this sub-section.


DNS-based blocking:
over-blocking probability: high
under-blocking probability: medium
required resources: small
list handling cost: medium
circumvention: very easy
employs DPI: no

DNS-based blocking requires ISPs (who usually run their own DNS servers, which are the default for their clients) to de-list certain domains (so that they are not resolvable using these DNS servers). This means that the costs of implementing it are small.

However, as users can easily switch to other DNS servers simply by reconfiguring their network connection (not a difficult task), this method is extremely easy to circumvent.
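
To make this concrete – a minimal sketch, assuming the third-party dnspython package is installed and using a public resolver address purely as an example – this is essentially all it takes to bypass a DNS-based block:

    import dns.resolver  # third-party package: dnspython

    def resolve_via(domain: str, nameserver: str) -> list[str]:
        # Ignore the system (ISP-provided) resolver configuration entirely.
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [nameserver]
        return [record.to_text() for record in resolver.resolve(domain, "A")]

    # Even if the ISP's own resolver refuses to resolve a domain,
    # a resolver outside the censored network will still answer.
    print(resolve_via("example.com", "9.9.9.9"))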

This method has a huge potential for over-blocking: whole domains get blocked because of certain content, so it can bring down an entire website or forum over a single entry published on it.

Because websites that purposefully publish content that is supposed to be blocked change their domain names often (sometimes within hours!), list handling costs and the risk of under-blocking are medium.

IP address-based blocking:
over-blocking probability: high
under-blocking probability: medium
required resources: small
list handling cost: medium
circumvention: medium
employs DPI: no

IP-based blocking requires the ISPs either to block certain IP addresses internally or to route all outgoing connections via a central, government-mandated censoring entity. It is only superficially harder to circumvent, while retaining most if not all of the problems of DNS-based blocking.

Both IP address-based blocking and DNS-based blocking do not employ deep packet inspection.

Websites that purposefully publish content that is supposed to be blocked can circumvent IP-based blocks by changing their IP address (just a bit more hassle than changing the domain name); users wanting to access blocked websites can use several methods, admittedly a bit more complex than with DNS-based blocking.

It is possible to improve the effectiveness of an IP-based block (and make it harder to circumvent) by blocking whole IP ranges or blocks; this, however, dramatically raises the probability of over-blocking.

URL-based blocking:
over-blocking probability: low
under-blocking probability: high
required resources: medium
list handling cost: high
circumvention: medium
employs DPI: yes

This method employs deep packet inspection.

Because this method blocks only certain, URL-identified content, not whole websites or servers (as the DNS-based and IP-based methods do), it has a much lower potential for accidental over-blocking. It also has a higher potential for under-blocking: the same content can be available on the same server under many different URLs, and changing just a small part of the name defeats the filter.
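
A toy sketch of why exact URL matching under-blocks (the addresses are made up for illustration): the filter compares full URLs, so any trivial change to the path slips past it.

    BLOCKLIST = {"http://example.com/files/page.html"}

    def is_blocked(url: str) -> bool:
        # Exact matching, as a URL-based filter would do once DPI has
        # extracted the requested URL from the data-stream.
        return url in BLOCKLIST

    print(is_blocked("http://example.com/files/page.html"))   # True
    print(is_blocked("http://example.com/files/page2.html"))  # False: under-blocking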

Users wanting to access blocked content also have a wealth of methods at their disposal (including proxies, VPNs, TOR and darknets, all discussed below).

Dynamic blocking (keywords, image recognition, etc.):
over-blocking probability: high
under-blocking probability: high
required resources: very high
list handling cost: low
circumvention: medium
employs DPI: yes

This method uses deep packet inspection to read the contents of data being transmitted, and compares it with a list of keywords, or with image or video samples (depending on the content type).

It has a very serious potential for over-blocking (consider blocking all references to “Essex” based on the keyword “sex”; consider blocking Wikipedia articles or biology texts related to human reproduction), and for under-blocking (website operators can simply avoid known keywords, or use creative spelling, for instance: “s3x”).

Combating under-blocking by extending keyword lists only exacerbates the over-blocking problem. Combating over-blocking with complicated keyword rule-sets (i.e. “sex, but only if there are white-space characters around it”) only makes the filter easier for website operators to circumvent (i.e. “sexuality” instead of “sexual”).
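
Both failure modes are easy to demonstrate with a deliberately simplistic sketch of such a filter (real filtering equipment is more elaborate, but fails in the same two directions):

    BLOCKED_KEYWORDS = ("sex",)

    def keyword_blocked(text: str) -> bool:
        # Block if any keyword appears anywhere in the text.
        lowered = text.lower()
        return any(keyword in lowered for keyword in BLOCKED_KEYWORDS)

    print(keyword_blocked("Essex county council meeting"))  # True  - over-blocking
    print(keyword_blocked("hot s3x pics"))                  # False - under-blocking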

List handling costs are low, but this method requires huge computing and bandwidth resources, as each and every data-stream on the network needs to be inspected, scanned and compared to keywords and samples. It is especially costly for images, videos and other non-text media.

Users still can circumvent the block in several ways.

Hash-based blocking:
over-blocking probability: low
under-blocking probability: high
required resources: very high
list handling cost: high
circumvention: medium
employs DPI: yes

Hash-based blocking uses deep packet inspection to inspect the contents of data-streams, hashes them with cryptographic hash functions and compares the result against a database of hashes to be blocked. It has a low potential for over-blocking (depending on the quality of the hash functions used), but a very high potential for under-blocking, as a single small change to the content changes the hash, and the content is then not blocked.
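
This fragility is easy to see with Python's standard hashlib: appending a single byte to the content produces a completely different digest, so a blocklist entry no longer matches.

    import hashlib

    original = b"content that is on the hash blocklist"
    modified = original + b" "  # a single appended byte

    print(hashlib.sha256(original).hexdigest())
    print(hashlib.sha256(modified).hexdigest())  # an entirely different digest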

Resource needs here are very high: not only do all data-streams need to be inspected in real time, they also need to be hashed (hash functions are computationally costly) and the hashes compared against a database. The costs of handling the hash-lists are also considerable.

Users can circumvent the block in several ways.

Hybrid solutions (i.e. IP-based + hash-based):
over-blocking probability: low
under-blocking probability: high
required resources: medium
list handling cost: high
circumvention: medium
employs DPI: yes

To compromise between high-resource, low-over-blocking hash-based blocking and low-resource, high-over-blocking IP- or DNS-based solutions, a hybrid solution might be proposed. Usually this means there is a list of IP addresses or domain names for which hash-based blocking is enabled, so it operates on only a small part of the content. This method does employ deep packet inspection.

Required resources and list handling costs are still considerable, and the under-blocking probability is high, while circumvention by users is no harder than for a hash-based block.


There are several circumvention methods available to users willing to access blocked content.

Custom DNS server settings can be used to easily circumvent DNS-based blocking. This requires almost no technical prowess and can be used by anybody. There is a number of publicly available DNS servers that can be used for this purpose. There is no way to easily block the use of this method without deploying censorship methods other than pure DNS-blocking.

Proxy servers, especially anonymous ones, located outside the area where a censorship solution is deployed can quite easily be used to circumvent any blocking method; users can modify their operating system or browser settings, or install browser add-ons that make using this circumvention method trivial. It is possible to block the proxy servers themselves (via IP-blocking, keyword blocking, etc.), but it is infeasible to block them all, as they are easy to set up.

Virtual Private Networks (including “poor man’s VPNs” like SSH tunnels) require more technical prowess and usually a (typically commercial) VPN service (or an SSH server) outside the area where blocking is deployed. Blocking all VPN/SSH traffic is possible, but requires deep packet inspection and is a serious problem for the many legitimate businesses using VPNs (and SSH) as daily tools of trade, allowing their employees to access corporate networks from outside physical premises via a secured link over the Internet.

TOR, or The Onion Router, is a very effective (if a bit slow) circumvention method. It is quite easy to set up – users can simply download the TOR Browser Bundle and use it to access the Internet. Due to the way it works it is nigh-impossible to block TOR traffic (it looks just like vanilla HTTPS traffic), to the point that it is known to allow access to the uncensored Internet to those living in areas with the most aggressive Internet censorship policies – namely China, North Korea and Iran.

None of the censorship solutions is able to block content on darknets – virtual networks accessible anonymously only via specialised software (for instance TOR, I2P, FreeNet) and guaranteeing high resilience to censorship through the technical composition of the networks themselves. Because darknets are practically impossible to block entirely and do not allow for any content blocking within them, they are effectively the ultimate circumvention method.

The only downside to using darknets is their lower bandwidth.

Indeed, deploying Internet censorship pushes the to-be-blocked content into darknets, making it ever harder for law enforcement to gather evidence and for researchers to gather data on the popularity of a given type of censored content. This is further discussed in the philosophical arguments sub-section.


While not necessarily a circumvention tool, TLS/SSL defeats any censorship method that relies on deep packet inspection, as the contents of data-streams are encrypted and readable only to the client machine and the host it is communicating with – and hence unavailable to the filtering equipment.

TLS/SSL provides end-to-end encrypted, secure communication; initially used mainly by banking and e-commerce sites, it is now employed by an ever-rising number of websites, including social networks. Accessing websites with https:// instead of http:// makes use of TLS/SSL; it is, however, also used to provide a secure layer of communication for many other tools and protocols (for instance, e-mail clients or some VoIP solutions).

Once a DPI-based censorship solution is deployed, affected users and services will gradually and naturally gravitate to this simple yet very effective solution. This means that any DPI-based censorship scheme must handle TLS/SSL communication. This can only be done in two ways:

  • block it altogether;
  • perform a man-in-the-middle (or MITM) attack on encrypted data-streams.

Blocking is not hard (TLS/SSL communication streams are quite easy to filter out). However, as TLS/SSL is a valid, legal and oft-used way of providing security for users of legitimate businesses, especially banks, this is not a viable solution: it would cause outrage among users, security researchers and financial companies (or, indeed, all companies relying on TLS/SSL for their security needs).

Performing a man-in-the-middle attack means getting in the way of an encrypted data-stream, decrypting it, checking the contents, re-encrypting them and sending them on to their destination, preferably in such a way that neither the client nor the server notices the intrusion. With properly created and signed certificates this is only viable if the censorship equipment has a special digital certificate that allows for it.

There have been instances where such certificates leaked from compromised Certificate Authorities (CAs) and were used by oppressive regimes for MITM attacks on TLS/SSL; also, some filtering equipment takes advantage of such certificates – albeit provided wilfully and legally by a CA co-operating with the given filtering equipment vendor – to perform clandestine MITM attacks on the surveilled network.

Performing MITM on TLS/SSL is a very resource-intensive operation and only adds to the already high costs of DPI-based censorship schemes – filtering devices equipped with digital certificates that allow clandestine MITM are considerably more costly.

A different argument carries more weight here, however. Performing a man-in-the-middle attack is even more intrusive and violating than deep packet inspection. It is a conscious act of breaking encrypted communication in order to get to its contents and then covering one’s tracks in order to make the communicating parties feel safe and unsurveiled. There are not many more hostile digital acts a government can perform on its citizenry.

Moreover, using MITM on all connections in a given network dramatically lowers the level of trust. Citizens stop trusting their banking, financial and e-commerce websites, and all other websites that employ TLS/SSL – which has a huge potential to hurt the whole e-economy.

It also defeats the purpose of using TLS/SSL-encrypted communication to provide security. By doing so, and by lowering users’ trust towards TLS/SSL in general, it makes them more vulnerable and insecure on the Internet.

Finally, a clandestine MITM can be discovered by simply removing the Certificate Authority that issued the certificate used by the filtering equipment from the certificate store used by client software – something users can do themselves. This has the side-effect that all connections to websites legitimately using certificates from that CA, as well as all connections under MITM attack, will be marked with an “invalid certificate” error by client software (e.g. browsers).
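
A minimal sketch of the underlying idea (certificate pinning; the hostname and the pinned value below are placeholders): record the fingerprint of a site's certificate while outside the censored network, then compare it with what is actually presented. Filtering equipment performing MITM has to substitute its own certificate, which changes the fingerprint.

    import hashlib
    import socket
    import ssl

    def cert_fingerprint(host: str, port: int = 443) -> str:
        # Fetch the certificate the server (or a MITM box) actually presents.
        context = ssl.create_default_context()
        with socket.create_connection((host, port)) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                der_cert = tls.getpeercert(binary_form=True)
        return hashlib.sha256(der_cert).hexdigest()

    # Placeholder: a fingerprint recorded earlier, over a trusted connection.
    EXPECTED_PIN = "0000...0000"

    if cert_fingerprint("example.com") != EXPECTED_PIN:
        print("Certificate changed - possible MITM by filtering equipment")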

Economic arguments

The economic arguments to a large extent stem from the technical issues involved. The infrastructural changes needed would be costly, the cost of the required amounts of high-end filtering equipment would be astronomical, and there are labour costs involved, too (hiring people to select content to be blocked, and to oversee the equipment). The costs, of course, differ from scheme to scheme and from country to country, but are always considerable.

It is also very important to underline the hidden costs that ISPs (and hence their clients) would have to cover in many such schemes. If ISPs are required to implement content filtering in their networks, they will have to foot the bill. Once this is made abundantly clear, the ISPs might become strong supporters of the anti-censorship cause.

If the scheme entails the government paying the ISPs for implementing the measures, it will be hard to get them on board; but then simply estimating the real costs of such measures and getting the word out that the taxpayer will pay is a very strong instrument in and of itself.

Either way, requiring transparency, asking the right questions about the costs and who gets to pay them, making cost estimates and publishing them and the answers is very worthwhile.

It is easy to overlook the broad chilling effects of Internet censorship schemes on the whole Internet-related economy, and the general economic costs involved. Uncertainty about the law, about the blocking rules (which cannot be clear and unambiguous, for reasons discussed below), about whether a website – in many cases an investment, after all – will be available to its intended public at all, and about the ways of appealing an unjust content block will disincentivize businesses from investing in a web presence.

Hence, a whole industry will take a blow, and with it the whole economy.

Philosophical arguments

This topic is rife with philosophical issues. These for the most part boil down to the question of whether or not the end (i.e. blocking child pornography, or any other excuse) justifies the means (infrastructure overhaul, huge costs, infringement of the freedom to communicate and the right to privacy).

Of course the main axis of anti-censorship philosophical arguments is civil rights. The right to privacy, freedom of speech and secrecy of correspondence are mostly codified in international treaties and provide a very strong basis here.

However, to make censorship proponents (and the general public!) understand the civil rights implications of their ideas, it is crucial to fight the distinction between “real world” and “virtual world”.

For every technically literate person this distinction does not exist, and it is clear these are just figures of speech. For most Internet censorship proponents, however, the distinction feels real. Indeed, this impression is the enabler: it implies that current laws, regulations, civil rights statutes, etc. do not apply in the “virtual world”. That world is perceived as a tabula rasa, a completely new domain where the rules are yet to be created, and hence where it is okay to introduce solutions that would be considered unacceptable in the “real world”.

Physical world examples are very helpful here – the classic one being the postal service opening, reading and censoring our paper-mail as a metaphor of Internet censorship and surveillance.

There is also the question of the “real-ness” of the “virtual world” for Internet censorship proponents. Because for them the Internet is a “virtual” space, censorship and surveillance there do not “really” harm anybody, do not “really” infringe upon “real” people’s civil rights. Curiously, pro-censorship actors are incoherent here – when they speak of the harm done by the issue they propose censorship as a solution to (i.e. pornography), they see it as “real” harm done to “real” people.

It is well worth pointing out in such a debate that either the harm in the “virtual world” is “real” – and hence Internet censorship is unacceptable; or it is not “real” – in which case censorship is unneeded.

The question of the legality of the acts that the blocked content relates to is also a valid one. There are two possibilities here:

  • the acts are legal themselves, while the content is not;
  • the acts and the content are both illegal.

The former case is hard to argue even for proponents of an Internet censorship scheme. Why should certain content be blocked if the acts it depicts or relates to are not illegal? The arguments used here will orbit around the idea of censoring the content as a first step towards making the acts illegal, and they should be vehemently opposed.

In the latter case (which is, for example, the case of child pornography) one can argue that it is of crucial importance to stop the acts from happening (in this case, the sexual abuse of children) – and that blocking the content is in no way conducive to that aim.

It does not directly help stop the acts; it does not help find the culprits; it even makes it harder for them to get caught – often, information contained in the content (GPS coordinates encoded in pictures; ambient sound in videos) or related to the means of distribution (the owner of the server’s domain name; IP logs on the hosting server) is crucial to establishing the identity of the culprit, and blocking the content removes the possibility of using such data.

Blocking such content is sweeping the problem under the rug – also in the sense that the problem becomes less visible but in no way less real. Policy makers and the general public can become convinced that the problem is “solved” even though it still exists under the radar (i.e. children are still sexually abused, even though content related to that is harder to find on the Internet due to blocking). This results in less drive to find solutions to the real problem, and less data for researchers and law enforcement.

Another argument is related to the lists of content to be blocked. There are three issues here:

  • how secure are the lists?
  • what are the rules of blocking the content?
  • who creates, revises and controls them?

If the lists contain addresses, URLs or other identifying information about the “evil” content – and since no blocking method is thoroughly effective (there are ways around every one) – these lists themselves will obviously be in high demand among those interested in such content. Simply put, they will be complete wish-lists for them. And as such, they are bound to leak.

There is a good argument to be made that the very creation of such lists (which are necessary for censorship schemes) is in and of itself a reason not to introduce such measures.

Because the lists themselves cannot be made public (for all the reasons mentioned above), there is no public oversight of their contents – and hence a serious concern about over-blocking, or about blocking content that in no way fits the intended description of content to be blocked. This is a slippery slope: once such a system is introduced, more and more types of content will get blocked.

As far as the rules are concerned, it is often hard to precisely define the content that is supposed to be blocked. In the context of child pornography, for example, establishing the age of the person in a picture is often a challenge, even for experts; should pictures of young-looking adults also be blocked? And is it pornography if it is not sexually explicit – should any picture of a young naked person be blocked? What about sexually explicit graphics or drawings depicting apparently under-age persons – should they get blocked, too? If so, what about stories? Then we land in a situation where genuine works of art (for example, Vladimir Nabokov’s Lolita) should apparently be blocked.

And if even viewing the blocked content is illegal, under what legal framework are the list creators able to review it? They would have to view it in order to review it, so they would be breaking the law. If the law were to permit them to do so, why, and on what grounds? If it is bad for everybody, it is certainly also bad for them…

The final list-related issue here can be shortened to the well-known quip, “who watches the watchers”. People who control the blocking lists have immense power and immense responsibility. As there is no oversight, there is ample room for mischief and manipulation. Especially when the most vocal proponents of some of the Internet censorship schemes are not exactly the most consistent themselves.

The lists’ secrecy gives rise to yet another issue – the lack of due process. If the rules of blocking are not clear and unambiguous (they can’t be), and hence there is a serious concern that content will be blocked that should not have been (there is), how exactly can the operator of incorrectly blocked content appeal the blocking if the lists are secret? How do they even learn about the blocking, so as to distinguish it from a technical error on the network?

This can cause serious financial losses, and there should be a way for content operators to be informed that their content is being blocked, why it is blocked, and what their options for challenging the blocking are. However, due to the secrecy of the process and of the lists, this information cannot be provided – not to mention the additional cost of informing every single entity whose content is blocked.

Also, a surprisingly large number of pro-censorship actors who do not have ulterior motives treat any civil-rights-based criticism of their position personally, as if the opponents were suggesting that they do indeed have ulterior motives and are going to use the censorship and surveillance infrastructure for their own political gain.

This is something that eluded me for a long time. Only after a meeting at which I used “the next guy” argument did a certain pro-censorship actor (a high-level representative of the Ministry of Justice) understand that we were not attacking him personally, and that there are indeed valid civil rights issues at hand.

“The next guy” argument is a very nifty way of defusing an emotionally loaded situation like that. It basically states that nobody assumes that the person (politician, civil servant, etc.) we are currently discussing Internet censorship with has ulterior motives and will abuse the system once introduced – however, nobody knows who “the next guy”, the next person to hold that office or position, will be. And it is against their potential abuse that we are protesting today.

A special case of government-mandated opt-out Internet censorship is also worth considering. Such schemes have been proposed around the world (most notably in the UK), and are constructed so as to address some of the civil rights issues involved with blocking content that is legal but unsavoury (porn, for instance).

While the proponents of such measures claim that they completely solve these issues, this is not the case: opting out means that individuals willing to access the unsavoury content have to divulge their data to their ISPs or to the content-blocking operators, and hence be formally associated with that content. This is not something many would be willing to do, even though they would indeed want the access.

A successful line of argument against opt-out is to propose a similar, but opt-in, solution. This would give a block on unsavoury content to those who want it, without creating the situation described above. However, instituting even such a block on a central level could be a stepping stone towards mandating a central censorship solution (as the costs and technical difficulties would be similar, if not the same), and hence opt-out blocking should be opposed entirely, with opt-in kept as a last-resort proposition.

Emotional arguments

The basic strategy is to call things by their real names – removal or blocking of content without a court order is censorship, and due to the technical make-up of the Internet it is only possible with complete surveillance. There is no way around these terms, and censorship opponents can and should use them widely when speaking about such ideas. Again, using paper-mail censorship and surveillance metaphors (private mail being opened, read and censored at post offices) is very important to convey the seriousness of the issue.

Based on the cost of such solutions, an emotional argument can be made that the money could be much better spent – for example, on hospitals, road safety programmes or orphanages. There is no shortage of problems that need solving, and the money should go there instead of financing morally and legally questionable, technologically unfeasible censorship ideas.

It can also be argued that Internet censorship introduces collective punishment – all Internet users and Internet businesses are punished for the actions of a small group of criminals. The money and resources used for Internet censorship should instead be used to punish the guilty, not the general public.

Finding organisations that try to solve the very problem the Internet censorship scheme is officially trying to solve (e.g. the sexual abuse of children, the creation of child pornography), but oppose censorship as a method, is also viable and advisable. It is quite possible that such an organisation exists (for instance, in Poland the KidProtect.pl foundation, which fights child sexual abuse, was very vocally opposed to Internet censorship, for many of the reasons stated in this text), and having one as an ally is extremely effective.

If everything else fails, and as a last resort, an ad personam argument can be made that a given proponent of Internet censorship measures has a hidden agenda and wants to introduce them for their own personal aims. Using this argument – especially early in the debate – all but ensures that the person (and their community) will become hostile and an even stronger proponent of censorship measures than before; in general, it is not recommended at all.

Useful analogies

These analogies are very useful in conveying the technical set-up of the Internet and the technical issues around censoring it.

IP address:
a physical street address; it can lead to several different businesses and individuals (i.e. domains).

Domain name:
a name (either business or personal) that identifies a particular business or person under a given physical street address (i.e. IP address).

Domain name resolution:
a process of “resolving” a personal or business name to a physical street address (i.e. IP address), so that a package (i.e. data) can be delivered to them.

Deep packet inspection:
opening physical mail, breaking the envelope and reading the contents in order to be able to decide whether or not to censor it (as opposed to just reading the addressee and the sender data available on the envelope).

Proxy:
asking somebody else to send the package (i.e. data) for you and forward you the answer of the addressee.

HTTPS:
sending encrypted snail-mail.

Man-in-the-Middle:
opening encrypted snail-mail in transit, decrypting, reading, re-encrypting and re-sending it to the addressee. Usually an attempt is made to do this in a clandestine way, so that neither the sender nor the addressee is aware of it.
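
For the technically inclined, the “street address” analogies map directly onto a few lines of code. A minimal sketch, assuming Python and its standard library (the domain is just an example):

import socket

# Domain name resolution: "resolving" a name to a physical street
# address (an IP address), so that a package (data) can be delivered.
ip = socket.gethostbyname("example.com")
print(ip)

# Note that many "businesses" (domains) can share one street address
# (one IP): this is exactly why blocking by IP address over-blocks,
# taking down every site hosted at that address, not just the
# targeted one.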

Useful quotes

A very good collection of quotes useful in the context of anti-censorship debates is available on WikiQuote; it is also worth looking through civil rights and free speech related quotes there. Some highlights are below.

They who can give up essential liberty to obtain a little temporary safety, deserve neither liberty nor safety.
Benjamin Franklin

I disapprove of what you say, but I will defend to the death your right to say it.
Evelyn Beatrice Hall

If we don’t believe in freedom of expression for people we despise, we don’t believe in it at all.
Noam Chomsky

The Net interprets censorship as damage and routes around it.
John Gilmore

Border conditions for preserving subjectivity in the digital era

This is an ancient post, published more than 4 years ago.
As such, it might not anymore reflect the views of the author or the state of the world. It is provided as historical record.

Please note: in this text I use the term “subjectivity” to mean the right to be recognised as an acting subject, as opposed to being treated as a mere object of the actions of others.

Thanks to the hospitality of this year’s Citizen Congress, I had the pleasure of co-hosting within its framework – with Katarzyna Szymielewicz of the Panoptykon Foundation and Jarosław Lipszyc of the Modern Poland Foundation – a workshop titled “How to be a subject in the cyberworld”.

A broad group of practitioners, technicians, lawyers, new media experts, pedagogues and philosophers pondered for almost 3 hours how one can defend oneself from being reduced to an object, product or data point on the Net (and outside of it).

Subjectivity – what is that, anyway?

In order to get us on the right track, Prof. Paweł Łuków summarized what subjectivity is. For our needs we can assume that it means recognizing each human being as an independent actor, a decision-making subject who has their own aims and wants, and acts on them in their own particular way.

In other words, we are talking about the subject-object distinction. Our history has no shortage of instances of subjects being treated as objects, and we can all agree that subjectivity is a value well worth defending.

Subjectivity in the digital era

The digital era, with its new means and possibilities of communication, brings with it a wealth of consequences for subjectivity. First of all, we can purposefully evade the recognition of our subjectivity, and with it the responsibility for our actions: we can consciously rescind our subjectivity.

This alone allows for unsurpassed freedom of speech. Never in history could we so easily hide and voice our opinions anonymously. Paradoxically, giving up subjectivity and hiding our identity makes it easier to express ourselves fully and freely. One does not have to fear repercussions and repression for an opinion when nobody knows that they are its author.

On the other hand, this creates an air of impunity. The fine line between freedom and waywardness gets crossed – trolling and hate speech are good examples here.

Regardless, however, of our conscious decision to keep or rescind our subjectivity, we might become unwilling objects of the actions of other actors – actors that might not be interested in recognizing and respecting our subjectivity. That is where dangers to privacy arise: we are treated as data points, or as assets to be monetised.

We expect public administration to protect us from dangers coming from both of these directions – from persons exploiting the impunity afforded by the wall of anonymity, and from actors purposefully disrespectful of our subjectivity. Yet another paradox arises here: even though the democratic system is built upon respect for the subjectivity of each and every citizen, systemic solutions very often ignore it (for example, in the name of effectiveness). In other words, attempts by public administration to defend subjectivity might create dangers of their own to it.

Mode of work

The workshop was divided into four parts. In the first, we tried to identify the problems that have to be faced when defending subjectivity in the digital era, and the threats to it; in the next three, we attempted to find solutions to these problems in the areas of education, infrastructure and law.

Each of these parts took much longer than we had predicted. The topic is much more complicated and vast than one would expect.

Please note: the term “media” is used here broadly, covering also social networks and social media, internet forums, blogs and other methods of electronic communication.

Problems and threats

The problems and threats to subjectivity that we identified together during the workshop are:

  • dangers to privacy;
  • obsolescence of law with regard to technology (e.g. copyright law);
  • lack of citizen control over communication and information channels;
  • centralisation (of infrastructure, of communication, of information);
  • lack of effective guarantees of basic human rights;
  • information asymmetry (e.g. between clients and software companies, or users and social network operators);
  • insufficient education and lack of competence among citizens;
  • lack of understanding of new media (by citizens, policymakers, courts… – e.g. fetishization of the Internet, the “virtual reality”);
  • obstacles to empathy (due to the rising proxyification of communication and the impossibility of sending and receiving non-verbal signals; trolling is one effect of this);
  • the tempo of change effectively surpassing our ability to learn and accommodate it;
  • public infrastructure shortcomings (including lack of innovation within the infrastructure);
  • censorship and self-censorship (including censorship that is completely unregulated – like terms of service on privately-owned social networks);
  • marketization of education and information;
  • language barriers.

While this eluded us at the workshop, I believe one more problem should also be indicated here:

  • bundling, which complicates or makes outright impossible the correct comparison of services (e.g. mobile plans, Internet access, etc.).

Below are solution ideas and border conditions for protecting subjectivity, divided into three areas.

Potential solutions and border conditions

Education:

  • media education (in kindergartens, in schools, for adults, for teachers);
  • open educational resources (used in education and actively developed);
  • open educational tools (as above);
  • teaching methods instead of tools (a change of paradigm);
  • bringing education programmes up to speed;
  • teaching critical thinking;
  • fostering a sense of self-worth;
  • teaching how to control the media;
  • language competence development (with regard to both the mother tongue and foreign languages);
  • “open source” education, abandoning the 19th-century educational model (fostering student participation in the education process, instead of the current “master-student relationship” model);
  • breaking down the barriers;
  • education on law, especially on basic human rights.

Infrastructure:

  • decentralisation of infrastructure (on several levels);
  • broadly-adopted open standards (including within social networks, allowing for interoperability of different providers);
  • privacy by default;
  • privacy by design;
  • public, freely available access to broadband Internet (including public WiFi in public libraries);
  • open repositories (of data, open access, resources, publicly-financed source code, etc);
  • transparency of information about services;
  • civic involvement in infrastructure (designing, building, upkeep – including support for hackerspaces/fablabs).

Law:

  • transparent and functional public consultation system;
  • evidence-based policy and thorough regulatory impact analysis;
  • copyright reform;
  • easy-to-understand interpretations of and commentary on the law, made available (and created within the legislative process);
  • government-mandated, unequivocal, binding law interpretation;
  • law and regulation system analysis and review in order to remove inconsistencies;
  • laws guaranteeing network neutrality;
  • guarantees against disconnection of citizens (e.g. making n-strikes laws illegal);
  • right to analogue interaction with public administration;
  • right to move private data (e.g. between social networks);
  • right to be forgotten (removing/deleting data);
  • right to use any and all services and applications as long as they are not actively harming the network.

The above catalogues of problems and potential solutions should not be treated as final – rather, as the start of a discussion. A discussion that is sorely needed, and for which they can be a good basis.

Social blogosphere

This is an ancient post, published more than 4 years ago.
As such, it might not anymore reflect the views of the author or the state of the world. It is provided as historical record.

I already wrote about a Diaspora-based comment system, but here’s a question that I have been pondering for some time now: why not ramp it up a notch?..

Why not make a whole blog on Diaspora or Friendica (or any other federated, decentralized social network)?

The functionality is all there – you have your:

  • user accounts;
  • comments;
  • profile section;
  • permalinks;
  • media (videos/images) embedding.

Just change the theme to a more bloggy one and – hey, presto! – you have a full-featured blog. With a vengeance!

Added (social) value

Thing is, it’s not only a blog.

It works like one, it looks like one, it feels like one, but… it’s so much more! Other people, from other compatible services (i.e. other Diaspora pods and Friendica servers) can seamlessly connect to your blog, engage with it, comment on it and (if you allow it) re-share your posts.

Even more, you have the power of aspects at your disposal – you can publish some content only to certain, selected people you trust. Have you ever wanted to create a “private blog” that only a handful of friends can access? Well, now you can.

Suddenly, the whole libre, decentralized, federated social network is your audience. Everybody with an account on any of several pods or servers can be a first-class citizen on your blog or website. This, of course, works both ways: all of those that create accounts on your website to comment on the entries there are automagically first-class citizens on the whole distributed, federated social network.

Did you say: websites?

Yes, indeed I did. Because if you need any kind of website that is supposed to have comments, you can obviously take advantage of this idea, too! Instead of using WordPress or Drupal, why not use Friendica with a nice custom bloggy theme and have your content already present on the libre side of social networking?..

Just imagine a whole Internet of websites that can connect and engage with each other – a decentralized and federated web where discussion and cooperation thrive, and which no single actor can censor, surveil or shut down!..

Embrace fragmentation

This is an ancient post, published more than 4 years ago.
As such, it might not anymore reflect the views of the author or the state of the world. It is provided as historical record.

When you’re in the IT business, fragmentation is bad. Whether it’s your platform that gets fragmented (hullo there, Android), or you’re the admin that has to support all the different versions of popular browsers (hullo, Firefox and Chromium o’ver. 9000), fragmentation is bad news and more work.

Fragmentation is also bad for social and political movements and organisations (formal and informal alike). An organisation loses people, and hence also the clout and political power to make the changes it wants to make. The movement splinters into small, irrelevant groups that cannot take on the Big League…

Only, it’s not true anymore.

Packaging

Have you ever tried choosing a mobile plan? I have. It’s daunting – and I used to work in a mobile tech R&D lab!

It’s daunting not because of the technology, though, nor because it would be hard to make choosing easier. It’s daunting because telecom companies work hard to package features and pricing into plans in a way that effectively makes it impossible to really compare and contrast plans between different operators.

You want Feature X? Okay, you’ll get it within the plan at A and B, but at C you will have to pay additionally for that; but, C has lower rates overall and Feature Y, offered only by C and A (for an additional price)! Ah, but B has Feature Z, very similar to Y (but just-not-the-same)… and so it goes.

This is also how mainstream political parties work. I agree with Policy X of Party A, and Policy Y of Party B; but A has Anti-Y-policy and B has Anti-X-policy even though X and Y can be compatible. The effect? I cannot in clear conscience choose a party that actually fits me.

Why? Because it’s all pre-packaged. You cannot get your pick of the issues, features, policies. You have to pick from packages containing some you agree with, some you don’t.

Social movements often work this way, too. You want to support Movement X because of Policy A? Well, that will mean you also support Movement X’s other policies, which you might not be too fond of. And while with mobile plans and (to a lesser extent) political parties you are bound to choose something, with social movements the effect is that most people choose not to choose. Packaging kills involvement.

Enter fragmentation.

Less is more

Suppose for a moment that we could choose to engage in furthering Policy A without having to support Policies B and C; and that Movement X, which engages in or coordinates efforts around Policy A, does so inclusively, inviting any and all to join in and help out, regardless of their support for the other policies Movement X stands for.

Suddenly, John Doe (a stern opponent of Movement X’s other policies, yet a supporter of Policy A) can feel invited to just help with this particular policy or issue. Net effect – one more supporter of Policy A!

This is exactly how Anti-ACTA worked in Poland. There were NGOs that might not see eye to eye on most things, and people from all walks of life and political affiliations, working towards a common goal. We embraced fragmentation, and were able to bring in many times more supporters than we could have dreamt of had we decided to exclude those we disagree with on other issues.

Anarchists and right-wing activists protested hand in hand, just months after fighting each other on the streets of Warsaw.

The narrower the issue, the better defined the goal – the more people can feel invited and welcome to help out.

Smart fragmenting

This does not mean that organisations or movements should suddenly narrow their scope down and start focusing on single issues only. They should still be as comprehensive as they see fit. Some issues cannot be fought for in separation from the bigger picture.

But there certainly are issues that can be well-defined in a way that makes them at least partially self-contained. And then action can be organized around such an issue in an inclusive and welcoming way. This not only helps further the given issue, but also stimulates discussion between different people and different organisations.

Discussion that can lead to better mutual understanding, and better dialogue on the more general level.

SERVICES.TXT

This is an ancient post, published more than 4 years ago.
As such, it might not anymore reflect the views of the author or the state of the world. It is provided as historical record.

Hosting multiple different services on the same server and under the same domain name used to be simple. Set up the services, optionally add MX or SRV records to the DNS zone if they are run under a different IP, and you’re done. Hosting XMPP, SMTP, POP, IMAP, SSH and HTTP on the same domain was trivial, because they all use different ports by default.
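
For illustration, a sketch of what such DNS records might look like in a zone file – the host names here are placeholders, not taken from any real set-up:

; mail for the domain is handled by a dedicated host...
example.com.                    IN MX   10 mail.example.com.
; ...and XMPP clients are directed to another host and port
_xmpp-client._tcp.example.com.  IN SRV  5 0 5222 chat.example.com.

Each service announces itself via its own record type, name or port, so the web server’s address stays untouched.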

Thanks to the web-2.0-isation of the Internet, however, everything now seems hell-bent on using HTTP/HTTPS as its transport. This means, basically, that you cannot (for example) have StatusNet and Diaspora on the same server – simply because they both use port 80 (or, with SSL/TLS, port 443) and have similar API paths. In other words, there is no way to distinguish between an API request to one or the other if they are both set up under the same domain name.

Thus, in this glorious 21st century we have made a huge step backwards – we now can’t really host multiple services under the same domain anymore.

Let’s fix it!

Introducing services.txt. Just like robots.txt and humans.txt, this human-readable file sits in the root directory of the webserver under a given domain and informs any and all interested parties (including other instances of a given service – and that’s the crux!) that when they are looking for service X under this domain, they should use a given path.

Proposed syntax (but hey, let’s talk about it!):

servicename/apiname:version:path

  • servicename/apiname: the name of the service or API a given entry describes; in our example it would be ostatus (StatusNet implements the OStatus protocol) or diaspora,
  • version: either a version number (e.g. 1.0), or an asterisk to denote that any version will do,
  • path: either a local path (e.g. /servicename), or a full URL if the service resides on a different server and/or port.

All lines starting with a hash (#) are considered comments and ignored.

So, if I had a StatusNet (ver. 1.0) instance and a Diaspora instance running under this domain name, with the former running under /statusnet and the latter actually running under the subdomain diasp.rys.io and accessible only via HTTPS on the non-standard port 9443, my services.txt file would look thus:

# example services.txt file
# for rys.io
ostatus:1.0:/statusnet
diaspora:*:https://diasp.rys.io:9443

Of course, having a service entry in services.txt would be optional – that is, if another instance of a given service wants to contact my instance and can’t find an entry for it (or, indeed, there is no services.txt file available at all), it just proceeds in the default manner for that service.
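
To make the proposal concrete, here is a minimal sketch of how a client might perform such a lookup – assuming Python; the function name and overall shape are mine, not part of any existing implementation:

import urllib.request

def lookup_service(domain, name, version="*"):
    # Fetch services.txt from the root of the given domain; if it is
    # missing, return None and let the caller proceed in the default
    # manner for the service in question.
    try:
        with urllib.request.urlopen("https://%s/services.txt" % domain) as f:
            text = f.read().decode("utf-8")
    except OSError:
        return None
    for line in text.splitlines():
        line = line.strip()
        # skip blank lines and comments
        if not line or line.startswith("#"):
            continue
        # split into at most three fields, so that colons inside
        # the path (e.g. https://...:9443) survive intact
        fields = line.split(":", 2)
        if len(fields) != 3:
            continue  # malformed entry, ignore
        svc, ver, path = fields
        if svc == name and (ver == "*" or version == "*" or ver == version):
            return path
    return None

# With the example file above:
#   lookup_service("rys.io", "ostatus", "1.0")  ->  "/statusnet"
#   lookup_service("rys.io", "diaspora")        ->  "https://diasp.rys.io:9443"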