This is an ancient post, published more than 4 years ago.
As such, it might no longer reflect the views of the author or the state of the world. It is provided as a historical record.
During the last few years I have been involved in arguing against several
attempts at introducing Internet censorship in Poland. Some of these
were very local and went almost unnoticed outside Poland (like
Rejestr Stron i Usług Niedozwolonych – the Register of Unlawful
Websites and Services, in 2010); some were part of a larger
discussion (like the implementation debate around EU directives that
allowed, but did not mandate, introducing child porn filters in EU member
states); one made a huge splash around the world (I write about
anti-ACTA campaign efforts here).
At this point I have gathered considerable experience in this area. With
censorship ideas gaining support even in apparently democratic countries,
I have decided it is time to get it all in one place for others to enjoy.
The Ground Rules
There are some very important yet simple things one has to keep in
mind when discussing censorship ideas. They can be best summarized by an
extended version of Hanlon’s
Razor:
Never attribute to malice that which is adequately explained by
incompetence, laziness or stupidity.
More often than not the fact that somebody proposes or supports
Internet censorship is not a result of malicious intent – however
tempting such an assumption might be. Usually such support stems from
the fact that people (including policymakers):
- do not understand how the Internet works;
- do not see the connection between their idea and censorship;
- do not grasp the technical problems and the cost of implementing such ideas;
- do not see or understand the danger of implementing them.
There are two arenas one has to win in order to have a chance of
striking down such ideas:
- logical argumentation based on technical issues;
- purely emotional public debate.
The former is the easier one, and it can provide a good basis for the
latter – which is the endgame, the crucial part of winning such
arguments.
The Adversaries
There are usually five main groups of people that one has to discuss
with in such a debate:
- politicians;
- civil servants;
- law enforcement, uniformed and secret services;
- genuinely involved (if sometimes misguided) activists;
- business lobbyists.
There is also a sixth, crucial group that has to be swayed to win:
the general public. To communicate with that group, you also need the
media.
Politicians are very often the first to call for
Internet censorship, and as a rule they are in it for short-term political
gain, not for long-term social change. The social change bit is just an
excuse; the real reason they float such ideas is more often than not
politics – gaining popular support or getting their names into the
mainstream media.
Sometimes it is enough to convince them personally; sometimes what is
needed is the one argument a politician always understands – an appeal
to the authority of the general public, which needs to be vocal against
censorship. It is usually not wise to assume they have malicious intent
(e.g. stifling opposition); this only complicates discussing with
them.
Civil servants usually do not have strong feelings
one way or the other, or at least they are not allowed to show them;
they do what their superiors (the politicians) tell them to do. There is
no gain in alienating them – if you get militant or hostile towards
them, they might then start actively supporting the other side. They are
very often not technical, they might not understand the intricacies of
the technology involved; they also might not grasp or see the civil
rights implications.
Law enforcement, uniformed and special services
treat such ideas as a power grab or at least a chance to get a new tool
for doing their jobs. They usually understand the technical issues, and
usually don’t care about the civil rights issues involved. They see
themselves as the defenders of law and order, and implicitly assume that
the end justifies the means – at least in the context of Internet
censorship and surveillance. They will not be swayed by any arguments,
but they do not usually use emotional rhetoric either.
Pro-censorship activists feel very strongly about
some particular social issue (child porn; gambling; porn in general;
etc.) and believe very deeply that Internet censorship is a good
solution. They have a very concrete agenda and it is very hard to sway
them, but it is possible and worth a try. One should not assume
malicious intent on their part, they genuinely feel that Internet
censorship would bring capital-G Good to the world.
They usually do not understand the technical issues nor costs
involved in implementing such ideas, although they might understand the
civil rights issues. If they do not grasp them, explaining these to them
might be a very good tactic. If they do, they might make a conscious
choice of prioritising values (e.g. “one child that does not see porn on
the Internet justifies a small infringement of freedom of speech”).
When made aware of the costs of implementation, they will claim that
“no price is too big to pay”.
Business lobbyists tend to be present on both sides.
The lobbyists for the ISPs will fight Internet censorship, as it means
higher costs of doing business for them – however, as soon as there are
cash incentives on the table (e.g. public money for implementing the
solutions), many will withdraw their opposition.
There are usually not many pro-censorship lobbyists, at least not at
public meetings. They cannot be swayed, and will support their
position with plenty of “facts”, “fact sheets”, “reports”, etc., which
on closer consideration turn out to be manipulative, to say the
least. Taking a close look at their arguments and being prepared to
strike them down one by one tends to be an effective tactic, if a
resource-intensive one. It might be possible, however, to debunk the first
few “facts” they supply and use that as a reason to dismiss the rest of
their position.
The general public is easily swayed by emotional
arguments – like “think of the
children”. However, due to the nature of such arguments and the fact that
the general public does not, en masse, understand the technical issues
involved, it is not easy to make a case against Internet censorship,
especially if the public is not at least somewhat opposed to censorship and
surveillance in general.
It is, nevertheless, crucial to have the public on your side, and for
that one needs strong emotional arguments, and very strong factual,
technical arguments to weaken the emotional pro-censorship
arguments.
In order to be able to communicate with the general public
you need the media. It is crucial to have high-quality press
releases, with all the needed information provided within (so that it is
as easy as possible for the media to run with the material).
It is also very important to remember that media will distort, cut,
twist information and quotes, and take them out of context. This, also,
should not usually be attributed to malice, but to the way modern media
work, and to lack of technical expertise among journalists. Hence, the
language has to be thought-through and as clear (and as easy and
accessible for the casual reader) as possible. Or more.
Media communiqués should be short, succinct and to the point. This
helps them be understood by the general public, makes
it easier for the media to run the material, and makes it harder to
distort it.
When communicating with the media it is also helpful to try to maintain
political neutrality, by focusing on the issues rather than on party
membership or programmes; and to provide actionable items from time to
time – for example, open letters with specific and unambiguous questions
to the pro-censorship actors regarding legality, costs, technical
issues, civil rights doubts, etc., which (if run by the media) the
actors will be compelled to answer.
Each of these groups, and often each of the actors involved, needs to
be considered separately.
Each may be possible to sway with different arguments and in
different contexts – public meetings, with press and media, will put
pro-censorship politicians in hot water if there is a visible public
opposition; more private meetings are a better choice when the public is
generally pro-censorship but there are politicians or civil servants
that oppose it, or consider opposing it: sometimes all they need is a
good argument they could use publicly to support their position.
The Excuses
The reasons – or excuses – for a pro-censorship stance are usually
twofold:
Sometimes the social reasons given (e.g. child pornography or
pornography in general, gambling, religion-related issues, public order, etc.)
can be taken at face value as the real, factual reasons behind an
Internet censorship idea. This was the case several times in Poland, and
probably is the case in most European censorship debates.
Sometimes, however, they are just an excuse to cover a more
insidious, real political agenda (like censoring dissenting speech and
opposition, as in China, Iran or North Korea).
The crucial issue here is that it is not easy to tell whether or not
there is a political agenda underneath the social argumentation. And
while it is counter-productive to assume malice and such
political agenda in every case, it is also expedient to be aware of the
real possibility it is there, especially when the number of different
actors involved in such a debate is taken into account.
Social excuses
There are a number of (often important and pressing) social issues
that are brought up as reasons for Internet censorship, including:
- child pornography (this is by far the most potent argument used by
censorship supporters, and it is bound to show up in a discussion sooner
or later, even if it starts with a different topic – it is wise to be
prepared for its appearance beforehand);
- pornography in general;
- gambling;
- addictions (alcohol, drugs available on the internet, allegedly also
to minors);
- public order (this one is being used in China, among others);
- religion-related;
- libel laws;
- intellectual monopolies;
- local laws (like Nazi-related speech laws in Germany).
The crucial thing to remember when discussing them is that no
technical solution ever directly solved a social problem, and there is
no reason to believe that the technical solution of Internet censorship
would solve any of the social issues above.
Censorship opponents also have to be prepared for the inevitable
adding of new social excuses in the course of the debate. For example,
in Poland with the Register of Illegal Sites and Services, the Internet
censorship idea was floated due to anti-gambling laws and foreign
gambling sites. During the course of the discussion there were other
excuses used to justify it, namely child pornography and drug-related
sites.
That’s why it is important not only to debate the merits of
the excuse, but to show that Internet censorship and surveillance is
never justified, regardless of the issue it is supposedly meant to
tackle.
It is worth noting, however, that adding extra excuses
for censorship can backfire on its proponents. If the anti-censorship
activists make the pro-censorship actors state clearly at the beginning
of the discussion (e.g. by invoking the “slippery slope” argument) that
the censorship shall be used for the stated purpose only, the later
addition of new excuses can be countered by simply pointing that out
and noting that they are already slipping down this metaphorical slope
even before the measures are introduced.
Political reasons
These are fairly straightforward. Being able to surveil and censor
all Internet communications (and with each passing day the importance of
the Internet as a communication medium rises) is a powerful tool in the
hands of politicians. It enables them to make dissent and opposition
disappear, to make it hard or impossible for opponents to communicate, and
to easily establish the identities of opposition members.
As Internet censorship requires deep packet
inspection, once such a system is deployed there are no technical
obstacles stopping those in control from modifying communications in
transit. That opens the door to an even broader set of possibilities for a
willing politician, including false flag operations, sowing discord
among the ranks of the opposition, and similar actions.
The Counter-arguments
There are three main groups of arguments that can be used to fight
Internet censorship and surveillance ideas:
- technical and technology-based;
- economy- and cost-related;
- philosophical (including those based in human rights, freedom of
speech, etc.).
At the end of this section some useful analogies are also
provided.
The good news is that, all things considered, there are very strong
anti-censorship arguments to be made in all three areas. The bad news,
however, is that all three kinds need to be translated into, or used in,
emotional arguments to sway the general public at some point.
Again, as a rule neither the general public nor the politicians and
civil servants that further the pro-censorship agenda have a decent
understanding of the issues involved. Putting the issues into easily
grasped and emotionally loaded examples or metaphors is an extremely potent
tactic.
Several counter-arguments (for instance, jeopardising the e-economy, or
pushing the blocked content into darknets, as discussed below) are
related to the Law
of Unintended Consequences: the fact that we can never
predict all
possible consequences of any action, especially intrusive
actions with regard to complex systems. Introducing censorship on
the Internet is just such a case. Calling upon this fact and this law
can itself be a good counter-argument.
It is also well worth making sure (if at all
possible in a given local political situation) that the anti-censorship
action cannot be manoeuvred into any particular political corner
(i.e. so that it is not labelled a “leftist issue”). Censorship and freedom
of speech are issues of interest to people from every side of the
political spectrum, and being able to reach out even to
groups that would not agree with you on other issues
is crucial.
Technical arguments
Due to the technical make-up of the Internet there are several strong
technical arguments to be made against Internet censorship. The main
categories these fall into are:
- it requires far-reaching infrastructural and topological changes to
the network;
- it requires high-end filtering equipment that will likely not be
able to handle the load anyway;
- it does not work: it is easy to circumvent, it does not block
everything it is supposed to, and it blocks things that are not supposed
to be blocked.
There are several ways content might be blocked/filtered on the
Internet, and several levels that censorship can operate at. Each has
its strong and weak points, none can guarantee 100% effectiveness, all
have problems with over-blocking and under-blocking, all are costly and
all require Internet surveillance.
The effectiveness of Internet censorship measures is
never complete, as there are multiple ways of circumventing them
(depending on the given measure).
Over-blocking occurs when legal content that
should not be blocked is accidentally blocked by a given
censorship measure. Depending on the particular scheme chosen this problem
may be more or less pronounced, but it is always present and
inevitable. It does not cover situations where the block list
intentionally contains certain content that should not officially be
blocked.
Similarly, under-blocking occurs when content that officially
should be blocked accidentally isn’t. This is not content
accessed via circumvention, but content that “slipped through the
fingers” of the particular censorship scheme and is accessible
without any special techniques.
Both the resources required (equipment, processing
power, bandwidth) and the cost of handling the list of blocked
content also vary between censorship schemes and depend on the
method used.
Whether or not a method employs deep packet inspection
(DPI) is indicative of both how intrusive and how
resource-intensive it is.
Below a short summary of possible blocking methods is provided, with
information on the above factors. Possible circumvention methods are
summarized at the end of this sub-section.
DNS-based blocking:
over-blocking probability: high
under-blocking probability: medium
required resources: small
list handling cost: medium
circumvention: very easy
employs DPI: no
DNS-based blocking requires ISPs (who usually run their own DNS
servers, which are the default for their clients) to de-list certain domains
(so that they are not resolvable using these DNS servers). This means
that the costs of implementing it are small.
However, as users can easily use other DNS servers simply by
configuring their network connection to do so (not a difficult task),
this method is extremely easy to circumvent.
This method has a huge potential for over-blocking: because of a single
piece of content, whole domains get blocked. A single entry published
on a website or a forum can thus bring the entire site down.
Because websites that purposefully publish to-be-blocked content change
their domain names often (sometimes within hours!), list handling costs
and the risk of under-blocking are medium.
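The over-blocking mechanics, and the triviality of circumvention, can be sketched with a toy model in Python (all hostnames, addresses and blocklist entries are hypothetical, drawn from reserved documentation ranges): a censoring resolver can only refuse whole domains, so one offending page makes every host under that domain disappear, while pointing at an unfiltered resolver restores access.

```python
# Toy model of DNS-based blocking. All names and addresses are
# hypothetical (example.net, 203.0.113.0/24 are reserved for docs).

DNS_TABLE = {
    "forum.example.net": "203.0.113.10",
    "blog.example.net": "203.0.113.11",
}

# The censor de-lists whole domains, not single pages: one offending
# forum thread takes down everything hosted under example.net.
BLOCKED_DOMAINS = {"example.net"}

def resolve(hostname, blocked=BLOCKED_DOMAINS):
    registrable = ".".join(hostname.split(".")[-2:])
    if registrable in blocked:
        return None  # behaves like NXDOMAIN: the whole domain vanishes
    return DNS_TABLE.get(hostname)

# Circumvention: simply ask a resolver that ignores the blocklist.
def resolve_uncensored(hostname):
    return resolve(hostname, blocked=set())
```

The innocent blog disappears along with the forum, which is exactly the over-blocking problem; and a one-line change of resolver undoes the block entirely.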
IP address-based blocking:
over-blocking probability: high
under-blocking probability: medium
required resources: small
list handling cost: medium
circumvention: medium
employs DPI: no
IP-based blocking requires the ISPs to either block certain IP
addresses internally or route all the outgoing connections via a
central, government-mandated censoring entity. It is only superficially
harder to circumvent, while retaining most if not all problems of
DNS-based blocking.
Both IP address-based blocking and DNS-based blocking do not employ
deep packet inspection.
Websites that purposefully publish content that is supposed to be
blocked can circumvent IP-based blocks by changing the IP (which is just
a bit more hassle than changing the domain-name); users wanting to
access blocked websites can use several methods, admittedly a bit more
complex than with DNS-based blocking.
It is possible to improve the effectiveness of an IP-based block (and
make it harder to circumvent) by blocking whole IP ranges;
this, however, dramatically raises the probability of over-blocking.
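The range-blocking trade-off can be illustrated with Python’s standard `ipaddress` module (all site names and addresses below are hypothetical, from documentation-reserved ranges): widening the block to a whole /24 to catch one target also sweeps up unrelated sites hosted in the same range.

```python
# Sketch of IP range blocking; names and addresses are hypothetical.
import ipaddress

# Blocking the whole /24 that contains one target site.
BLOCKED_RANGE = ipaddress.ip_network("203.0.113.0/24")

# A shared-hosting neighbourhood: one target, innocent bystanders.
SITES = {
    "target-site.example": "203.0.113.7",
    "innocent-shop.example": "203.0.113.42",
    "outside.example": "198.51.100.5",
}

def is_blocked(ip):
    return ipaddress.ip_address(ip) in BLOCKED_RANGE

blocked = [name for name, ip in SITES.items() if is_blocked(ip)]
```

The bystander on the same range is blocked along with the target; only the site on a different network escapes.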
URL-based blocking:
over-blocking probability: low
under-blocking probability: high
required resources: medium
list handling cost: high
circumvention: medium
employs DPI: yes
This method employs deep packet inspection.
Because this method blocks only specific, URL-identified content, not
whole websites or servers (as the DNS-based and IP-based methods do), it has
a much lower potential for accidental over-blocking. It also means a
higher potential for under-blocking, as the content can be
available on the same server under many different URLs, and changing
just a small part of the URL defeats the filter.
Users wanting to access blocked content also have a wealth of methods at
their disposal (including proxies, VPNs, TOR and darknets, all discussed below).
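A toy illustration of why exact-match URL blocking under-blocks so badly (the URLs are hypothetical): any trivial variation of the URL misses the list while still reaching the same content.

```python
# Sketch of exact-match URL blocking; URLs are hypothetical.
BLOCKED_URLS = {"http://example.org/files/bad.jpg"}

def url_blocked(url):
    return url in BLOCKED_URLS

# The listed URL is caught, but a changed-case extension or an
# equivalent path spelling serves the same file and slips through.
variants = [
    "http://example.org/files/bad.jpg",   # caught
    "http://example.org/files/bad.JPG",   # slips through
    "http://example.org/files/./bad.jpg", # slips through
]
```

Normalising URLs before matching narrows the gap somewhat, but the content can always be re-published under a fresh path faster than the list is updated.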
Dynamic blocking (keywords, image recognition,
etc.):
over-blocking probability: high
under-blocking probability: high
required resources: very high
list handling cost: low
circumvention: medium
employs DPI: yes
This method uses deep packet inspection to read the contents of data
being transmitted, and compares it with a list of keywords, or with
image samples or video (depending on the content type).
It has a very serious potential for over-blocking (consider blocking
all references to “Essex” based on the keyword “sex”; consider blocking
Wikipedia articles or biology texts related to human reproduction), and
of under-blocking (website operators can simply avoid using known
keywords, or use strange spelling, for instance: “s3x”).
Combating under-blocking by extending keyword lists only
exacerbates the over-blocking problem. Combating over-blocking with
complicated keyword rule-sets (e.g. “sex, but only if there are
white-space characters around it”) only makes the filter easier for
website operators to circumvent (e.g. with “sexuality” instead of “sexual”).
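Both failure modes are easy to reproduce with a naive substring filter – a sketch for illustration, not any real vendor’s implementation:

```python
# Naive keyword filter: blocks any text containing a listed keyword.
KEYWORDS = {"sex"}

def keyword_blocked(text):
    lowered = text.lower()
    return any(keyword in lowered for keyword in KEYWORDS)

# Over-blocking: "Essex" and biology texts contain the substring "sex".
# Under-blocking: the trivial respelling "s3x" sails through.
```

Any refinement of the rules just shifts weight between the two errors, as described above; it never eliminates both.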
List handling costs are low, but this method requires huge computing
and bandwidth resources, as each and every data-stream on the network
needs to be inspected, scanned and compared to keywords and samples. It
is especially costly for images, videos and other non-text media.
Users still can circumvent the block in several ways.
Hash-based blocking:
over-blocking probability: low
under-blocking probability: high
required resources: very high
list handling cost: high
circumvention: medium
employs DPI: yes
Hash-based blocking uses deep packet inspection to inspect the
contents of data-streams, hashes them with cryptographic
hash functions and compares the results against a known database of hashes
to be blocked. It has a low potential for over-blocking (depending on the
quality of the hash functions used), but a very high potential for
under-blocking, as a single small change to the content changes its hash,
and the content is then no longer blocked.
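This avalanche effect is easy to demonstrate with Python’s standard `hashlib` (the “offending file” below is a stand-in byte string): appending a single byte yields a completely different SHA-256 hash, so the altered copy no longer matches the blocklist.

```python
# Sketch of hash-based blocking; the file contents are stand-ins.
import hashlib

original = b"stand-in for the offending file contents"
altered = original + b"\x00"  # a single appended byte

# The censor's database holds hashes of known offending files.
BLOCKED_HASHES = {hashlib.sha256(original).hexdigest()}

def hash_blocked(data):
    return hashlib.sha256(data).hexdigest() in BLOCKED_HASHES
```

The exact copy is caught; the near-identical altered copy is not – which is why publishers of to-be-blocked content can defeat the scheme with trivial modifications.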
Resource needs here are very high, as not only all the data-streams
need to be inspected in real-time, they also need to be hashed (hash
functions are computationally costly) and the hashes compared against a
database. Costs of handling the hash-lists are also considerable.
Users can circumvent the block in several ways.
Hybrid solutions (e.g. IP-based + hash-based):
over-blocking probability: low
under-blocking probability: high
required resources: medium
list handling cost: high
circumvention: medium
employs DPI: yes
In order to compromise between high-resource, low-over-blocking
hash-based blocking and low-resource, high-over-blocking IP- or
DNS-based solutions, a hybrid solution might be proposed. Usually this
means that there is a list of IP addresses or domain names for which
hash-based blocking is enabled, so it operates on only a small part of
the traffic. This method does employ deep packet inspection.
Required resources and list handling costs are still considerable,
and under-blocking probability is high, while circumvention by users is
not any harder than for hash-based block.
There are several circumvention methods available to
users willing to access blocked content.
Custom DNS server settings can be used to easily
circumvent DNS-based blocking. This requires almost no technical
prowess and can be used by anybody. There are a number of publicly
available DNS servers that can be used for this purpose. There is no
easy way to block the use of this method without deploying censorship
methods other than pure DNS-blocking.
Proxy servers, especially anonymous ones, located outside the
area where a censorship solution is deployed, can be used quite easily to
circumvent any blocking method; users can modify their operating system
or browser settings, or install browser add-ons that make using this
circumvention method trivial. It is possible to block the proxy servers
themselves (via IP-blocking, keyword blocking, etc.); however, it is
infeasible to block them all, as they are easy to set up.
Virtual Private Networks (including “poor man’s VPNs” like SSH
tunnels) require more technical prowess and typically a (usually
commercial) VPN service (or an SSH server) outside the area where blocking
is deployed. Blocking all VPN/SSH traffic is possible, but requires deep
packet inspection and is a serious problem for the many legitimate
businesses using VPNs (and SSH) as daily tools of the trade, allowing
their employees to access corporate networks from outside physical
premises via a secured link over the Internet.
TOR, or The Onion Router, is a very effective (if a bit slow)
circumvention method. It is quite easy to set up – users can simply
download the TOR Browser Bundle and use it to access the Internet. Due to the
way it works, it is nigh-impossible to block TOR traffic (it looks
just like vanilla HTTPS traffic), to the point that it is known to allow
access to the uncensored Internet to those living in areas with the most
aggressive Internet censorship policies – namely China, North Korea and Iran.
None of the censorship solutions is able to block content on
darknets
– virtual networks accessible anonymously only via specialised software
(for instance TOR, I2P,
FreeNet), and
guaranteeing high resilience to censorship through the technical composition
of the networks themselves. Because darknets are practically
impossible to block entirely, and do not allow any content blocking
within them, they are effectively the ultimate circumvention
method.
The only downside to using darknets is their lower bandwidth.
Indeed, deploying Internet censorship pushes the to-be-blocked
content into darknets, making it ever harder for law enforcement to gather
evidence, and for researchers to gather data on the popularity of a given
type of censored content. This is further discussed in the philosophical
arguments sub-section.
While not necessarily a circumvention tool, TLS/SSL
defeats any censorship method that relies on deep packet
inspection, as the contents of data-streams are encrypted and
readable only to the client machine and the host it is communicating
with – and hence unavailable to the filtering equipment.
TLS/SSL provides end-to-end encrypted, secure communication;
initially used mainly by banking and e-commerce sites, it is now
employed by an ever-rising number of websites, including social networks.
Accessing websites with https://
instead of
http://
makes use of TLS/SSL; it is, however, also used to
provide a secure layer of communication for many other tools and
protocols (for instance, e-mail clients or some VoIP solutions).
Once a DPI-based censorship solution is deployed, affected users and
services will gradually and naturally gravitate to this simple yet very
effective solution. This means that any DPI-based censorship scheme must
handle TLS/SSL communication. This can only be done in two ways:
- block it altogether;
- perform a man-in-the-middle
(or MITM) attack on encrypted data-streams.
Blocking is not hard (TLS/SSL communication streams are quite easy to
filter out). However, as TLS/SSL is a valid, legal and oft-used way for
legitimate businesses, especially banks, to provide security for their
users, this is not a viable solution: it would cause outrage among users,
security researchers and financial companies (or, indeed, all companies
relying on TLS/SSL for their security needs).
Performing a man-in-the-middle attack means getting in the way of an
encrypted data-stream, decrypting it, checking the contents,
re-encrypting them and sending them on to their destination, preferably in
a way that neither the client nor the server notices the intrusion. With
properly created and signed certificates this is only viable if the
censorship equipment has a special digital certificate allowing for
that.
There have been instances where such certificates leaked
from compromised Certificate Authorities (CAs) and were
used by oppressive regimes for MITM attacks on TLS/SSL; also, some
filtering equipment takes advantage of such certificates – albeit
provided wilfully and legally by a CA co-operating
with the filtering equipment vendor – to perform clandestine MITM
attacks in the surveilled network.
Performing MITM on TLS/SSL is a very resource-intensive operation and
only adds costs to the already high-cost DPI-based censorship schemes –
filtering devices equipped with digital certificates allowing for
performing clandestine MITM are considerably more costly.
A different argument carries more weight here, however. Performing a
man-in-the-middle attack is even more intrusive and violating than deep
packet inspection. It is a conscious act of breaking encrypted
communication in order to get to its contents and then covering one’s
tracks in order to make the communicating parties feel safe and
unsurveiled. There are not many more hostile digital acts a government
can perform on its citizenry.
Moreover, using MITM on all connections in a given network lowers
trust level dramatically. This results in citizens not trusting their
banking, financial and e-commerce websites, and all other websites that
employ TLS/SSL, hence has a huge potential to hurt the whole
e-economy.
It also defeats the purpose of using TLS/SSL-encrypted communication
to provide security. By doing so, and by lowering users’ trust towards
TLS/SSL in general, it makes them more vulnerable and insecure on the
Internet.
Finally, clandestine MITM can be discovered by simply removing the
Certificate Authority that issued the certificate used by the filtering
equipment from the certificate store used by client software – and this
can be done by the users themselves. A side-effect is that all
connections to websites that legitimately use certificates from this
CA, as well as all connections under a MITM attack, will be marked with
an “invalid certificate” error by client software (e.g. browsers).
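Another way users can detect such a clandestine MITM is certificate pinning: comparing the fingerprint of the certificate actually presented with one recorded earlier over a trusted connection. A minimal sketch in Python (the certificate bytes below are stand-ins for real DER-encoded certificates):

```python
# Sketch of certificate pinning; the byte strings stand in for
# DER-encoded certificates obtained from a TLS handshake.
import hashlib

def fingerprint(der_cert_bytes):
    """SHA-256 fingerprint of a certificate, as a hex string."""
    return hashlib.sha256(der_cert_bytes).hexdigest()

# Recorded earlier, over a connection known not to be intercepted.
genuine_cert = b"stand-in for the site's real certificate"
PINNED = fingerprint(genuine_cert)

def looks_like_mitm(presented_cert):
    # A filtering box re-signing traffic must present its own
    # certificate, whose fingerprint cannot match the pinned one.
    return fingerprint(presented_cert) != PINNED
```

In practice browsers and some applications implement exactly this idea (comparing certificates or public keys against known-good values), which is why a MITM that is invisible to casual users is still detectable by anyone who bothers to check.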
Economic arguments
The economic arguments to a large extent stem from the technical
issues involved. The infrastructural changes needed would be costly, the
cost of the required amounts of high-end filtering equipment would be
astronomical, and there are labour costs involved, too (hiring people to
select content to be blocked and to oversee the equipment). The costs,
of course, differ from scheme to scheme and from country to country, but
they are always considerable.
It is also very important to underline the hidden costs that
ISPs (and hence – their clients) will have to cover in many
such schemes. If the ISPs are required to implement content
filtering in their networks, they will have to foot the bill. Once this
is made abundantly clear, the ISPs might become strong supporters of
the anti-censorship cause.
If the scheme entails the government paying the ISPs for the
implementation of the measures, it will be hard to get them on board;
but then simply estimating the real costs of such measures and getting
the word out that the taxpayer will pay for them is a very strong
instrument in and of itself.
Either way, requiring transparency, asking the right questions about
the costs and who gets to pay them, making cost estimates and publishing
them and the answers is very worthwhile.
It is easy to overlook the broad chilling effects of Internet
censorship schemes on the whole Internet-related economy, and the
related costs to the general economy. Uncertainty – of the law, of the
blocking rules (which cannot be clear and unambiguous, for reasons
discussed below), of whether a website (in many cases an investment,
after all) will be available to its intended public at all, and of the
ways of appealing an unjust content block – will disincentivize
businesses from investing in a web presence.
Hence, a whole industry will take a blow, and with it the whole
economy.
Philosophical arguments
This topic is rife with philosophical issues. These for the most part
boil down to the question of whether the end (i.e. blocking child
pornography, or any other excuse) justifies the means
(infrastructure overhaul, huge costs, infringement of the freedom to
communicate and the right to privacy).
Of course, the main axis of anti-censorship philosophical
argument is civil rights. The right to privacy, freedom of
speech and secrecy of correspondence are codified, for the most part, in
international treaties and form a very strong basis here.
However, to make censorship proponents (and the general
public!) understand the civil rights implications of their ideas, it is
crucial to fight the distinction between “real world”
and “virtual world”.
For every technically literate person this distinction does not exist
and it is clear these are just figures of speech. However, for most
Internet censorship proponents, this distinction feels real.
Indeed, such an impression is the enabler. It implies that current laws,
regulations, civil rights statutes, etc., do not work in the “virtual
world”. It is perceived as a tabula rasa, a completely new
domain, where rules are only to be created, and hence it is okay to
introduce solutions that in the “real world” would be considered
unacceptable.
Physical world examples are very helpful here – the classic one being
the postal service opening, reading and censoring our paper-mail as a
metaphor of Internet censorship and surveillance.
There is also the question of the “real-ness” of the “virtual world”
for Internet censorship proponents. The fact that for them the Internet
is a “virtual” space means that censorship and surveillance there do not
“really” harm anybody, do not “really” infringe upon “real” people’s
civil rights. Curiously, pro-censorship actors are inconsistent here:
when they start speaking about the harm done by the issue they propose
censorship as a solution to (e.g. pornography), they see it as “real”
harm done to “real” people.
It is well worth pointing out in such a debate that either the
harm in the “virtual world” is “real”, and hence Internet censorship is
unacceptable; or it is not “real” – in which case censorship is
unneeded.
The legality of the acts that the to-be-blocked content relates to
is also a valid question. There are two possibilities
here:
- the acts are legal themselves, while the content is not;
- the acts and the content are both illegal.
The former case is hard to argue even for the proponents of an
Internet censorship scheme. Why should certain content be blocked if
the acts it depicts or relates to are not illegal? The arguments used
here will orbit around censoring the content as a first step towards
making the acts illegal, and they should be vehemently
opposed.
In the latter case (that is, for example, the case of child
pornography) one could argue that what is of crucial importance is
stopping the acts themselves from happening (in this case, the sexual
abuse of children) – and blocking the content is in no way conducive to
that aim.
It does not directly help stop the acts; it does not help find
the culprits; it even makes it harder for them to get caught – often
information contained in the content (GPS location encoded in the
pictures; ambient sound in videos) or related to the means of
distribution (the owner of the server's domain name; IP logs on the
hosting server) is crucial to establishing the identity of the culprit,
and blocking the content removes the possibility of using such data.
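To make the forensic point concrete: digital photos commonly carry GPS coordinates in their EXIF metadata, stored as degree/minute/second values, and converting them to a usable map position is trivial. The sketch below is a toy illustration – the tag names mirror the EXIF convention, but the sample values and the dictionary are made up:

```python
# Toy sketch: converting EXIF-style GPS data (degrees, minutes, seconds)
# into decimal coordinates. Investigators can use such embedded metadata
# to locate where a photo was taken -- exactly the kind of evidence that
# becomes unreachable once content is merely blocked instead of seized.

def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert a degree/minute/second triple plus a hemisphere
    reference ('N'/'S'/'E'/'W') into a signed decimal coordinate."""
    decimal = degrees + minutes / 60 + seconds / 3600
    return -decimal if ref in ("S", "W") else decimal

# Hypothetical GPS tags as they might be read out of a photo's EXIF data:
exif_gps = {
    "GPSLatitude": (52, 13, 47.0), "GPSLatitudeRef": "N",
    "GPSLongitude": (21, 0, 42.0), "GPSLongitudeRef": "E",
}

lat = dms_to_decimal(*exif_gps["GPSLatitude"], exif_gps["GPSLatitudeRef"])
lon = dms_to_decimal(*exif_gps["GPSLongitude"], exif_gps["GPSLongitudeRef"])
print(round(lat, 4), round(lon, 4))  # prints: 52.2297 21.0117
```

In a real investigation this data would be read from the seized file with an EXIF library; the point is only that the information exists in the content itself and vanishes from reach when the content is blocked rather than taken down and examined.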
Blocking such content is sweeping the problem under the
rug – also in the sense that the problem becomes less visible
but in no way less real. Policymakers and the general public can become
convinced that the problem is “solved” even though it still exists under
the radar (i.e. children are still sexually abused even though it is
harder to find content related to that on the Internet, due to
blocking). This results in less drive for finding solutions to the real
problem, and less data for researchers and law enforcement.
Another argument is related to the lists of content to be
blocked. There are three issues here:
- how secure are the lists?
- what are the rules of blocking the content?
- who creates, revises and controls them?
If the lists contain addresses, URLs or identifying information on
“evil” content, and as there are no blocking means that are thoroughly
effective (there are ways around every method), obviously these lists
themselves will be in high demand among those interested in such
content. Simply put, they will be complete wish-lists for them. And as
such they are bound to leak.
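The “ways around every method” point is easy to make concrete for DNS-based blocking, the most common scheme: a censoring resolver can refuse to answer for listed domains, but nothing stops a user from asking a different resolver. A toy model follows – the domain names, addresses and resolver behaviour are all made up for illustration:

```python
# Toy model of DNS-based blocking: the ISP's resolver consults a secret
# blocklist, but a user bypasses it by querying any other resolver.

BLOCKLIST = {"blocked.example"}          # the secret list (bound to leak)

PUBLIC_RECORDS = {                       # what the global DNS actually holds
    "blocked.example": "203.0.113.7",
    "innocent.example": "198.51.100.4",
}

def isp_resolver(domain):
    """Censoring resolver: pretends listed domains do not exist."""
    if domain in BLOCKLIST:
        return None                      # NXDOMAIN-style refusal
    return PUBLIC_RECORDS.get(domain)

def third_party_resolver(domain):
    """Any resolver outside the censor's control answers normally."""
    return PUBLIC_RECORDS.get(domain)

print(isp_resolver("blocked.example"))          # prints: None
print(third_party_resolver("blocked.example"))  # prints: 203.0.113.7
```

Switching resolvers is a one-line change in a network configuration, which is why such blocking inconveniences ordinary users while barely slowing down those actually seeking the listed content.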
There is a good argument to be made that the very creation of such
lists (which are necessary for censorship schemes) is in and of itself a
reason not to introduce such measures.
Because the lists themselves cannot be made public (for all the
reasons mentioned above), there is no public oversight of their
contents – and hence there is serious concern about over-blocking, or
about blocking content that in no way fits the intended description of
content to be blocked. This is a slippery slope: once
such a system is introduced, more and more types of content will get
blocked.
As far as rules are concerned, often it is also hard to precisely
define the content that is supposed to be blocked. In the context of
child pornography, for example, establishing the age of the person on
the picture is often a challenge, even for experts; should pictures of
young-looking adults also be blocked? And is it pornography if it is not
sexually explicit – should any
picture of a young naked person be blocked? What about sexually
explicit graphics/drawings depicting apparently under-age persons,
should they get blocked, too? If so, what about stories? Then we land in
a situation where genuine works of art (for example, Vladimir Nabokov’s
Lolita)
should apparently be blocked.
And if even viewing the blocked content is illegal, under what
legal framework are the list creators able to review it? They would
have to view it in order to review it, so they would be breaking the
law. If the law were to permit them to do it, why and on what grounds?
If it’s bad for everybody, it is certainly also bad for them…
The final list-related issue here can be shortened to the well-known
quip “who
watches the watchers”. People who control the blocking lists have
immense power and immense responsibility. As there is no oversight,
there is ample room for mischief and manipulation. Especially
when the most vocal proponents of some of the Internet censorship
schemes are not exactly the most consistent
themselves.
The lists’ secrecy gives rise to yet another issue – the
lack of due process. If the rules of blocking are not clear and
unambiguous (they can’t be), and hence there is a serious concern that
content will be blocked that should not have been (there is), how
exactly can the operator of such incorrectly blocked content appeal
the block if the lists are secret? How do
they even learn about the blocking, to distinguish it from
a technical error on the network?
This can
cause serious financial losses, and there should be a way for content
operators to be informed that their content is being blocked, why it is
blocked, and what their options for challenging the block are.
However, due to the secrecy of the process and the lists, this
information cannot be provided, not to mention the additional cost of
informing every single entity whose content is blocked.
Also, a surprisingly large number of pro-censorship actors that
do not have ulterior motives treat any civil-rights-based criticism of
their position personally, as if the opponents were suggesting
that they do indeed have ulterior motives and are going to use the
censorship and surveillance infrastructure for their own political
gains.
This is something that eluded me for a long time. Only after a
meeting at which I used the “next guy” argument did a certain
pro-censorship actor (a high-level representative of the Ministry of
Justice) understand that we were not attacking him personally, and that
there are indeed valid civil rights issues at hand.
The “next guy” argument is a very nifty way of defusing an
emotionally loaded situation like that. It basically states that nobody
assumes that the person (politician, civil servant, etc.) we are
currently discussing Internet censorship with has ulterior motives and
will abuse the system once introduced – however, nobody knows who “the
next guy”, the next person to hold that office or position, will be. And
it is against their potential abuse that we are protesting today.
A special case of government-mandated opt-out Internet
censorship is also worth considering. Such schemes have been
proposed around the world (most notably in
the UK), and are constructed to answer some of the civil
rights issues involved with blocking content that is legal but unsavoury
(porn, for instance).
While the proponents of such measures claim that they completely solve
these issues, this is not the case: opt-out means that individuals
wishing to access the unsavoury content would have to divulge their data
to their ISPs or content-blocking operators, and hence be formally
associated with that content. This is not something many would
be willing to do, even though they would indeed want to access the
content.
A successful line of arguing against opt-out is to propose a similar,
but opt-in solution. This would give a block on unsavoury
content to those who want it, but would not create the situation
described above. However, instituting such a block on a central level
could be a stepping stone for mandating a central censorship solution
(as the costs and technical difficulties would be similar if not the
same), and hence opt-out blocking should be opposed entirely, with
opt-in as a last-resort proposition.
Emotional arguments
The basic strategy is to call things by their real names – removal or
blocking of content without a court order is
censorship, and due to the technical make-up of the
Internet it is only possible with complete
surveillance. There is no way around these terms, and
censorship opponents can and should use them widely when speaking about
such ideas. Again, using paper-mail censorship and surveillance
metaphors (private mail being opened, read and censored at the post
office) is very important to convey the seriousness of the issue.
Based on the cost of such solutions an emotional argument can be made
that such money could be much better spent, for example on hospitals,
road safety programmes, orphanages. There is no shortage of problems
that need solving and the money should go there, instead of financing
morally and legally questionable, technologically unfeasible censorship
ideas.
It can also be argued that Internet censorship introduces collective
punishment – all Internet users and Internet businesses are
being punished for the actions of a small group of criminals. The money
and resources used for Internet censorship should instead be used to
punish the guilty, not the general public.
Attempting to find organisations that try to solve the problem that
the Internet censorship scheme is officially trying to solve
(i.e. sexual abuse of children, creation of child pornography), but are
against the censorship as a method, is also viable and advised. It is
quite possible that such an organisation exists (for instance, in Poland
the KidProtect.pl foundation,
fighting sexual child abuse, was very vocally opposed to Internet
censorship, for many reasons stated in this text), and having them as an
ally is extremely effective.
If everything else fails, and as a last resort, an ad
personam argument can be made that a given proponent of Internet
censorship measures has a hidden agenda and wants to introduce the
measures for their own personal aims. Using this argument is
inadvisable, however, especially early in the debate, as it ensures that
such a person (and their community) will almost certainly become hostile
and an even stronger proponent of censorship measures than
before.
Useful analogies
These analogies are very useful in conveying the technical set-up of
the Internet and the technical issues around censoring it.
IP address:
a physical street address, it can lead to several different businesses
and individuals (i.e. domains).
Domain name:
a name (either business or personal) that identifies a particular
business or person under a given physical street address (i.e. IP
address).
Domain name resolution:
a process of “resolving” a personal or business name to a physical
street address (i.e. IP address), so that a package (i.e. data) can be
delivered to them.
Deep packet inspection:
opening physical mail, breaking the envelope and reading the contents in
order to be able to decide whether or not to censor it (as opposed to
just reading the addressee and the sender data available on the
envelope).
Proxy:
asking somebody else to send the package (i.e. data) for you and forward
you the answer of the addressee.
HTTPS:
sending encrypted snail-mail.
Man-in-the-Middle:
opening encrypted snail-mail, decrypting, reading, re-encrypting and
re-sending it to the addressee. Usually an attempt is made to do this in
a clandestine way, so that neither the sender nor the addressee is aware
of it.
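The street-address analogy also makes it easy to show why blocking by IP address over-blocks: with shared hosting, one “street address” houses many unrelated tenants. A minimal sketch, with all domain names and addresses made up for illustration:

```python
# Toy sketch of IP-based over-blocking: on shared hosting, many
# unrelated domains live at one "street address" (IP), so blackholing
# that address takes all of them down.

HOSTING = {                          # domain -> IP (hypothetical)
    "target.example":  "192.0.2.10",
    "bakery.example":  "192.0.2.10",   # same shared server
    "library.example": "192.0.2.10",   # same shared server
    "other.example":   "198.51.100.9",
}

def collateral_damage(blocked_domain):
    """Every domain taken down when blocked_domain's IP is blackholed."""
    ip = HOSTING[blocked_domain]
    return sorted(d for d in HOSTING if HOSTING[d] == ip)

print(collateral_damage("target.example"))
# prints: ['bakery.example', 'library.example', 'target.example']
```

In the analogy: ordering the post office to refuse all deliveries to one street address also cuts off every other business and resident at that address.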
Useful quotes
A very good collection of quotes useful in the context of anti-censorship debates
is available on WikiQuote; it is also worth looking through civil rights
and free
speech related quotes. Some highlights are below.
They who can give up essential liberty to obtain a little temporary
safety, deserve neither liberty nor safety.
– Benjamin Franklin
I disapprove of what you say, but I will defend to the death your
right to say it.
– Evelyn Beatrice Hall
If we don’t believe in freedom of expression for people we despise,
we don’t believe in it at all.
– Noam Chomsky
The Net interprets censorship as damage and routes around it.
– John Gilmore