
Songs on the Security of Networks
a blog by Michał "rysiek" Woźniak

A link cannot be illegal

This is an ancient post, published more than 4 years ago.
As such, it might not anymore reflect the views of the author or the state of the world. It is provided as historical record.

For some time now there have been ideas – even in Poland – to criminalize the mere act of linking to copyright-infringing materials on the Internet. This idea was also floated recently at the 5th Copyright Forum by Polish copyright collectives like ZAiKS, and by a private company.

It’s not even the level of absurdity that gets me with ideas like that – it’s the fact that some people are apparently able to maintain a straight face while proposing them. It’s hard to be sure whether this is a result of ignorance of some very basic concepts (not even of electronic communications – but of human communication itself), or a conscious attempt at solving “the Internet problem”.

Anyway, let’s go from the top.

Internet hyperlinks fulfil the same role as bibliographic information: they merely indicate where a given referenced content resides, without transmitting or copying any of it in and of themselves. A link itself does not and cannot infringe upon anybody’s rights – it is just information about where to look for a given piece of content.

That means a link is also useful information for those trying to combat copyright-infringing materials. Thanks to links it is easier to find content published without proper authorization and have it removed from the Internet – in accordance with due process of law, of course. Why would organisations that claim to work in authors’ interests try to deny themselves such a useful tool?..

Crucially, a person linking to some content has no way of ascertaining the legal status of that content. Even lawyers often have problems with that, due to copyright law’s Byzantine level of complication. Why would it be okay, then, to require such knowledge and insight from any person linking any content on the Internet?.. By the way, I wonder if every single link on ZAiKS website is “legal”…

It is also interesting to consider what the sanctions for such a heinous crime should be. Should we expect a police raid and extradition for the simple act of linking to some content on our website?

And what about links to websites with such links? If a website containing “illegal links” would be itself illegal, linking to it would also be illegal, right? And how about links to websites that link to such websites? And so on… We can either stop it at the only sane place – the very beginning – by saying “links can’t be illegal”, or it’s turtles all the way down.

Regardless of possible sanctions, making linking potentially illegal would make Internet users afraid of linking. The very core functionality of the World Wide Web would suddenly become a minefield. The effects would not be very different from Iran’s “halal Internet” – with the small difference that purposeful censorship would be replaced with self-censorship. Maybe that’s the real aim of this proposal?

Proponents of this “solution” claim it is necessary due to the ineffectiveness of notice-and-take-down procedures for removing infringing materials. People engaged in intentional, commercial copyright infringement can, after all, change addresses and domain names very easily and very fast. They can register their website or company abroad, making enforcement harder.

For some reason, proponents of making linking illegal seem to think that operators of warez sites (the direct target of this proposal) can’t do the same thing with link farms as they do with their file hosting. Trying to enforce removal of “illegal” links will face the very same problems that enforcement of notice-and-take-down procedures faces, and will be similarly ineffective.

It will, however, have one tangible effect: people registering and hosting perfectly legal websites in Poland will be afraid to link to anything – how can they be sure it’s not illegal?..

That’s still not everything, though! Obviously, searching for “infringing links” and sending out take down notices would have to be automated, just as infringing materials are often searched for and handled automatically. Such algorithms make mistakes, leading to removal of perfectly legal content.

For big players that might not seem like a problem, but small music producers and individual artists – especially those publishing their works under libre licenses – will be severely disadvantaged, as they have incomparably fewer resources at their disposal for demanding the restoration of incorrectly removed or blocked content. Considering that for such artists freely available content might be an important element of their business model – for example, helping build their fan base – such removal or blocking can cause real financial losses. Would copyright collectives fight for the rights of libre artists in such a situation?..

The same problem that is so clearly visible with notice-and-take-down at the source would only be amplified if some links were “illegal”. If (as with notice-and-take-down) there are no sanctions for unfounded removal of a non-infringing link, service providers, hosting operators, etc., will prefer to be “rather safe than sorry” and remove everything that gets a notice. Links to libre-licensed content will get removed, and the artists publishing that content will inevitably suffer concrete financial and non-financial losses.

I do think, sometimes, that maybe such absurd ideas should be supported, as their introduction would help push people out of walled gardens and into decentralised, encrypted networks and tools like RetroShare, Tor or Freenet.

Tools that would be much more effective in combating censorship and surveillance of their users.


The Court of Justice of the European Union has just ruled on a case with linking as the central issue – the verdict: linking is not illegal.

Now, this does not cover the exact case discussed here (linking to a material that is published illegally), but is nonetheless an extremely important outbreak of common sense.

Copyright reform debate lives on


Since Polish and European citizens voiced their opinions on the need for copyright reform so clearly two years ago, there has been a feeling of anticipation in the air – what’s next? Brussels-based politicians hint (or outright state publicly) that everybody is waiting for some Polish move.

I can’t say I blame them. The widespread anti-ACTA protests started in Warsaw; the Polish Prime Minister was the first to admit ACTA was a mistake; politicians from Poland were also the first to grasp that (to use the not-exactly-fitting language of Polish MP Gałażewski) ACTA was “passé”, and the first to start asking the right questions.

This past September something finally happened. At CopyCamp, Polish MEP Paweł Zalewski shared his ideas for copyright reform in the EU, and about two months later, together with Amelia Andersdotter and Marietje Schaake, announced them officially in the European Parliament.

A month later the European Commission opened consultations on the InfoSoc directive reform – a process we should all take part in! Time is of the essence, as the deadline is the 5th of February, but there are tools that help get involved. Use them!

Soon afterwards the Polish Ministry of Culture opened local consultations in order to create an official Polish stance in the InfoSoc reform consultations.

The Copyright Forum is one of the after-effects of the ACTA debate (the Ministry was responsible for the treaty within the Polish government), and of other situations where the Ministry’s decisions and processes seemed, so to speak, less than transparent – for which it has been heavily criticised.

The Ministry seems to be learning from its mistakes and does not wish to ever be called “non-transparent” again; hence the Copyright Forum was born. Long story short, any and all organisations interested in copyright and its reform now have a chance to voice their opinions in an open debate facilitated by the Ministry. Finally!

New consultations, old misunderstandings

The 5th Copyright Forum was about:

As far as a sane stance on these issues is concerned, please see the Open Education Coalition’s response in this consultation process. I want to focus here on something else.

This was not the first (nor, hopefully, the last) copyright consultation meeting I have taken part in. Even though we (“us, opennists”) have been explaining our position for years, we still encounter a basic lack of understanding (as I am not going to assume malicious, conscious mangling) of what we are trying to say. It was clearly present in statements made at this Forum, too. Let’s have a look at the most “interesting” of the ideas and misrepresentations, shall we?

“Linking to illegal content should be illegal itself”

The idea here is that the mere act of linking, on the Internet, to content that in some way infringes on somebody’s copyright should itself be illegal, because notice-and-take-down procedures are slow, complicated and ineffective.

Before we dive into how bad an idea this is, let’s stop for a moment at the “illegal content” part. That’s another of those language constructs used artificially to slant the debate before it even starts. “Illegal content” would be content that is illegal to share, reproduce, etc., under any circumstances, regardless of whether or not you have a license to it. If Polish copyright collectives claim that the “content” created by the artists they (supposedly) represent is “illegal”, maybe they should call the Police?..

If we’re talking about infringement, we should call it infringement, nothing more, nothing less.

An idea that was also present on the Forum and is closely related to “making linking illegal” strategy is “making search engines remove links to infringing content”. Both ideas are completely absurd, for a number of reasons too long to be put in full here; here’s the skinny:

  • links are purely informational, just as bibliographic notes; penalisation of linking is as absurd as penalisation of bibliographic notes;
  • removing links to infringing content is sweeping the problem under the rug, instead of solving it at the source (e.g. by removing the infringing material);
  • a person that links to a given content has no practical way of ascertaining the legality of said content, not to mention that this legality can change over time;
  • this whole idea is claimed to be “necessary” in the light of “ineffectiveness” of notice and take down; well, if notice and take down is ineffective, what makes the proponents of such a measure think that they will have any more luck with removing links than with removing content itself?
  • regardless of its ineffectiveness, it will cause problems for works published under libre licenses, including free software.

More in-depth arguments are also available.

“Users of culture”

The division between “users” (or “consumers”) of culture and its “creators” is as old as it is outdated. It made, perhaps, some sense in the times of mass media, with their clear difference between broadcasters and audience. Today all you need to become an artist is a laptop, and all you need to reach your audience is the Internet.

Read-only culture became read-write again, finally. There is no meaningful line of division between “users” and “creators”. Everybody can be one or the other, as they choose.

Users’ responsibility

According to Polish copyright law today, if I have access to a given work, I can download it and use it (including sharing it non-commercially with my family and friends). I do not have to check whether or not that content has been shared with me legally. That is the sharer’s problem.

Of course that’s hard to swallow for the copyright collectives and their ilk. Hence the idea to change it: to make the user responsible for downloading and using content that might have been illegally published or shared. A proposal burdened with some of the same problems as the “making linking illegal” one above. Namely, how can a user check that, if even courts tend to have problems with it?

Should illegally-shared works be marked in a certain way? If so, whose responsibility would that be? The artists’? The sharers’ themselves? If the latter, how can one be sure that the content gets marked truthfully? If the former, artists would have to gain control over every single shared copy… while somebody who wants to share without proper authorisation will do so anyway.

Or maybe the “users of culture” (being creators themselves!) should limit themselves to just a few “kosher” channels? If so, which ones should these be? And who decides, on what grounds? Can I start such a channel myself, for example in the form of a blog, videolog, podcast? If so, how can my audience be sure that it is “legal” itself?

Finally, how should users of infringing content be punished? Are we to assume that copyright collectives are proposing the American model here?

“Everything that potentially allows anybody to make money – is commercial”

That’s an attempt at defining the hard-to-draw line between “commercial” and “non-commercial” use, done in a way that makes sure that any use of a cultural work on the Internet is in fact commercial. After all, even if I were to completely non-commercially send a private e-mail with a picture attached to a family member, my ISP, their ISP and probably at least one ISP in between make money in a quite real way.

Does it make sense to use such a broad definition of “commercial use”? After all, it is legal for me in Poland to watch some movies together with my friends. But in such a situation there are several third parties that can profit from it – a taxi driver, a public transport operator, some local grocery stores where we buy the supplies for the evening… Does that make watching movies with my friends “commercial”? Or, for that matter, if I watch a movie by myself, the electrical company is going to profit a bit. Is that commercial too, then?

And by the way, how about the copyright collectives themselves – after all they employ people, who profit from their activities…

Non-commercial vs. libre-licensed

That one’s a classic, with authors publishing their works under libre licenses being called “creators not planning to make money on their works”.

How many times do we have to repeat, ad nauseam, that a libre-licensed work does not have to be pro-bono work? There are many ways to make money on digital works – sponsoring, crowdfunding, work-for-hire and adverts are just a few of the most obvious.

There are big corporations and small firms publishing some of their products under libre licenses – Intel, Google, Red Hat, to name just a few of the best known. Stating contemptuously that publishing something under a libre license means the author has no intention of profiting from it is either a sign of ignorance, or (much worse) a willful attempt at marginalizing such creators.

“Some organizations are only interested in gratis access for users”

That’s also something we hear quite often – usually from the same person. Mr Dominik Skoczek, once the head of the Intellectual Property and Media Department at the Ministry of Culture (he was the person responsible for the ACTA topic within the Ministry), today representing the Association of Polish Movie Producers (think: MPAA without the clout), has had an abundance of occasions to hear from us that libre licensing is about something other than cash.

And that it’s not about “users”, as everybody can be a creator.

I wouldn’t go as far as to assume malice on part of Mr Skoczek; on the other hand if I assume that after years of our patient explanations of our views he still cannot grasp the not-so-complicated ideas behind them, Mr Skoczek could understandably feel offended…

Instead of pondering the source of such a lack of understanding, then, I shall simply explain once more that libre licenses, free culture, free software, etc., are not about gratis access, but about the possibility of creating and remixing. Creativity and culture never exist in a void; all creative work is derivative, and making works inaccessible for remixing for up to 70 years after the author’s death is a barbaric attack on culture itself.

Authors of libre-licensed works and organisations demanding libre-licensing of works created with public funds demand not “access” for all, but allowing creativity for all.

A Polish lawyer (and author of a well-known blog), Piotr VaGla Waglowski, asked one of the Polish copyright collectives (the one supposedly representing the rights of authors like him) about the possibility of receiving what could most aptly be described as “his money”. The answer he received was too long to be quoted in full here, but the important part is this:

Due to a very large number of entities entitled to receive these funds there is a danger of considerable atomization of remunerations to which they are entitled. This has direct bearing also on the possibility of such remuneration, namely on acceptable repartitioning model of the received funds. (…) In summary, we have no way of meeting your expectations at this time.

I don’t think I can find a better comment than VaGla himself:

By the way, can not remunerating authors by copyright collectives itself be considered copyright infringement?

Creators’ inalienable right to financial gratification

That’s another very dangerous idea being passed under the guise of “working in artists’ interests” (by none other than copyright collectives, of course). The proposal: as authors are often in a much worse negotiating position when discussing remuneration terms with their publisher or producer, it should be made impossible for authors to waive or transfer their right to financial gratification. Thus, every time a work is “exploited”, the artists themselves would also have to receive money.

These, of course, would go through copyright collectives. But that’s okay, as “the only chance for money reaching the artist are copyright collectives”, right?

Regardless of who should receive such royalties, the very fact of their introduction would render libre licenses ineffective – each use, even of a libre-licensed work, would mean payable royalties. That means Wikipedia would be (financially) impossible, along with open educational resources and the rest of the libre side of creativity.

Summing it all up

I guess the best summary of the Forum and many of the ideas expressed therein is a quote by one of the attendees (can’t wait for the release of the video recording of the forum to link directly):

Representatives of some of the NGOs here think this is all about some ideals – it’s not, it’s about hard cash!

What could I possibly add to that! We’re all waiting for the official stance of the Ministry (and the Polish government) in the European Commission consultation process – but we need not wait without action on our own part!

Neat HaCSS, or let's de-JS the Web a bit


I like playing with technology, and I am particularly fond of playing with CSS/HTML. Not a big fan of JavaScript, though.

Don’t get me wrong, I understand the utility of JavaScript, and the power of jQuery, I really do. However, I believe both are overused and abused – much, if not most, of the functionality of “rich Internet applications” (also known simply as “websites”), including transitions, animations and whatnot, can be implemented in HTML5 and CSS3, with JS used only for AJAX requests. The advantages: faster processing, smoother animations, more readable code, better separation of logic and presentation.

Lately I dove into two side projects that allowed me to dust off my CSS-fu while making a point about how JS is becoming less and less needed as far as the presentation layer is concerned.

Sailing in the Cloud

The first little project is Sailors.MD, a simple website for my sailors-and-medical-professionals friends.

The second one is implementing something ownCloud (that great little project letting everyone self-host their own “cloud”) once had but which has sadly since been removed: sharing calendars and events publicly via a link with a token, or “link-sharing”. I know of at least one potential deployment where that was a show-stopper. So after complaining for a few months I decided to implement it myself.


Sailors.MD was a no-brainer – a completely new project, a simple static website (that will grow, some day, but not just yet), no magical mumbo-jumbo, it just needs to look nice and be functional. No JS needed there, period!

Now, ownCloud, on the other hand… JavaScript. JavaScript everywhere! Hacking together a nice CSS/HTML-based interface for internal and link-sharing of calendars and events, with JS providing just the bare minimum of AJAX-ness, stemmed from the frustration of debugging the JavaScript-based interface.

The Challenge

I wanted the interface to retain full functionality – including animations, showing/hiding parts of the interface, etc – even without JavaScript enabled. There were several things that needed implementing in pure CSS/HTML, which seemed hard without JS:

  • sections of the interface collapsing/extending upon click…
  • …that are both directly linkable (i.e. via :target links) and persistent, allowing the user to interact with their contents and sub-sections;
  • showing some controls only when JS is enabled (a reverse-<noscript>, if you will);
  • elegant tooltips;
  • making an element’s appearance depend on the state of elements that follow it (remember, there is no CSS selector/operator for being a parent of, and the general sibling operator ~ is one-way only).


Let’s hack!

First of all, some caveats: the below has not been tested on anything other than up-to-date versions of Firefox, Chromium and Rekonq (although IE 10+ should work). If you want to test them in anything else, be my guest, and I would love some feedback.

Secondly, all code in examples linked below is MIT-licensed (info in the examples themselves also).
Why at all? Because it’s good practice to put some license on your (even simple) code so that the rules of the road are clear for other developers.
Why MIT? Well, I’m a staunch supporter of copyleft licenses like the AGPL, but this is just too small a thing, and builds on too many great ideas of other people, for me to feel comfortable slapping the AGPL on it.

Okay, enough of this, on with the code!

The Targetable Displayable

Making sections of a website show/collapse upon click is not that hard once you wrap your head around one beautifully simple hack: CSS Checkbox Tabs. But I wanted more:

  • being able to put the menu in a completely different place in the code than the sections (e.g. a menu on top of the site, with the sections hidden within the bowels);
  • section targeting via :target, so that they are directly linkable by users.

The first one is rather easy once you get that you can put a <label> wherever you like in the document, regardless of where the relevant checkbox is, as long as you set the label’s for attribute to the checkbox’s id.
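As a minimal sketch (the names are mine, not from the actual code, and the <style> is shown inline only for brevity):

```html
<!-- the label can sit anywhere, e.g. in a top menu -->
<nav>
  <label for="show-details">Details</label>
</nav>

<!-- the checkbox must precede the section, so the ~ selector can reach it -->
<input type="checkbox" id="show-details">
<section class="details">Details go here</section>

<style>
  /* hide the checkbox itself; clicking the label toggles it anyway */
  #show-details { display: none; }
  /* the section stays collapsed until the checkbox is checked */
  #show-details:not(:checked) ~ .details { display: none; }
</style>
```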

Enabling :target was trickier, though: #target links do not set a checkbox’s :checked state. Also, simply setting the id attribute on the section we want to be #target-linkable will not work: when the user clicks a checkbox label to choose a different section, the targeted one will still be expanded. Checking a checkbox does not change the :target.

We could use:

:checked ~ :target {
   /* rules making the :target collapsed */
}

…but that would not work for any situation where the :target element is before the :checked checkbox (the sibling operator ~ is one-way, remember?).

Hey, why not just use :target and forget about checkboxes? Well, then we wouldn’t be able to have sub-sections, as there would be no way of saying “keep this section open even if the user chooses something else (a subsection)”, and there is no “parent of” operator (so there is no way of saying “keep it open if any of its children is a :target”). So:

  • :checked on checkboxes/radioboxes keeps state, and that’s a biggie;
  • :target is directly linkable;
  • there is no way to connect the two.

Or is there? If the :target elements are always before the checkboxes, and these are always directly in front of the element that contains our expandable/collapsible sections, we might be able to get what we want. As long as all :target-able elements come before all relevant checkboxes. Tada!

What happens is:

  • from the get-go, no navigational checkbox/radiobox is checked;
  • if there is a #target, the CSS rules for the right section container (based on the :target-ed hidden element) kick in;
  • if the user now selects any section, it is handled via the checkboxes/radioboxes, and because the CSS :target rules for specific elements require a :not(:checked) sibling in the chain, those :target rules stop working.
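A minimal sketch of that chain (all names are mine; the real ownCloud code differs):

```html
<!-- hidden :target anchors come first… -->
<span id="one"></span>
<span id="two"></span>
<!-- …then the stateful radio controls… -->
<input type="radio" name="nav" id="pick-one">
<input type="radio" name="nav" id="pick-two">
<!-- …then the container with the menu labels and the sections -->
<div class="container">
  <label for="pick-one">One</label>
  <label for="pick-two">Two</label>
  <section class="one">First section</section>
  <section class="two">Second section</section>
</div>

<style>
  span[id], input[name="nav"], section { display: none; }

  /* stateful selection via the radio controls */
  #pick-one:checked ~ .container .one,
  #pick-two:checked ~ .container .two { display: block; }

  /* direct linking via :target – works only while every radio
     in the chain is still :not(:checked) */
  #one:target ~ #pick-one:not(:checked) + #pick-two:not(:checked) ~ .container .one,
  #two:target ~ #pick-one:not(:checked) + #pick-two:not(:checked) ~ .container .two {
    display: block;
  }
</style>
```

As soon as the user checks any radio, the :not(:checked) chain no longer matches, the :target rules switch off, and the :checked rules take over.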



The Reverse-<noscript>

Displaying elements only when JavaScript is enabled seems a simple thing, right? It gets tricky, however, when we’re not allowed to use JS to display them. Now, we all know the <noscript> tag, but here we need to do something exactly opposite – and <script> won’t do, as we’re not going to use JS for that.

Of course we can always use a style-within-noscript:

  <noscript>
    <style>
      #some-element { display: none; }
    </style>
  </noscript>
…but that’s inelegant. First of all, we’re not supposed to have <style> tags within <body>, just as we’re not supposed to have <noscript> within <head>. Secondly, we might want to have the element gone from the element tree when JS is disabled to pull off other hacks – like a Displayable above, for instance.

Turns out, we can put HTML comments inside <noscript>. Not just that: we can put the start of a comment (<!--) in a different <noscript> element than the end (-->). And apparently these will be interpreted as start/end HTML comment tags only if JS is disabled (they are within <noscript> elements, after all!). That means that this:
  <noscript><!--</noscript>
  <div>I am only here when JS is enabled!</div>
  <noscript>--></noscript>


…works like a reverse-<noscript> element. The <div> will get shown and included in the document tree only if JS is enabled. Here, check for yourselves by enabling and disabling JS in your browser and visiting the test case.

CSS Tooltips (aka TooltipCSS)

There is a myriad of tooltip JS/jQuery libraries; I’m not even going to link to them. Creating an HTML/CSS tooltip for a given element is also trivially easy (create a child element, use :hover on the parent to show it). Creating a tooltip on any element matching a selector, without any additional HTML (no additional child elements, etc.) – now that is a challenge!

What we need is a way to squeeze a new style-able element or two from any element in the DOM tree, without adding HTML, in pure CSS. Turns out that we have two: ::before and ::after. Yes, they are style-able, yes, we can put whatever we want in them. Unfortunately no, they can’t have child nodes.

But, can we conveniently pass tooltip text to them without making a separate CSS rule for each? Yes, we can.

The content property conveniently accepts attr() form. So we can have a single style stanza saying for example:

 a[title]::before {
   content: attr(title);
 }
…and bam!, all <a> elements with the title attribute set will have ::before pseudo-element containing the value of title attribute.

We can style ::before just like any other element; by making the parent element (the one being “tooltipped”, <a> in the example above) relatively positioned, and our ::before absolutely positioned, we also get pretty good control over how and where the tooltip appears.

Because pseudo-elements can’t have child elements (any HTML inside will get rendered as text), it seems we can’t have the small notch that turns a simple bubble into a tooltip. Ah, but we have ::after too, right! We can use it as our notch. If only there were a way to make a pure-CSS triangle out of an element…

Add a bit of CSS transitions to the mix (I’m using rgba() background/border colours instead of opacity, as opacity for some reason makes elements move just a tiny, annoying bit), remember to hide the elements that constitute our new shiny tooltip so that they won’t obstruct other elements (hint: use visibility:hidden instead of display:none, otherwise transitions won’t work) – et voilà!
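Put together, a minimal sketch of the technique (the selectors, sizes and colours here are mine) could look like this:

```html
<a href="#" title="I am the tooltip text">hover me</a>

<style>
  a[title] { position: relative; }

  /* the bubble, fed its text via content: attr(title) */
  a[title]::before {
    content: attr(title);
    position: absolute;
    bottom: 130%;
    left: 0;
    padding: 0.3em 0.6em;
    white-space: nowrap;
    color: #fff;
    background: rgba(0, 0, 0, 0);
    visibility: hidden;            /* not display: none, or transitions break */
    transition: background 0.3s;
  }

  /* the notch: a border-only pure-CSS triangle */
  a[title]::after {
    content: "";
    position: absolute;
    bottom: 110%;
    left: 1em;
    border: 0.4em solid transparent;
    border-top-color: rgba(0, 0, 0, 0);
    visibility: hidden;
    transition: border-top-color 0.3s;
  }

  a[title]:hover::before {
    visibility: visible;
    background: rgba(0, 0, 0, 0.8);
  }
  a[title]:hover::after {
    visibility: visible;
    border-top-color: rgba(0, 0, 0, 0.8);
  }
</style>
```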

A CSS Tooltip appears! It’s super-effective!

Depending on the state of the elements that follow

Well, that is simply impossible, as the ~ is one-way, and there is no parent of element. No, seriously.

But what we can do is play with the order of elements. Making all the elements whose state we want to depend on (checkboxes, radio controls) precede the element we want to style is the only way to go. The next logical step is to make that element be displayed in front of them.

That is no rocket surgery and can be achieved in a myriad of ways, for example by enclosing both the dependent element and the source elements in a common container, setting position: relative on the container, and using position: absolute; top: 0 on the element we want displayed first – and here’s how it works.
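A minimal sketch of that trick (all names are mine):

```html
<div class="group">
  <!-- the sources of state come first in source order… -->
  <input type="checkbox" id="a">
  <input type="checkbox" id="b">
  <!-- …the dependent element comes last, but is displayed on top -->
  <div class="status"></div>
</div>

<style>
  /* reserve room at the top of the container for the status line */
  .group { position: relative; padding-top: 2em; }
  .group .status { position: absolute; top: 0; }

  /* the status text depends on the checkboxes that precede it;
     the more specific selectors win in the cascade */
  .status::after { content: "none checked"; }
  #a:checked ~ .status::after,
  #b:checked ~ .status::after { content: "some checked"; }
  #a:checked ~ #b:checked ~ .status::after { content: "all checked"; }
</style>
```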

With all their powers combined…

Dry examples are fun and informative (don’t forget to test them with JavaScript disabled, too!), but only once you see it all working together does the power of it all become evident. So, enjoy. This example uses minimal JS to set checkboxes back and forth (in the way used in the ownCloud calendar sharing interface), but nothing more. With JS disabled it should still show the group’s status (“all checked”, “some checked”, “none checked”).

All examples are valid HTML5 and valid CSS3. Of course, the code is on-line, you can grab it here. I’d love to hear your opinion, or see some other non-JS hacks.


There is some serious magic that can be done with CSS. Once we get an “is parent of” selector and a truly universal sibling selector, it will open up even more possibilities. JS seems convenient, but more and more people are looking with distrust (or disgust) at JS-infested websites, due to the performance and privacy issues involved.

If something can be done in pure CSS/HTML, why not do it that way?

Information Account Number


I have been running the Warsaw Cryptoparty for a few months now. One of the most important parts of each such meeting is, of course, explaining GPG/PGP. Which, as should be obvious to anybody who has ever tried explaining it to non-tech users, is a highly non-trivial task.

Once you start digging into private and public keys; the need to spread public keys but keep private ones, well, private; keysigning; and all the rest of the highly technical and often counter-intuitive issues around it, the user either slumps into slumber, or loses track (and therefore interest) entirely.

There must be a better way, right?

Well, how about we completely invert the way we explain GPG/PGP? Instead of starting off with technicalities and moving towards the user-space, as we used to do, maybe we should start with something familiar, something that offers users an easy and intuitive grasp of the basics, and then, step by step, move towards the technicalities when and if needed?

So, we need an easy-to-grasp yet fitting metaphor that allows us to hook into users’ existing intuitions, while at the same time not causing them to pick up bad crypto habits.

It’s all just numbers, right?

This is my GPG/PGP key fingerprint:
07FD 0DA1 72D3 FC66 B910 341C 5337 E3B7 60DE C17F

I have it printed on my business card, for example. This is enough to download my public key from a keyserver, verify it, verify an e-mail signature, and get my e-mail address or jabber ID from it. But I don’t want to explain all this to a user who just got my business card.

Instead, I can just say:
Think about it as if it was my Information Account Number. You can use this number to send an encrypted, secure message to me, via any channel you wish (for example, e-mail). Also, if you see a message signed by this key, you can be certain it is from me.

This instantly rings a bell with anybody who has ever used a bank account number to move money around. And it works. Think about it:

  • Whose account number do you need to transfer somebody some money? The addressee’s.
  • Whose account number do you check to be sure where a transfer came from? The sender’s.

And in GPG/PGP:

  • Whose fingerprint (and hence, public key) do you need to encrypt a message to somebody? The addressee’s.
  • Whose fingerprint do you check to verify the origin of a signed message? The sender’s.

Additionally, the user can then use any GPG/PGP-supporting software (an e-mail client, a jabber/XMPP client, even some newfangled Facebook interface) to send me an encrypted message and read an encrypted message that I would send to them. From the user’s perspective the channel is not that relevant, really – or at least it doesn’t have to be.

All the technicalities and complexity can now be effectively hidden by the software.

Ah, software, so that’s the catch!

Well, not really. The only things the software (e.g. an e-mail client) needs to implement for this metaphor to work flawlessly are:

  • allowing the user to easily generate their own “information account number” (so, under the hood, a GPG/PGP keypair);
  • accepting the key fingerprint in the addressee field.

In the e-mail world, which I would wager is the most important here, the former is already handled decently by Thunderbird with Enigmail – keypair generation is dead easy. Minor enhancements could be made to make the language being used a bit less technical, but as far as I’m concerned, we’re home here.

The latter is more or less trivial: a handler that recognizes a properly formatted fingerprint and does all the work (downloading the public key, letting the user choose an identity if several are provided with the key, encrypting the e-mail, etc.) under the hood. Actually, a proof-of-concept already exists, too (you can use it to send me encrypted e-mail right now).
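As a rough sketch of the recognition step, here is how such a handler might detect a fingerprint in the addressee field and normalize it before doing anything else. This is an illustration only – the function name and exact formatting rules are my assumptions, not the actual proof-of-concept code:

```python
import re

# A full GPG/PGP fingerprint is 40 hexadecimal digits, customarily
# printed in ten groups of four, e.g.:
# "07FD 0DA1 72D3 FC66 B910 341C 5337 E3B7 60DE C17F"
FINGERPRINT_RE = re.compile(r'^(?:[0-9A-Fa-f]{4}[ \t]*){10}$')

def parse_fingerprint(addressee):
    """Return the normalized 40-digit fingerprint if the addressee
    field contains one, or None if it looks like a regular address."""
    candidate = addressee.strip()
    if FINGERPRINT_RE.match(candidate):
        return re.sub(r'[ \t]', '', candidate).upper()
    return None

# If a fingerprint is recognized, the client would then fetch the
# public key from a keyserver, let the user pick one of the identities
# provided with the key, and encrypt the message -- all under the hood.
```

Feeding it the fingerprint from my business card yields the bare 40-digit hex string, while a regular e-mail address yields None – so the client knows which path to take.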


The bank account intuitions seem to work to our advantage here but there might be something I don’t see. If there is, do contact me or comment in this Diaspora thread.

There are also some changes (simple enough, IMHO, but still) that need to be made to software (good candidates are MailPile and Enigmail) for this metaphor to work all the time. Still, I think it offers a good deal of simplification without being oversimplified. At the very least I hope it will spark a discussion on how to efficiently pass the knowledge on to users without scaring them off, hopefully by hooking into their pre-existing intuitions.

Friends of TTIP and data protection in Brussels

This is an ancient post, published more than 4 years ago.
As such, it might not anymore reflect the views of the author or the state of the world. It is provided as historical record.

I had the pleasure of attending the Friends of TTIP Breakfast Debate #4, about data protection, privacy and TTIP. What’s symptomatic is how one of the questions posed in the information about the debate was phrased:

  • “What are the dangers of data protection for TTIP?”

What is clear from this little titbit are the priorities of the people involved in TTIP negotiations. Namely, their main interest isn’t protecting the rights and freedoms of EU citizens. It’s getting TTIP signed into law. Well, at least now we know, right?

Anyway, a friend had prepared me with some interesting questions that were both on-topic and probably quite stirring (we’re not great friends of TTIP ourselves). I was not prepared, however, for the surprise I was about to get.

Namely, panel speakers had many of the same questions!

The Panel: NSA, data protection and TTIP

The debate revolved mainly around the question of whether the Snowden leaks and data protection are relevant to the TTIP negotiations. There were some who claimed that these issues are separate and do not influence one another (namely, EPP MEP Axel Voss and Erica Mann, a Socialist MEP turned Facebook lobbyist). Mrs Mann went as far as to say that mixing privacy, data protection and TTIP discussions is dangerous. Mr Voss acknowledged, however, that safe harbour has its weaknesses.

Mrs Mann also stated, to my amusement, that “big ICT companies, like Facebook, are as European as they are American, as they operate here and have a lot of users here” and that there are already several EU companies building upon the infrastructure of Facebook (so, “Facebook is good for EU business” argument).

Kostas Rossoglou, a senior legal officer at BEUC, took a firm stance that data protection is a crucial topic for the TTIP negotiations. Or, rather, that data protection is not up for negotiation at all, as privacy is considered a human right in the EU and cannot be waived in a trade treaty.

In no uncertain terms Mr Rossoglou stated that TTIP cannot be allowed to weaken existing EU consumer and data protection regulations, that arbitration processes regarding data protection and safe harbour are ineffective, and that the NSA scandal is clearly connected both with data protection and with TTIP – after all, how can we trust self-regulation by companies that have already broken our trust (by giving away users’ data to three-letter agencies, regardless of whether it was legal in the USA or not)?

The conclusion is simple: safe harbour has proven ineffective at protecting EU citizens’ rights, being a de facto free pass for US companies and US agencies to work around EU personal data protection regulations; and the US and EU privacy protection systems seem irreconcilable.

Rainer Koch, of Deutsche Telekom (and the fourth person in the panel) didn’t have much to say about data protection and privacy, being more interested in competitiveness that TTIP purportedly would strengthen.

Question time!

Then there were questions from the public. Several people spoke; notably, a representative of the European Commission commented on the topic by stating that NSA and data protection are not, in their opinion, topics related to TTIP, as TTIP negotiations are not concerned with fundamental rights (that’s an interesting take on the fact that TTIP simply seems to ignore data protection and privacy as fundamental human rights in the EU).

There was also a person from the US Department of Trade (the US party to the negotiations), explaining how well the arbitration actually works (hint: nope, it doesn’t, as a private citizen who wants to actually start an arbitration process has to pay several hundred dollars, non-refundable).

Many of the questions I wanted to ask had already been brought up by Mr Rossoglou, but I still had a few up my sleeve:

Secrecy of negotiations

Around 5000 amendments, 100 compromise positions, but still 0 documents the public can read; how can TTIP have legitimacy when negotiated in such an opaque, non-transparent manner? I seem to remember another trade agreement that was negotiated in secret…

To my surprise all the panelists agreed that more transparency is required. That’s a big step. Now we just have to make them follow up on that with concrete actions…

Facebook controls the economy

While many EU companies build upon Facebook, the power dynamic is extremely one-way – Facebook can unilaterally kill the whole industry with a stroke of a pen (just as Twitter has). Which European company has been represented in similar meetings in the US? Why isn’t there a European company here to support TTIP?

Of course, I had missed Deutsche Telekom’s participation in the panel, so I immediately retracted the last part – but the main question still stands. How can Facebook claim to be a boon to the economy if it can kill a whole branch of the ICT market just by changing its API policy?

Does Facebook share as much with EU security agencies?

Do the American companies which Mrs Mann claims are as European as they are American share as much data with European security agencies as they apparently do with the NSA?

Mrs Mann’s reply was “we only give data upon legal request” – which leaves us to wonder whether they heed such “legal” requests also in Belarus, Russia, China and Iran. After all, these countries also have courts and court orders, right?..

Facebook mixes the discussions itself

How can you claim, Mrs Mann, that privacy, data protection and TTIP topics are dangerous to mix while working for a company that actually builds its business model on this very mix? Private citizen data are your trade secrets! There was a time when Facebook didn’t even want to give users’ data to users that created the content themselves!

Somehow Mrs Mann was not willing to reply to this question.


During the wrap-up panel session all the panelists agreed that more transparency is needed. What was remarkable was that Mr Voss said that TTIP can be put to good use to renegotiate safe harbour; that the US has to understand how important an issue personal data protection and privacy is in the EU, and has to acknowledge that, as apparently the US government does not respect EU privacy regulations; that deep-mining and analysing data taken by the US government from Internet giants is simply not acceptable; and that safe harbour has to be improved, “otherwise we are not willing to go forward as we have in the past.”

These are some strong words, strong positions on important issues that seem so much better than what we had heard during the ACTA process. Hopefully they will influence the policy in the right way.

Social media, Polish Pirates style

This is an ancient post, published more than 4 years ago.
As such, it might not anymore reflect the views of the author or the state of the world. It is provided as historical record.

My not-too-optimistic (to say the least) evaluation of the Polish Pirate Party is not a secret to anybody, including their members and activists. All in all, my assessment of the Pirate movement in general is not clear-cut: much good came out of it, but it does seem like it’s time to move on (where to – that’s a whole different story).

One of the main points I tend to quarrel with the “P3” (as the Polish Pirates choose to call themselves) about is a certain popular social media portal. I can understand, of course, the need to reach out to people wherever they are (and for the most part they indeed tend to be on Failbook). I do feel, however, that the proper relationship (if any) between P3 and FB should be one of getting people off of FB and onto P3, not advertising FB on P3’s website.

Once somebody is already on your website, dear Pirates, why oh why do you see it as the right thing to do to send them back to the portal that is at the centre of most of the issues Pirates officially get themselves involved in (like privacy, surveillance, censorship and erecting walls – pun not intended – within the Internet, by privatisation of this once-free and open area for ingenuity, art and entrepreneurship)?

There is Diaspora, after all. A simple search there points to several active Pirate Party profiles. Most of them also use Zuckerberg’s roach motel of a social network, I’m sure, but at least they try to lessen its grip on human communication by offering a way for people not caught in the walled-garden trap to get information and interact with them in a decentralised way.

Why am I writing all this? Well, I find it curious, amusing, and very, very sad that while the Chilean Pirate Party started sharing with me on Diaspora today, the Polish Pirate Party doesn’t even seem to have a profile there.

A rude comment

This is an ancient post, published more than 4 years ago.
As such, it might not anymore reflect the views of the author or the state of the world. It is provided as historical record.

Swedish Pirates are deciding today whom they will put in the #1 spot on the ballot for the coming elections. The choice is between Christian Engström and Amelia Andersdotter.

I believe Amelia is a much better choice: she’s a good leader and a very effective lobbyist, having been incredibly active in copyright reform, privacy, transparency and other debates. She also has a deep and intimate understanding of issues that arise in these debates. She’s an avid public speaker, and has great connections with the backbone of the Pirate movement – hackers, hackerettes, hackerspaces all around Europe.

Rick Falkvinge decided to put his weight behind Christian – and that’s perfectly fine, of course. There are two eerie things about his support, however.

First of all, Rick based his support for Christian mainly on money:

The reason is simple: between him and the other candidate for the ballot’s top position, Christian is the only one funding my keynoting and evangelizing.

Secondly – and that’s a biggie! – Rick (a pirate!) decided to censor the comments on his blogpost, and commented:

(If you want to campaign for the other candidate, use your own damn blog. A number of rude comments deleted.)

So, hereby I am using “my own damn blog”. And for the record, here’s my “rude comment” that got censored along with other comments:

I’m with Asta on this one. I am following your site, Rick, for years, and you have inspired many people, including me, to act and to get involved in the copyright reform debate — and with solid results (to mention the anti-ACTA movement in Poland).

But this is dismal. I understand the need to finance your activities, but this is not the right reason for a political decision of this weight, not by any means!

In my opinion, the right person for the No 1 spot is, unsurprisingly, Amelia Andersdotter. You may believe otherwise. But the discussion should be based upon merit, not on who pays whom what.

I stand by my comment, as do the other “rude commenters”. This is no way to act for a pirate.

TEDx Warsaw Women and privacy

This is an ancient post, published more than 4 years ago.
As such, it might not anymore reflect the views of the author or the state of the world. It is provided as historical record.

I was planning to attend TEDx Warsaw Women (it happens to be coming up soon), as it looks like an interesting event. Unfortunately, its Organisers decided that:

  1. they don’t give squat about attendees’ privacy;
  2. they completely ignore those of us who do not have an account with a certain social network.

Registration is performed via a Google form only, and current info is published exclusively on Facebook.

I rarely receive such clear and unambiguous information regarding my persona non grata status at an event, so I would like to thank the Organisers for being so frank about it. I am not inclined to pay for attendance with my privacy and my data.

While we’re at it, however, I’d like to point the Organisers to some other TEDx and TED talks. For example Bart Jacobs’ Fat, Dumb, Happy & Under Surveillance and Mario Rodriguez’s Facebook Privacy & Identity - Exploring your digital self.

Chris Soghoian also made an interesting talk on Why Google won’t protect you from big brother, and Eli Pariser offered a birds-eye view on What FACEBOOK And GOOGLE Are Hiding From The World.

Had TEDx Warsaw Women Organisers seen any of these talks they might have come to understand why requiring potential attendees to surrender their privacy to companies that are well-known for their hostility towards it is, simply put, not cool.

Would they then decide otherwise? Hard to say. Operant conditioning, employed by these companies, tends to be very effective. Not only on rats.

Copyreform at CopyCamp 2013

This is an ancient post, published more than 4 years ago.
As such, it might not anymore reflect the views of the author or the state of the world. It is provided as historical record.

When a copyright reformist NGO organizes a conference together with (among others) one of the biggest collection societies in Poland, Google, and the Warsaw Hackerspace, you know stuff is going to happen. Especially when Eben Moglen is the keynote speaker, with Jérémie Zimmermann and many of the Polish “opennists” of anti-ACTA fame following closely.

And had you been there you would not have been disappointed!

You’d see a Google rep talking about open innovation and complaining about how Amazon complicates life for Kindle users, and how it’s all the fault of the European Union – just minutes after Eben posed a question about how the 20th century could have worked out if “books reported who reads them to the central authority”.

You’d see a copyright maximalisation lobbyist from a collection society trying to teach Eben about the free software movement and libre licensing of culture (the highlights: free software is an anti-copyright movement; Creative Commons licenses have been created by the users to force authors to give up their rights and their works).

You’d see talks about the history of copyright, the Internet master switch, the complicated relationship between copyright and privacy and many, many more (including mine, on how the Internet is not a problem).

One thing you would not have expected, however, is that the most important talk would be given by a politician. And that it would be…

…The Talk EU waited for 1.5 years

After the anti-ACTA protests and the 4th of July, 2012 vote to reject the treaty, it was obvious that the time for copyright reform had come. People had spoken, and politicians had heard them – or so it seemed, at least.

It also seemed obvious that just as the anti-ACTA protests had started in Poland, and just as the political will in the EU to reject it had first started to form in Poland, such a copyright reform initiative should come from Poland. And so, everybody waited for a Polish politician to pick the topic up and run with it.

The wait was long, but apparently it is finally over: Paweł Zalewski, a Polish MEP, announced at CopyCamp that he shall propose a pan-European copyright reform initiative (yes, the quality is ghastly), with four major points:

  • shortening the copyright term to 50 years after the death of the original author (the minimum that is allowed by the TRIPS treaty);
  • introducing so-called open norm for fair use in EU;
  • legalizing non-commercial remix;
  • removing criminal sanctions for infringement, legalizing non-commercial sharing of culture.

I had the opportunity to provide an opinion on Mr Zalewski’s ideas on behalf of the FOSSF (along with a few other pro-copyright-reform NGOs in Poland), and am quite happy with it: it’s actually close to my copyright reform wishlist for what can be achieved within the terms of binding international treaties (like TRIPS or the Berne Convention).

Mr Zalewski is now working out the exact shape and form of his proposal; it is to be presented in Brussels in November (and will almost certainly include proposed changes to the InfoSoc directive). So we may now hope that there finally is a politician who intends to pursue this topic.