
Songs on the Security of Networks
a blog by Michał "rysiek" Woźniak

Ecologic, Ford and surveillance

This is an ancient post, published more than 4 years ago.
As such, it might not anymore reflect the views of the author or the state of the world. It is provided as historical record.

A few months ago Jim Farley, Ford representative, blurted in a panel at CES that:

We know everyone who breaks the law, we know when you’re doing it. We have GPS in your car, so we know what you’re doing. By the way, we don’t supply that data to anyone.

Comments were not very positive, to say the least, and both Mr Farley and Ford’s PR manager retracted the statement immediately – underlining that the gathered data would only be used after anonymisation, or only with the driver’s explicit consent. In other words, “this is no surveillance”.

Of course, once the data reaches Ford’s servers the only thing keeping Ford from giving them away is their promise. Seems pretty thin to me – especially with the money insurance providers can throw at this (not to mention law enforcement).

Ford isn’t the only company that strives to “help” drivers by gathering data on them. A Polish startup, Ecologic (winners of the Warsaw Startup Fest), had this to say (emphasis mine):

Damian Szymański, What is Ecologic’s idea and how can it help us all lower costs of using cars?

Emil Żak, Robert Bastrzyk: Today nobody keeps track of the costs of using their cars. It turns out that annually these can add up to more than the value of the car itself. Tires, petrol, insurance, repairs, etc. It all costs. Our device analyses every action of the driver. It signals what we have done wrong and suggests what we can change to lower the cost of petrol, for example. Moreover, we have access to this data 24 hours a day.

Total surveillance?

Not at all. The question is how the driver drives their car. Ecologic is a mobile app, online portal and a device that you connect in your car. Thanks to that we can have all sorts of data, for example about combustion…

What kinds of data are collected? Ecologic’s website claims that the device is “equipped with the motion sensor, accelerometer, SIM card, cellular modem and GPS”, and that:

The system immediately begins recording operating data of the vehicle, the GPS position and driving techniques in real-time.

So the idea is to collect data like GPS position, acceleration and braking, vehicle utilization, and driving technique, and send it off to Ecologic’s servers. It doesn’t seem to differ wildly from what Ford has in stock, with the (apparently) nice addition that the driver can check their own data and stats. Sounds great!

However, a question arises: what happens with the data? Even if Ford’s “promise” not to share with anybody seems thin, Ecologic doesn’t even try to hide that the real money is in selling access to gathered data.

In the “For Who” (sic) section of their website we can find the real target group (emphasis mine):

  • Private users – keep an eye on the young driver in the family
  • Small business – fast and easy management of vehicles
  • Fleets – keep the fleet under control & save costs
  • Leasing Companies – lower the accident rate and track miles
  • Insurance – give discounts on no-claims & safe driving

Of course, one very important group is missing from that list: I am sure law enforcement will be quick to grasp the utility of requiring each and every car to have such a device installed – no more dealing with costly traffic enforcement cameras, and no loss of the ability to issue speeding tickets. After all, would Ecologic deny law enforcement access to the data?

Ah, but Ecologic cares about drivers’ impression of being surveilled:

Your driver after work can switch off live tracking to feel conftable without impression that he is “spied”. A button on the mobile app allows the driver to indicate that the current trip is personal and help you to track private km. (sic!)

So the driver can “switch off live tracking”, but the system will nonetheless help you (i.e. the employer) track “private km”? So this data also has to land on Ecologic’s servers, eh? Apart from the employer, who else will have access to this “private trip” data? Insurance companies? Law enforcement goes without saying, of course.

In the interview, Ecologic claims that:

It’s all about motivation and healthy competition. We need to change the way we think. Instead of a stick, we want to give people two carrots.

It’s a pity that for the drivers themselves this translates into three sticks – employer, insurance provider and law enforcement.

Blurry line between private service and public infrastructure


This is my NetMundial content proposal, with some typos fixed and minor edits.


ICANN and IANA decentralisation efforts mark an important milestone in the evolution of the Internet: there is finally widespread recognition of the fact that centrally controlled bodies pose a threat to the free and open nature of the Internet. ICANN and IANA are, however, but a small part of a much larger problem.

More and more, communication platforms and methods are secondarily centralized; that is, on a network decentralized at the lower protocol levels, services are being run that are centralized at the higher levels. Running on top of a network based on open standards are closed services, which are then used by other entities as the basis for their own services.

In other words, some private services – offering, for example, user authentication methods – are being used as a de facto infrastructure by large numbers of other entities.

If we recognize the dangers of a centrally-controlled domain name system, we should surely recognize the danger of this phenomenon as well.


It is of great value that the importance of decoupling IP address management and domain name system management from a single state actor has been recognized, and that there is currently a strong push towards multistakeholderism in this area.

There is, however, a secondary, emergent centralization happening on the Internet, one that can potentially pose a comparable, or even bigger, threat to the interconnected, open and independent nature of this global network.

This centralization is harder to perceive as dangerous, as it is not being actively supported by any state actor; hence, it flies under the radar of many Internet activists and technologists, who would react immediately had a similar process been facilitated by a government. It does, however, have the potential to bring negative effects similar to those of a state-sponsored centralization of infrastructure.

Another reason this process goes unnoticed, or its possible negative effects get downplayed, is that it is fluid and emergent from the behaviour of many actors, enforced by the network effect.

This process is most visibly exemplified in Facebook gathering over 1 billion users by providing a centrally-controlled walled garden, while at the same time offering an API to developers willing to tap into this vast resource, for example to use it as an authentication service. Now, many if not most Internet services that require a log-in offer Facebook log-in as one of their options. Some (a growing number) offer Facebook as the only option. Many use the commenting system devised by Facebook, which does not allow anonymous comments – a user has to have a Facebook account to be able to take part in the discussion.

Similarly, Google is forcing Google+ on YouTube users; to a lesser extent, Google Search is used by a swath of Internet services as their default internal search engine (that is, to search their own website or service). GMail is also by far the most popular e-mail and XMPP service, which gives Google immense power over both.

These are two examples of services offered by private entities (in this case, Google and Facebook) that have become de facto public infrastructure, meaning that an immense number of other services rely on and require them to work.

If we recognize the danger of a single state actor controlling ICANN or IANA, we can surely recognize the danger of a single actor (regardless of whether it is a state actor or not) controlling such an important part of Internet infrastructure.

Regardless of the reasons why this situation emerged (users’ lack of tech-savviness, service operators’ preference for solutions that are easiest and cheapest to implement and integrate, etc.), it causes several problems for the free and open Internet:

  • it hurts resilience

If such a large part of services and actors depend on a single service (like Facebook or GMail), this in and of itself introduces a single point of failure. It is not entirely in the realm of the impossible for those companies to fail – who will, then, provide the service? We have also seen both of them (as with any other large tech company) suffer large-scale downtime events, taking down the services built on top of them as well.

  • it hurts independence

In the most basic sense, any user of a service built on these de facto infrastructures has to comply with and agree to the Terms of Service of the underlying service (i.e. Facebook, Google). If many or most Internet services have that requirement, users and service operators alike lose independence over what they accept.

  • it hurts openness

Operators of such de facto infrastructures are not obliged to provide their services in an open and standard manner – running mostly in the application layer, these services usually thwart any attempts at interoperation. Examples include Twitter changing their API TOS to shut off certain types of applications, Google announcing the planned shut-off of XMPP server-to-server communication, and Facebook using XMPP for its internal chat service with server-to-server communication shut off.

  • it hurts accountability and transparency

With such immense and binary (“either use it, or lose it”) control over users’ and other service providers’ data, de facto infrastructure operators do not have any incentives to share information on what is happening with the data they gather. They also have no incentives to be transparent and open about their future plans or protocols used in their services. There is no accountability other than the binary decision to “use it or lose it”, which is always heavily influenced by the network effect and the huge numbers of users of these services.

  • it hurts predictability

With no transparency, no accountability, and a lack of standardization, such de facto infrastructure operators can act in ways that maximize their profits, which in turn can be highly unpredictable and out of line with users’ or the global Internet ecosystem’s best interests. Twitter’s changing of its API TOS is a good example here.

  • it hurts interoperability

Such de facto infrastructure operators are strongly incentivised to shut off any interoperability attempts. The larger the number of users of their service, the stronger the network effect, the more other services use their service, and the bigger the influence they can have on the rest of the Internet ecosystem. Social networks are a good example here – a Twitter user cannot communicate with a Facebook user, unless they also have an account on the other network.

This is obviously not the case with e-mail (I can run my own e-mail server), at least not yet. The more people use a single provider here (i.e. GMail), the stronger that provider becomes, and the easier it would be for its operator to shut off interoperability with other providers. This is exactly what Google is doing with XMPP.

  • it hurts innovation

Lack of predictability, openness and independence obviously also hurts innovation. What used to be a free and open area of innovation is more and more becoming a set of closed-off walled-gardens controlled by a small number of powerful actors.

It is also worth noting that centralized infrastructure at any level (including the level of de facto infrastructure discussed herein) creates additional problems at the human rights level: centralized infrastructure is easy to surveil and censor.

Hence, the first question to be asked is this: when does a private service become de facto public infrastructure?

At this point this question remains unanswered, and there is not a single Internet Governance body, or indeed any actor, able to answer it authoritatively. Nevertheless, we are all in dire need of an answer, and I deem it a challenge for Internet Governance and an important topic that should be included in any Internet Governance Forum, now and in the future.

The second question that ever more urgently requires an answer, if we are to defend an open and non-balkanized Internet, is: what should be done about private services that have become de facto public infrastructure?

This question is also as of yet unanswered, but there are several possible proposals that can be made, including treating such situations as monopolies and breaking them up (thus handling them outside Internet Governance), requiring a public, interoperable API to be made available to other implementers, etc. This is perhaps not exactly within the purview of Internet Governance; it is, however, crucial for the Internet as a whole, and I propose it be treated as a challenge to be at least considered at IGFs henceforth.



Usually when I rant (well, write) about public consultations of some government ideas, there’s not much good I can say. Well, for once, this is not the case.

The Ministry of Administration and Digitization is working on its position for the upcoming NetMundial Internet stakeholders meeting in São Paulo. To prepare for that, the Ministry has announced a call for comments on a document prepared by the European Commission about Internet governance, and has invited several organisations and companies to weigh in on the topic at a multistakeholder meeting in meatspace.

The topic is immensely important, and I hope to elaborate on it soon. In the meantime, however, I’d just like to say that, for some time now, NGOs that are interested and competent in this area no longer have to knock on ministries’ doors. Instead, we’re invited alongside ISPs, telcos, and large Internet companies, and can freely voice our opinions. Sometimes we even get listened to.

Even better, this time one of the NGOs invited to comment and for the meeting was the Warsaw Hackerspace.

So we got addresses into official ministerial communication, and two hackers into ministerial corridors. Expecting the media to go crazy about it in 3… 2… 1…

Encrypted VoIP that works


Some of you might have already noticed (for example via my Diaspora profile) my infatuation with RetroShare – a very interesting communication and file-sharing tool that deserves a proper, full review, for which I unfortunately do not have time.

There are some good things (full peer-to-peer decentralisation, full encryption), and there are some less good things (the use of SHA1 and the daunting GUI). But today RetroShare really shone, and in an area that is constantly a chore for free software…


Now, I know there are many free software projects trying to do VoIP, but none seems to be “there” yet. SIP is hard to set up; Jitsi works within a single server, but for some reason I have never been able to get a working VoIP call via Jitsi with a contact from a different server. One project that was closest to being usable was QuteCom… “was”, as there hasn’t been a single new release for 2 years now.

Enter RetroShare.

Just download the software, install it, and have the keys generated (that happens automagically); then download the VoIP plugin if you don’t have it already included (chances are, you do; if not, on Linux the retroshare-voip-plugin package is your friend – users of other OSes can look here).

Now add a friend, start a chat and voilà – VoIP works. No account on any server needed, no trusting a third party, and it works behind NATs (tested!). And it is already encrypted, so no one can listen in on your communication.

The amazing part? During testing my lappy suspended to RAM. After waking up a few minutes later, the call worked as if nothing had happened.

So you want to censor the Internet...


Internet censorship proposals are tabled with amazing regularity – and are usually completely detached from the reality of how the Internet and digital communication work. For the proponents, censorship seems an “easy and effective solution to a problem”, while in fact technical solutions to social problems simply do not work, and have a tendency to break things. Badly.

In preparation for one of the consultation meetings on this subject (even though the Polish political climate is rather hostile to censorship ideas at the moment, we still get consultation meetings about it from time to time) I have prepared a list of questions that have to be asked and answered regarding any central-level “parental filter” Internet censorship proposal (PDF and ODT available; I’d like to thank Mr Adam Haertle for his suggestion on extending question no. 11).

If anybody feels like using this as a base for a checklist, please be my guest! Same goes for additions, suggestions, improvements.

Internet censorship questions

This document attempts to gather all the relevant questions that need to be asked and answered with regard to any proposal of introducing a central-level Internet porn censorship solution, and can be used as a map of the related issues that would also need to be decided on.

Questions herein are for the most part not deeply technical and do not require an answer containing any concrete technical solutions. They also do not touch economy-related issues.

1. What definition of pornography is to be used in the context of the proposed solution? In particular:
   i. Are graphic works and animations not created via image recording techniques to be included in that definition?
   ii. Are textual works describing sexual acts to be included also?
   iii. Are audio materials to be included?
   iv. Are works of art containing or presenting nudity to be included? If not, how are they going to be differentiated?
   v. Are biology and sexual education materials to be included? If not, how are they going to be differentiated?

2. Who is to decide on putting given content on the blocked content list? In particular:
   i. What oversight measures are proposed to combat instances of putting (willfully or by mistake) non-pornographic content on said list?
   ii. Will the blocked content list be public, or secret?
   iii. If the list is to be kept secret, what are the reasons for doing so?

3. How is the content to be blocked going to be identified? In particular:
   i. Is content identification to be based on textual keywords within the content itself?
   ii. Is it to be based on keywords in the URL leading to the content?
   iii. Is it to be based on an explicit blacklist of URLs?
   iv. Is it to be based on an explicit blacklist of domains?
   v. Is it to be based on an explicit blacklist of IP addresses?
   vi. Is it to be based on image recognition?
   vii. Is it to be based on audio recognition?
   viii. Is it to be based on checksum comparison?
   ix. Is it to be based on a combination of methods? If so, which methods are to be employed?

4. What remedy procedure is considered in case of blocking of content that does not fulfill the definition of pornography? In particular:
   i. Where and to whom are such incidents to be reported?
   ii. What would the confirmation or denial procedure for such reports be?

5. What remedy procedure is considered in case of not blocking of content that does fulfill the definition of pornography? In particular:
   i. Where and to whom are such incidents to be reported?
   ii. What would the confirmation or denial procedure for such reports be?

6. Are parents/legal guardians/subscribers to have control over the scope of blocking? In particular:
   i. Will they be able to indicate that given content should be excluded from blocking, even though it does fulfill the definition of pornography?
   ii. Will they be able to indicate that given content should be blocked, even though it does not fulfill the definition of pornography?

7. Is the blocking solution to be opt-in, opt-out, or is the choice to be presented upon first connection? In particular:
   i. Is the choice going to apply to all devices using a given connection?
   ii. Is the choice going to apply only to a particular device on any connection?
   iii. Is the choice going to apply only to a particular device on a particular connection?

8. Is the choice to enable blocking to apply also to institutional subscribers and companies? In particular:
   i. If not, does that mean no blocking, or mandatory blocking?
   ii. Is it to apply to libraries?
   iii. Is it to apply to schools?
   iv. Is it to apply to universities and other higher education institutions?
   v. Is it to apply to public hot-spots run by local communities?
   vi. Is it to apply to public hot-spots run by private service providers?
   vii. Is it to apply to hot-spots provided only for private service providers' customers?
   viii. Is it to apply to hot-spots run by private companies for their employees?

9. Will content explaining how to circumvent blocking also be blocked?

10. How is HTTPS or other SSL/TLS-encrypted traffic to be handled? In particular:
   i. Is HTTPS/TLS/SSL traffic to be ignored altogether?
   ii. Is HTTPS/TLS/SSL traffic to be blocked?
   iii. Is HTTPS/TLS/SSL traffic to have its encryption layer broken and content filtered?

11. How is private communication to be handled? In particular:
   i. Is e-mail and Internet messaging communication to be filtered?
   ii. Are peer-to-peer networks to be filtered?
   iii. Are MMS messages to be filtered?
   iv. Is private audio-video (including VoIP) communication to be filtered?
   v. Is private audio communication via regular and mobile phones to be filtered?

12. How is encrypted private communication to be handled? In particular:
   i. Is such communication to be blocked?
   ii. Is such communication to be ignored?
   iii. Is such communication to have its encryption layer broken and content filtered?

13. Are solutions regarding HTTPS/TLS/SSL and private and encrypted private communication to be implemented in networks operated by institutional subscribers and companies, as per question 8. above?
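To make question 3 concrete, here is a minimal, purely illustrative Python sketch – not any real filter; every keyword, URL and checksum below is made up – of three of the identification methods it lists, each with an example of how such a method misfires:

```python
import hashlib

# Hypothetical blocklists for three methods from question 3.
BLOCKED_KEYWORDS = {"porn"}  # method (i): keywords within the content
BLOCKED_URLS = {"http://example.com/bad-page"}  # method (iii): URL blacklist
BLOCKED_CHECKSUMS = {hashlib.sha256(b"known bad file").hexdigest()}  # method (viii)

def blocked_by_keyword(text: str) -> bool:
    # Naive substring matching over the page text.
    return any(kw in text.lower() for kw in BLOCKED_KEYWORDS)

def blocked_by_url(url: str) -> bool:
    # Exact match against a blacklist of URLs.
    return url in BLOCKED_URLS

def blocked_by_checksum(payload: bytes) -> bool:
    # Exact match of the file's hash against known checksums.
    return hashlib.sha256(payload).hexdigest() in BLOCKED_CHECKSUMS

# Keyword matching over-blocks: an educational text trips the filter.
print(blocked_by_keyword("research on pornography addiction"))  # True
# URL blacklists under-block: moving content one address over evades them.
print(blocked_by_url("http://example.com/bad-page-2"))  # False
# Checksum blacklists are evaded by changing a single byte of the file.
print(blocked_by_checksum(b"known bad file."))  # False
```

Each method trades off over-blocking against trivial evasion, which is exactly why the questionnaire asks which method, or combination of methods, a proposal actually means.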

I’d love to see answers to these questions from each and every person who proposes or supports central-level “parental filter” Internet censorship.

This is why we can't have nice IRC


…or how the FOSS Foundation and the Warsaw Hackerspace got (temporarily) banned from #debian at Freenode.

Asking about Ubuntu on Debian’s IRC channels is not considered nice – and being a lurker there for years I can understand why. These are two different systems, and trying to get Debian people to work on your Ubuntu problem is more often than not wasting their resources and your time. There are better places to get support for Ubuntu.

Having said that, when somebody makes such a misstep, the right way to proceed is to inform them – especially when the question is not about Ubuntu itself, but about a tool used by both distros.

I can understand that many people ask such questions in #debian, and that some need a bit more of an incentive to move to the right channel. We wouldn’t want to ban a whole hackerspace because of one user like that, now would we?

“You should know better”

Well, apparently some would. And not only that – every single other person who asked why the whole Warsaw Hackerspace network was banned from the channel also got immediately banned, with a dry explanation in the kick message:

you should know better

Because I asked about the situation, while connecting from the Free and Open Source Software Foundation’s infrastructure, the whole FOSSF got affected:

[00:08:35](02.02.14) <rysiek|pl> abrotman: hey, that's a damn good idea to just ban a whole hackerspace because somebody asked about apt-get in #debian
[00:08:55](02.02.14) *** Mode #debian +o abrotman by ChanServ
[00:08:56](02.02.14) *** Mode #debian +b *!* by abrotman
[00:08:57](02.02.14) <-* abrotman has kicked rysiek|pl from #debian (you should know better)

Now that’s a way to make new friends, abrotman!

“The ban will expire”

Friends or no friends, the whole FOSSF network got banned from #debian. We’re doing a lot on that distro, all our servers are running it, providing stable and safe services for projects we run. Bottom line – if we’re banned from #debian, spreading Free Software in Poland gets this much harder.

So I started to look around for ways to get in contact with people that might be able to help. Posted on Diaspora, asked in #freenode, got sent to #debian-ops. There I (and several other people from the Warsaw Hackerspace) have tried to reason with the op in question:

[00:33:20](02.02.14) <abrotman> and having you both come in and whine doesn't help
[00:33:23](02.02.14) <q3k> you jus tbanned a community od ~60 people
[00:33:33](02.02.14) <q3k> which is not really excellent.
[00:34:15](02.02.14) <rysiek|pl> abrotman: "come and whine"? I'm sorry, but you just banned a host with many users owned by the organisation I represent
[00:34:34](02.02.14) <rysiek|pl> abrotman: because I asked about your attitude towards a user in #debian
[00:34:35](02.02.14) <abrotman> The ban will expire, folks can ask for a +e

“The ban will expire” was the only real answer we got.

“You had to escalate why?”

Turned out there are many things that “won’t help”:

[01:13:46](02.02.14) <abrotman> Posting on diaspora probably won't help ..
[01:15:17](02.02.14) <rysiek|pl> abrotman: and this is my fault... how?
[01:15:31](02.02.14) <abrotman> You had to escalate why?

I guess the question about escalating is the real question here. Did it have to escalate to banning the whole hackerspace network because somebody asked a question containing the word “Ubuntu”? Did it have to escalate to banning the whole foundation network because I asked why the Warsaw Hackerspace got banned from #debian?

Being excellent to each other

I understand that being an op is a tough job, I really do, especially on very popular channels like #debian. And I understand that people get tired, annoyed and frustrated doing it. I appreciate their work, just as I would like people to appreciate the work I do.

But that is no justification for indiscriminately banning whole networks. As Quinn Norton has said at 30C3, “it is time for us to up our game”. I believe we can do better.

The bans have been lifted now, thanks to some other good soul in the #debian channel, and I hope once all parties involved get some well-deserved sleep, we’ll be able to draw conclusions, and then go past this.

Decentralize where your mouth is


I have lately come across such a comment:

Hi sorry for barging in, but with all of the projects now based around decentralisation, I thought a common place to exchange ideas would be good.

“Oh”, thought I, “this is gonna be good!” A grand idea: let’s create a place to talk about decentralisation. After all, decentralization is so important and all. Where should we create such a place? Diaspora? Friendica? Any other decentralized, federated service?

I have created a subreddit as a place for as many projects to collaborate and share experiences, research and general comments.


What better place to talk about decentralization than a centralized service, right?..

It’s not technology we need to change, it’s our mentality.

A link cannot be illegal


For some time now there have been ideas – even in Poland – to criminalize merely linking to copyright-infringing materials on the Internet. This idea was also floated recently at the 5th Copyright Forum, by Polish copyright collectives like ZAiKS, and by a private company.

It’s not even the level of absurdity that gets me with ideas like this – it’s the fact that some people are apparently able to maintain a straight face while proposing them. It’s hard to be sure whether this is a result of ignorance of some very basic concepts (not even regarding electronic communications, but human communication itself), or a conscious attempt at solving “the Internet problem”.

Anyway, let’s go from the top.

Internet hyperlinks fulfil the same role as bibliographic information: they merely state where a given referenced piece of content resides, without transmitting or copying any of it themselves. A link itself does not and cannot infringe upon anybody’s rights – it’s just information about where to look for given content.
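The point that a link carries no content can be shown directly. In the sketch below (the URL is made up for illustration), the anchor tag holds nothing but the address of the referenced page, however large that page may be:

```python
import re

# A hyperlink is just a reference: the anchor tag contains only the target's
# address, none of the target's actual content.
link_html = '<a href="http://example.com/some-article">see this article</a>'

# The only thing recoverable from the link itself is the address it points to.
href = re.search(r'href="([^"]+)"', link_html).group(1)
print(href)  # http://example.com/some-article
```

Nothing of the referenced work is reproduced by the link – just like a footnote in a bibliography.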

That means that a link is also useful information for those trying to combat copyright infringing materials. Thanks to links it’s easier to find content published without proper authorization in order to remove it from the Internet – of course in accordance with due process of law. Why do organisations that claim to work in authors’ interests try to deny themselves such a useful tool?..

Crucially, a person linking to some content has no way of ascertaining the legal status of that content. Even lawyers often have problems with that, due to copyright law’s Byzantine level of complication. Why would it be okay, then, to require such knowledge and insight from any person linking to any content on the Internet?.. By the way, I wonder if every single link on the ZAiKS website is “legal”…

It is also interesting what the sanctions for such a heinous crime should be. Should we expect a police raid and extradition for the simple act of linking to some content on our website?

And what about links to websites with such links? If a website containing “illegal links” would be itself illegal, linking to it would also be illegal, right? And how about links to websites that link to such websites? And so on… We can either stop it at the only sane place – the very beginning – by saying “links can’t be illegal”, or it’s turtles all the way down.

Regardless of possible sanctions, making linking potentially illegal would make Internet users afraid of linking. The very core functionality of the World Wide Web would suddenly become a minefield. The effects would not be very different from Iran’s “halal Internet” – with the small difference that purposeful censorship would be replaced with self-censorship. Maybe that’s the real aim of this proposal?

Proponents of this “solution” claim it is necessary due to the ineffectiveness of notice and take down procedures for removing infringing materials. People who engage in intentional commercial copyright infringement can, after all, change addresses and domain names very easily, and very fast. They can register their website or company abroad, making enforcement harder.

For some reason, proponents of making linking illegal seem to think that the operators of warez sites (which are the direct target of this proposal) can’t do the same thing with link farms as they do with their file hosting. Trying to enforce removal of “illegal” links will face the very same problems that enforcement of notice and take down procedures faces, and will be similarly ineffective.

It will, however, have one tangible effect: people registering and hosting perfectly legal websites in Poland will be afraid to link to anything – how can they be sure it’s not illegal?..

That’s still not everything, though! Obviously, searching for “infringing links” and sending out take down notices would have to be automated, just as infringing materials are often searched for and handled automatically. Such algorithms make mistakes, leading to removal of perfectly legal content.

For big players that might not seem to be a problem, but small music producers and individual artists – especially those publishing their works under libre licenses – will be severely disadvantaged, as they have incomparably fewer resources at their disposal for demanding the restoration of incorrectly removed or blocked content. Considering that for such artists freely available content might be an important element of their business model, for example helping build their fan base, such removal or blocking can cause real financial losses. Would copyright collectives fight for the rights of libre artists in such a situation?..

The same problem that is so clearly visible with notice and take down at the source will only be amplified if some links were made “illegal”. If (as with notice and take down) there are no sanctions for unfounded removal of a non-infringing link, service providers, hosting operators, etc, will prefer to be “rather safe than sorry” and remove everything that gets a notice. Links to libre-licensed content will get removed, and artists publishing them will inevitably suffer concrete financial and non-financial losses.

I do think, sometimes, that maybe such absurd ideas should be supported, as their introduction would help push people out of walled gardens and into decentralised, encrypted networks and tools like RetroShare, Tor or Freenet.

Tools that would be much more effective in combating censorship and surveillance of their users.


The Court of Justice of the European Union has just ruled on a case with linking as the central issue – the verdict: linking is not illegal.

Now, this does not cover the exact case discussed here (linking to a material that is published illegally), but is nonetheless an extremely important outbreak of common sense.

Copyright reform debate lives on

This is an ancient post, published more than 4 years ago.
As such, it might not anymore reflect the views of the author or the state of the world. It is provided as historical record.

Since Polish and European citizens voiced their opinions on the need for copyright reform so clearly 2 years ago, there is a feeling of anticipation in the air – what’s next? Brussels-based politicians hint (or outright state publicly) that everybody is waiting for some Polish move.

Can’t say I blame them. Widespread Anti-ACTA protests started in Warsaw; Polish Prime Minister was the first to admit ACTA was a mistake; politicians from Poland were also the first to grasp that (using the not-exactly-fitting language of Polish MP Gałażewski) ACTA was “passé” and also the first to start asking the right questions.

This past September something finally happened. At CopyCamp, Polish MEP Paweł Zalewski shared his ideas for copyright reform in the EU, and about two months later, together with Amelia Andersdotter and Marietje Schaake, announced them officially in the European Parliament.

A month later the European Commission opened up consultations on the InfoSoc directive reform – a process we should all take part in! Time is of the essence, as the deadline is the 5th of February, but there are tools that help get involved. Use them!

Soon afterwards the Polish Ministry of Culture opened local consultations in order to create an official Polish stance in the InfoSoc reform consultations.

The Copyright Forum is one of the after-effects of the ACTA debate (the Ministry was responsible for the treaty within the Polish government), and of other situations where the Ministry’s decisions and processes seemed less than transparent, so to speak – for which it has been heavily criticised.

The Ministry seems to learn from its mistakes, and does not wish to ever be called “non-transparent” again – hence the Copyright Forum was born. Long story short, any and all organisations interested in copyright and its reform now have a chance to voice their opinions in an open debate, facilitated by the Ministry. Finally!

New consultations, old misunderstandings

The 5th Copyright Forum was about:

As far as a sane stance on these issues is concerned, please see the Open Education Coalition’s response in this consultation process. I want to focus here on something else.

This was not the first (nor, hopefully, the last) copyright consultation meeting I have taken part in. Even though we (“us, opennists”) have been explaining our position for years, we still encounter a basic lack of understanding (as I am not going to assume malicious, conscious mangling) of what we’re trying to say. It was clearly present in statements made at this Forum, too. Let’s have a look at the most “interesting” ideas and misrepresentations, shall we?

“Linking to illegal content should be illegal itself”

The idea here is that the mere linking on the Internet to content that in some way infringes somebody’s copyright should itself be illegal, because notice-and-take-down procedures are slow, complicated and ineffective.

Before we dive into how bad an idea this is, let’s stop for a moment on the “illegal content” part. That’s another of those language constructs artificially used to slant the debate before it even starts. “Illegal content” is content that is illegal to share, reproduce, etc, under any circumstances, regardless of whether or not you have a license for the content itself. If Polish copyright collectives claim that the “content” created by the artists they (supposedly) represent is “illegal”, maybe they should call the Police?..

If we’re talking about infringement, we should call it infringement, nothing more, nothing less.

An idea that was also present on the Forum and is closely related to “making linking illegal” strategy is “making search engines remove links to infringing content”. Both ideas are completely absurd, for a number of reasons too long to be put in full here; here’s the skinny:

  • links are purely informational, just as bibliographic notes; penalisation of linking is as absurd as penalisation of bibliographic notes;
  • removing links to infringing content is sweeping the problem under the rug, instead of solving it at the source (e.g. by removing the infringing material);
  • a person that links to a given content has no practical way of ascertaining the legality of said content, not to mention that this legality can change over time;
  • this whole idea is claimed to be “necessary” in the light of “ineffectiveness” of notice and take down; well, if notice and take down is ineffective, what makes the proponents of such a measure think that they will have any more luck with removing links than with removing content itself?
  • regardless of its ineffectiveness, it will cause problems for works published under libre licenses, including free software.

More in-depth arguments are also available.

“Users of culture”

The division between “users” (or “consumers”) of culture, and “creators” of it, is as old as it is outdated. It had, perhaps, some sense in the times of mass media, with their clear difference between broadcasters and audience. Today all you need to become an artist is a laptop, and all you need to reach your audience is the Internet.

Read-only culture became read-write again, finally. There is no meaningful line of division between “users” and “creators”. Everybody can be one or the other, as they choose.

Users’ responsibility

According to Polish copyright law today, if I have access to a given work, I can download it and use it (including sharing it non-commercially with my family and friends). I do not have to check whether or not that content has been shared with me legally. That’s the sharer’s problem.

Of course that’s something hard to swallow for the copyright collectives and their ilk. Hence the idea to change that: to make the user responsible for downloading and using content that might have been illegally published or shared. A proposal that suffers from some of the same problems as the “making linking illegal” one above. Namely, how can a user check that, if even courts tend to have problems with it?

Should illegally-shared works be marked in a certain way? If so, whose responsibility would that be? The artists’? The sharers’ themselves? If the latter, how can one be sure that the content gets marked truthfully? If the former, artists would have to gain control over every single shared copy… while somebody who wants to share without proper authorisation will do so anyway.

Or maybe the “users of culture” (being creators themselves!) should limit themselves to just a few “kosher” channels? If so, which ones should these be? And who decides, on what grounds? Can I start such a channel myself, for example in the form of a blog, videolog, podcast? If so, how can my audience be sure that it is “legal” itself?

Finally, how should users of infringing content be punished? Are we to assume that copyright collectives are proposing the American model here?

“Everything that potentially allows anybody to make money – is commercial”

That’s an attempt at defining the hard to draw line between “commercial” and “non-commercial” use, and it’s done in a way that makes sure that any use of cultural work on the Internet is in fact commercial. After all, even if I were to completely non-commercially send a private e-mail with a picture attached to my family member, my ISP, their ISP and probably at least one ISP in between makes money in a quite real way.

Does it make sense to use such a broad definition of “commercial use”? After all, it’s legal for me in Poland to watch some movies together with my friends. But in such a situation there are several third parties that can profit from it – a taxi driver, a public transport company, the local grocery stores where we buy supplies for the evening… Does that make watching movies with my friends “commercial”? Or, for that matter, if I watch a movie by myself, the electrical company is going to profit a bit. Is that commercial too, then?

And by the way, how about the copyright collectives themselves – after all they employ people, who profit from their activities…

Non-commercial vs. libre-licensed

That one’s a classic, with authors publishing their works under libre licenses being called “creators not planning to make money on their works”.

How many times do we have to repeat, ad nauseam, that a libre-licensed work does not have to be a pro-bono work? There are many ways to make money on digital works – sponsoring, crowdfunding, work-for-hire and adverts are just a few of the most obvious.

There are big corporations and small firms publishing some of their products under libre licenses – Intel, Google, Red Hat, to name just a few of the best known. Stating contemptuously that publishing something under a libre license means that the author has no intention of profiting from it is either a sign of ignorance, or (much worse) a willful attempt at marginalising such creators.

“Some organizations are only interested in gratis access for users”

That’s also something we hear quite often. Usually from the same person. Mr Dominik Skoczek, once the head of the Intellectual Property and Media Department at the Ministry of Culture (he was the person responsible for the ACTA topic within the Ministry), today representing the Association of Polish Movie Producers (think: MPAA without the clout), has had an abundance of occasions to hear from us that libre licensing is about something other than cash.

And that it’s not about “users”, as everybody can be a creator.

I wouldn’t go as far as to assume malice on the part of Mr Skoczek; on the other hand, if I were to assume that after years of our patient explanations he still cannot grasp the not-so-complicated ideas behind our views, Mr Skoczek could understandably feel offended…

Instead of pondering the source of such lack of understanding, then, I shall simply explain once more that libre licenses, free culture, free software, etc, are not about gratis access, but about the possibility of creating and remixing. Creativity and culture never exist in a void; all creative work is derivative. Making works inaccessible for remixing for up to 70 years after the author’s death is a barbaric attack on culture itself.

Authors of libre-licensed works and organisations demanding libre-licensing of works created with public funds demand not “access” for all, but allowing creativity for all.

A Polish lawyer (and author of a well-known blog), Piotr VaGla Waglowski, asked one of the Polish copyright collectives (the one supposedly representing the rights of authors like him) about the possibility of receiving what could most aptly be described as “his money”. The answer he received was too long to quote in full here, but the important part is this:

Due to a very large number of entities entitled to receive these funds there is a danger of considerable atomization of remunerations to which they are entitled. This has direct bearing also on the possibility of such remuneration, namely on acceptable repartitioning model of the received funds. (…) In summary, we have no way of meeting your expectations at this time.

I don’t think I can find a better comment than VaGla himself:

By the way, can the non-remuneration of authors by copyright collectives itself be considered copyright infringement?

Creators’ inalienable right to financial gratification

That’s another very dangerous idea being passed under the guise of “working in artists’ interests” (by none other than copyright collectives, of course). The proposal: as authors are often in a much worse negotiating position when discussing remuneration terms with their publisher or producer, it should be made impossible for authors to waive or transfer their right to financial gratification. Thus, every time a work is “exploited”, the artists themselves will also have to receive money.

These, of course, would go through copyright collectives. But that’s okay, as “the only chance for money reaching the artist are copyright collectives”, right?

Regardless of who should receive such royalties, the very fact of their introduction would make libre licenses ineffective – each use, even of a libre-licensed work, would mean payable royalties. That means Wikipedia would be (financially) impossible, along with open educational resources and the rest of the libre side of creativity.

Summing it all up

I guess the best summary of the Forum and many of the ideas expressed therein is a quote by one of the attendees (can’t wait for the release of the video recording of the forum to link directly):

Representatives of some of the NGOs here think this is all about some ideals – it’s not, it’s about hard cash!

What could I possibly add to that! We’re all waiting for the official stance of the Ministry (and the Polish government) in the European Commission consultation process – but we need not wait without action on our own part!

Neat HaCSS, or let's de-JS the Web a bit

This is an ancient post, published more than 4 years ago.
As such, it might not anymore reflect the views of the author or the state of the world. It is provided as historical record.

I like playing with technology, and I am particularly fond of playing with CSS/HTML. Not a big fan of JavaScript, though.

Don’t get me wrong, I understand the utility of JavaScript, and the power of jQuery, I really do. However, I believe both are overused and abused – much, if not most, of the functionality of “rich Internet applications” (also known simply as “websites”), including transitions, animations and whatnot, can be implemented in HTML5 and CSS3, with JS used only for AJAX requests. The advantages: faster processing, smoother animations, more readable code, better separation of logic and presentation.

Lately I dived into two side-projects that allowed me to dust off my CSS-fu while making a point about how JS is becoming less and less needed as far as the presentation layer is concerned.

Sailing in the Cloud

The first little project is Sailors.MD, a simple website for my sailors-and-medical-professionals friends.

The second one is implementing something ownCloud (that great little project letting everyone self-host their own “cloud”) once had, but which has sadly since been removed: sharing calendars and events publicly via a link with a token, or “link-sharing”. I know of at least one potential deployment where that was a show-stopper. So after complaining for a few months I decided to implement it myself.


Sailors.MD was a no-brainer – a completely new project, a simple static website (that will grow, some day, but not just yet), no magical mumbo-jumbo, it just needs to look nice and be functional. No JS needed there, period!

Now, ownCloud, on the other hand… JavaScript. JavaScript everywhere! Hacking together a nice CSS/HTML-based interface for internal- and link-sharing of calendars and events, with JS providing just the bare minimum of AJAX-ness, stemmed from the frustration of debugging the JavaScript-based interface.

The Problem Challenge

I wanted the interface to retain full functionality – including animations, showing/hiding parts of the interface, etc – even without JavaScript enabled. There were several things that needed implementing in pure CSS/HTML, which seemed hard without JS:

  • sections of the interface collapsing/extending upon click…
  • …that are both directly linkable (i.e. via :target links) and persistent, allowing the user to interact with their contents and sub-sections;
  • showing some controls only when JS is enabled (a reverse-<noscript>, if you will);
  • making an element’s appearance depend on the state of elements that follow it (remember, there is no CSS selector/operator for “is a parent of”, and the general sibling operator ~ is one-way only);
  • elegant tooltips.


Let’s hack!

First of all, some caveats: the below has not been tested on anything other than up-to-date versions of Firefox, Chromium and Rekonq (although IE 10+ should work). If you want to test them in anything else, be my guest, and I would love some feedback.

Secondly, all code in examples linked below is MIT-licensed (info in the examples themselves also).
Why at all? Because it’s good practice to put some license on your (even simple) code so that the rules of the road are clear for other developers.
Why MIT? Well, I’m a staunch supporter of copyleft licenses like the AGPL, but this is just too small a thing, and building on too many great ideas of other people, for me to feel comfortable slapping the AGPL on it.

Okay, enough of this, on with the code!

The Targetable Displayable

Making sections of a website show/collapse upon click is not that hard once you wrap your head around one beautifully simple hack: CSS Checkbox Tabs. But I wanted more:

  • being able to put the menu in a completely different place in code than the section (i.e. making a menu on top of the site, for example, with sections hidden within the bowels);
  • section targeting via :target, so that they are directly linkable by users.

The first one is rather easy once you get that you can put a <label> wherever you like in the document, regardless of where the relevant checkbox is, as long as you set the label’s for attribute to the checkbox’s id.
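A minimal sketch of that detached-label setup (markup and names are mine, not from the original code) – the checkbox, the menu and the section are siblings, so the general sibling operator ~ can reach the section from the checkbox:

```html
<!-- the checkbox holds the state; it must come before the section it controls -->
<input type="checkbox" id="show-details" hidden>

<nav>
  <!-- the label can live in a completely different part of the document -->
  <label for="show-details">Toggle details</label>
</nav>

<section class="details">Only visible when the checkbox is checked.</section>
```

```css
.details { display: none; }
#show-details:checked ~ .details { display: block; }
```

Clicking the label toggles the checkbox, and the :checked rule shows or hides the section – no JS involved.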

Enabling :target was more tricky, though: #target links do not set a checkbox’s :checked state. Also, simply setting the id attribute on the section we want to be #target-linkable will not work: when the user then clicks a checkbox label to choose a different section, the targeted one will still be expanded. Checking a checkbox does not change the :target.

We could use:

:checked ~ :target {
    /* rules making the :target collapsed */
}

…but that would not work for any situation where the :target element is before the :checked checkbox (the sibling operator ~ is one-way, remember?).

Hey, why not use just :target and forget about checkboxes? Well, then we wouldn’t be able to have sub-sections, as there would be no way of saying “keep this section open even if the user chooses something else (the subsection)”, and there is no “parent of” operator (so there is no way of saying “keep it open if any of its children is a :target”). So:

  • :checked on checkboxes/radioboxes keeps state, and that’s a biggie;
  • :target is directly linkable;
  • there is no way to connect the two.

Or is there? If the :target elements are always before the checkboxes, and these are always directly in front of the element that contains our expandable/collapsible section, we might be able to get what we want – as long as all :target-able elements come before all relevant checkboxes. Tada!

What happens is:

  • from the get-go, no navigational checkbox/radiobox is checked;
  • if there is a #target, CSS rules for the right section container (based on the :target-ed hidden element) kick-in;
  • if now the user selects any section, it is handled via the checkboxes/radioboxes, and because the CSS :target rules for specific elements require a chain of :not(:checked) siblings, the :target rules stop matching.
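Put together, the structure looks something like this – a simplified two-section sketch with hypothetical names (the real ownCloud code differs); note the ordering: hidden :target anchors first, then the state-keeping radios, then the sections, all siblings:

```html
<nav>
  <label for="tab-one">One</label>
  <label for="tab-two">Two</label>
</nav>
<span id="go-one"></span>
<span id="go-two"></span>
<input type="radio" name="nav" id="tab-one" hidden>
<input type="radio" name="nav" id="tab-two" hidden>
<div class="section one">Section one</div>
<div class="section two">Section two</div>
```

```css
.section { display: none; }
/* a checked radio always wins... */
#tab-one:checked ~ .one,
#tab-two:checked ~ .two { display: block; }
/* ...and the :target rules only match while no radio is checked yet */
#go-one:target ~ #tab-one:not(:checked) ~ #tab-two:not(:checked) ~ .one,
#go-two:target ~ #tab-one:not(:checked) ~ #tab-two:not(:checked) ~ .two {
    display: block;
}
```

A #go-one link opens section one on page load; the moment the user picks any tab, one of the :not(:checked) links in the chain fails, and the radios take over.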



The Reverse-<noscript>

This seems a simple thing, right? Displaying elements only when JavaScript is enabled gets tricky, however, when we’re not allowed to use JS to display them. Now, we all know the <noscript> tag, but here we need to do something exactly opposite, and <script> won’t do, as we’re not going to use JS for that.

Of course we can always use a style-within-noscript:

<noscript>
  <style>
    #some-element {
      display: none;
    }
  </style>
</noscript>

…but that’s inelegant. First of all, we’re not supposed to have <style> tags within <body>, just as we’re not supposed to have <noscript> within <head>. Secondly, we might want to have the element gone from the element tree when JS is disabled to pull off other hacks – like a Displayable above, for instance.

Turns out, we can put HTML comments inside <noscript>. Not just that: we can put the start of a comment (<!--) in a different <noscript> element than the end (-->). And apparently these will be interpreted as start/end HTML comment tags only if JS is disabled (they are within <noscript> elements, after all!). That means that this:

<noscript><!--</noscript>
<div>I am only in the document tree when JavaScript is enabled!</div>
<noscript>--></noscript>

…works like a reverse-<noscript> element. The <div> will get shown and included in the document tree only if JS is enabled. Here, check for yourselves by enabling and disabling JS in your browser and visiting the test case.

CSS Tooltips (aka TooltipCSS)

There is a myriad of tooltip JS/jQuery libraries; I’m not even going to link to them. Creating an HTML/CSS tooltip for a given element is also trivially easy (create a child element, use :hover on the parent to show it). Creating a tooltip on any element matching a selector, without any additional HTML (no extra child elements, etc) – now that is a challenge!

What we need is a way to squeeze a new style-able element or two from any element in the DOM tree, without adding HTML, in pure CSS. Turns out that we have two: ::before and ::after. Yes, they are style-able, yes, we can put whatever we want in them. Unfortunately no, they can’t have child nodes.

But, can we conveniently pass tooltip text to them without making a separate CSS rule for each? Yes, we can.

The content property conveniently accepts attr() form. So we can have a single style stanza saying for example:

a[title]::before {
    content: attr(title);
}

…and bam!, all <a> elements with the title attribute set will have a ::before pseudo-element containing the value of the title attribute.

We can style ::before just like any other element; by making the parent element (the one being “tooltiped”, <a> in example above) relatively positioned and our ::before positioned absolutely we also have pretty good control how and where the tooltip appears.
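In sketch form (the values are illustrative, not taken from the original code), the positioning part might look like:

```css
a[title] {
    position: relative;   /* anchor for the absolutely-positioned tooltip */
}
a[title]::before {
    content: attr(title);
    position: absolute;
    bottom: 100%;         /* place the bubble just above the link */
    left: 0;
    white-space: nowrap;
    padding: 0.3em 0.6em;
    background: black;
    color: white;
}
```

Because the pseudo-element is positioned relative to the tooltiped element itself, the bubble follows it wherever it ends up on the page.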

Because pseudo-elements can’t have child elements (all HTML inside will get rendered as text), it seems we can’t have the small notch that turns a simple bubble into a tooltip. Ah, but we have ::after too, right? We can use it as our notch. If only there was a way to make a pure-CSS triangle from an element…
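There is, of course: the classic border trick. A zero-sized box’s borders meet at 45° angles, so making all but one of them transparent leaves a single triangle. Roughly (values illustrative, assuming the a[title] tooltips from above):

```css
a[title]::after {
    content: "";
    position: absolute;   /* positioned against the relatively-positioned parent */
    bottom: 100%;         /* sits where the bubble meets the element */
    left: 0.6em;
    width: 0;
    height: 0;
    border-left: 6px solid transparent;
    border-right: 6px solid transparent;
    border-top: 6px solid black;   /* only the top border is visible: a downward notch */
}
```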

Add a bit of CSS transitions to the mix (I’m using rgba() background/border colours instead of opacity, as opacity for some reason makes elements move just a tiny, annoying bit), remember to hide the elements that constitute our new shiny tooltip so that they won’t obstruct other elements (hint: use visibility:hidden instead of display:none, otherwise transitions won’t work) – et voilà!
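Sketched out, that show-on-hover part could read something like this (again assuming the a[title] tooltips above; my values, not the original code):

```css
a[title]::before {
    visibility: hidden;                 /* not display:none, or transitions won't run */
    background-color: rgba(0, 0, 0, 0); /* fade via rgba, not opacity */
    transition: background-color 0.3s, visibility 0.3s;
}
a[title]:hover::before {
    visibility: visible;
    background-color: rgba(0, 0, 0, 0.8);
}
```

The visibility property is itself animatable in discrete steps, so pairing it with the rgba fade gives a smooth appearance without the element ever blocking hovers on its neighbours while hidden.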

A CSS Tooltip appears! It’s super-effective!

Depending on the state of the elements that follow

Well, that is simply impossible, as ~ is one-way, and there is no “parent of” selector. No, seriously.

But what we can do is play with the order of elements in the mix. Making all the elements (checkboxes, radio controls) whose state we want to depend on precede the element we want to style is the only way to go. The next logical step is to make that element be displayed in front of them.

That is no rocket surgery and can be achieved in a myriad of ways, for example by enclosing both the dependent element and the source elements in a common container, setting position: relative on the container, and using position: absolute; top: 0 on the element we want displayed first, like thus – and here’s how it works.
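As a sketch (markup and names are mine): the status element comes last in the source, so the ~ operator can reach it from the checkboxes, but absolute positioning paints it on top:

```html
<div class="group">
  <input type="checkbox" id="opt-a">
  <input type="checkbox" id="opt-b">
  <span class="status"></span> <!-- last in code, displayed first -->
</div>
```

```css
.group { position: relative; padding-top: 1.5em; }
.group .status { position: absolute; top: 0; left: 0; }

/* the preceding checkboxes can now style the element "above" them */
.status::after { content: "none checked"; }
#opt-a:checked ~ .status::after,
#opt-b:checked ~ .status::after { content: "some checked"; }
#opt-a:checked ~ #opt-b:checked ~ .status::after { content: "all checked"; }
```

The “all checked” rule wins over the “some checked” rules by specificity (two ids and two pseudo-classes beat one of each), so the cascade does the counting for us.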

With all their powers combined…

Dry examples are fun and informational (don’t forget to test them with JavaScript disabled, too!), but only once you see it all working together does the power of the approach become evident. So, enjoy. This example uses minimal JS to set checkboxes back and forth (in the way used in the ownCloud calendar sharing interface), but nothing more. With JS disabled it should still show the group’s status (“all checked”, “some checked”, “none checked”).

All examples are valid HTML5 and valid CSS3. Of course, the code is on-line, you can grab it here. I’d love to hear your opinion, or see some other non-JS hacks.


There is some serious magic that can be done with CSS. Once we get an “is parent of” selector and a truly universal sibling selector, it will open up even more possibilities. JS seems convenient, but more and more people are looking with distrust (or disgust) at JS-infested websites, due to the performance and privacy issues involved.

If something can be done in pure CSS/HTML, why not do it that way?