thinking is dangerous — it leads to ideas
Member of the Board of the Polish Linux Users Group. Human rights in the digital era hacktivist, Free Software advocate, privacy and anonymity evangelist; expert volunteer to the Panoptykon Foundation; co-organizer of SocHack social hackathons; charter member of the Warsaw Hackerspace; Telecomix co-operator; biker, sailor.
Formerly President of the Board of the Polish Free and Open Source Software Foundation; CTO of the BRAMA Mobile Technologies Laboratory at the Warsaw University of Technology; and a student at the Institute of Philosophy, University of Warsaw.
Another day, another conference on Internet governance, this time close enough to go there on my own dime. Besides, Berlin is always a treat.
As was to be expected of a conference organised in ministerial halls, for the most part, when it wasn't objectionable, it was mind-bogglingly dull. And yes, the WiFi was as good as it gets at such events.
I have a strong policy of going to conferences mainly for the hallway/coffee chit-chat and making new acquaintances, and it was a winner this time around too.
Starting off with a welcoming address by the powers that be, including Neelie Kroes, who deemed the conference so important that she made a video appearance (how about we agree on a rule that when you're a politician wanting a slot on a conference agenda, you can either come in person, or... pass entirely; no pre-recorded videos, please!), the conference gave no hope for anything of significance happening within the confines of the programme.
Thankfully, you can always count on activists to bring the gravitas along. And while having Edward Snowden on the panel (or as a keynote speaker) would have been the right thing to do, several Edward Snowdens in the audience were the next best thing.
The first panel focused on lessons learned from NETmundial, and made a good first impression with no chair available for the only female panelist. Were there any civil society participants on the panel? Of course not. Questions from the floor about that fact (asked by the undersigned) and about the glaring gender disproportion on the panel (asked by Mrs. O'Loughlin of the Council of Europe) were waved off as "off-topic".
A representative of the organisers also remarked on how hard it was to find women for the panel. They tried, they just couldn't find any in the right positions.
Let's ponder this for just a moment, even though I don't even know where to start.
I could say, for instance, that equality (gender, and otherwise) was a big issue at NETmundial, as evidenced in the opening address by Nnenna Nwakanma. I could refer you, Dear Reader, to concepts like the glass ceiling, and note how this is no excuse for not including women on the panel on equal standing. I could, as I did in my question, note the irony of a panel about lessons from NETmundial (y'know, the multistakeholder conference on Internet governance) composed almost entirely of men, and with no representative of the third sector.
Or I could point out that including civil society on the panel might have made it easier for the organisers to find female panelists: while the glass ceiling is indubitably, and sadly, also present in civil society, it doesn't seem to be as prevalent there as in the government and business sectors.
However, there is always hope. The "When the public sphere became private" workshop proved to be both inspiring and interesting, and the exchange of ideas relevant and much deeper than I would have expected.
It did help that the topic sounded eerily familiar, but the discussion went far and wide, touching on a number of related issues.
An important distinction had to be made, as became apparent in the course of the discussion, between two meanings of the word "private" in the context of communication infrastructures.
The first meaning is "pertaining to or supportive of privacy". Here, a private communication medium would mean one that ensures the privacy of communication between the communicating parties.
The second is, of course, "privately-owned", with a private communication medium meaning a medium owned by a private entity.
Obviously, a similar distinction has to be made for the word "public" in the same context.
With this in mind, it's easy to see how crucial misunderstandings can arise when these terms are used without making clear which particular meaning we have in mind. Specifically, privately-owned infrastructure can be (and often is) hostile towards the privacy of the communicating parties.
When the whole infrastructure is privately-owned, privacy is not the only problem. The public sphere is crucial to democratic processes, but today it is increasingly being replaced by privately-owned and controlled fora. Public discourse should not, however, be contingent on rules made unilaterally by private entities. Or, as one of the workshop panelists neatly put it:
Public agora cannot underlie a business model based on surveillance
As always, the first step is admitting that we do have a problem, and I take it we are getting ready for such an admission. Finally. But what's really interesting is the next step: what should we do about it? There is, unfortunately, no clear answer, but several ideas were floated.
One of these is open standards, or requiring the operators of such privately-owned fora to at least supply APIs allowing full interoperability between different providers (think Facebook interoperating with Google+). Another (crazy, I'll give you that!) idea, floated by a friend of mine some time ago, is to have the source code of all software available at least for inspection, just like the ingredients listing on packaged food.
Yet another would be mandating privacy impact assessments on all lawmaking activities, and on infrastructural decisions made (for instance) at governmental levels.
Finally, there was this gem:
Governments need to pass human rights as technical requirements
That's something that really got my attention, as for some time now I have been pondering that we, the technical community, geeks, free-softies, etc., should start making software on the assumption that if some abuse is possible, it is inevitable. And we should start designing our software for privacy just as we design it for security. I'll elaborate on that in a separate post.
All of these ideas need further thought and consideration; some might turn out workable, some might turn out impossible, and some combination of them might be the right way to proceed.
But the right questions are, apparently, finally being asked. I'm not holding my breath, but maybe next time we'll even be able to find some less locked-down solution than a Twitter wall to bring in the remote participation...