After the violent events at the US Capitol, social media monopolists are finally waking up to the reality that centralisation is dangerous: with power over the daily communication of hundreds of millions of users comes responsibility perhaps too big even for Big Tech.
For years, Facebook and Twitter were unwilling to enforce their own rules against those inciting violence, for fear of upsetting a substantial part of their userbase. Now, by banning the accounts of Donald Trump and peddlers of the QAnon conspiracy theory, they are hoping to put the genie back in the bottle and go back to business as usual.
Not only is this too little, too late; it also needs to be understood as an admission of complicity.
After all, nothing really changed in President Trump’s rhetoric, or in the wild substance of QAnon conspiracy theories. Social media monopolists were warned for years that promoting this kind of content would lead to bloodshed (and it already had, more than once).
A “difficult position”
I have participated in many a public forum on Internet governance, and whenever anyone pointed out that social platforms like Facebook need to do more as far as content moderation is concerned, Facebook would complain that moderating such a huge network is difficult, since regulations and cultures differ so much across the world.
They’re not wrong! But while their goal was to stifle further regulation, they were in fact making a very good argument for decentralisation.
After all, the very reason they are in this “difficult position” is their business decision to insist on providing centrally controlled, global social media platforms, trying to push the round peg of a myriad of cultures into the square hole of a single moderation policy.
Social media behemoths argued for years that democratically elected governments should not regulate them according to the will of the people, because it is incompatible with their business models!
Damage done to the social fabric itself is, unsurprisingly, just an externality.
Damned if you do, damned if you don’t
Of course, major social media platforms banning anyone immediately raises concerns about censorship (and those abusing these social networks to spread a message of hate and division know how to use this argument well). Do we want to live in a world where a handful of corporate execs control the de facto online public space for political and social debate?
Obviously we don’t. This is too much power, and power corrupts. But the question isn’t really about how these platforms should wield their power — the question is whether these platforms should have such power in the first place.
And the answer is a resounding “no”.
Universe of alternatives
There is another way. The Fediverse is a decentralised social network.
Imagine if Twitter and Facebook worked the way e-mail providers do: you can have an account on any instance (as servers are called on the Fediverse), and different instances talk to each other. If you have an account on, say, mastodon.social, you can still talk to users over at pleroma.soykaf.com or almost any other compatible instance.
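The e-mail analogy extends to addressing itself: a Fediverse handle names both a user and their home instance, much like the local part and domain of an e-mail address. A minimal sketch of that structure (the helper and class names here are invented for illustration, not part of any real client library):

```python
from typing import NamedTuple

class FediverseHandle(NamedTuple):
    """A federated identity: a user name plus the instance that hosts it."""
    username: str
    instance: str

def parse_handle(handle: str) -> FediverseHandle:
    """Split a handle like '@alice@mastodon.social' into its two parts."""
    parts = handle.lstrip("@").split("@")
    if len(parts) != 2 or not all(parts):
        raise ValueError(f"not a valid Fediverse handle: {handle!r}")
    return FediverseHandle(username=parts[0], instance=parts[1])

# Two users on entirely different instances can still address each other:
alice = parse_handle("@alice@mastodon.social")
bob = parse_handle("@bob@pleroma.soykaf.com")
```

Because the instance is part of the address, no single company has to host (or police) everyone.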
Individual instances are run by different people or communities, using different software, and each has their own rules.
These rules are enforced using moderation tools, some of which are simply not possible in a centralised network. Not only are moderators able to block or silence particular accounts, but also block (or, “defederate from”) whole instances which cater to abusive users — which is inconceivable if the whole network is a single “instance”.
Additionally, each user has the ability to block or silence threads, abusive users, or whole instances, too. All this means that the response to abusive users can be fine-tuned. Because Fediverse communities run their own instances, they care about keeping any abuse or discrimination at bay, and they have the agency to do just that.
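Conceptually, this layered moderation can be modelled as blocklists operating at two levels, individual accounts and whole instances, with defederation simply being a block applied to an entire domain. A simplified illustrative model (the names and structure are invented for clarity and do not reflect any real server software):

```python
class ModerationPolicy:
    """Toy model of Fediverse-style layered moderation."""

    def __init__(self):
        self.blocked_accounts = set()       # individual abusive accounts
        self.defederated_instances = set()  # whole instances, e.g. ones catering to abuse

    def block_account(self, account: str) -> None:
        self.blocked_accounts.add(account)

    def defederate(self, instance: str) -> None:
        """Cut off an entire instance: none of its users can reach us."""
        self.defederated_instances.add(instance)

    def allows(self, account: str) -> bool:
        """Content is delivered only if neither the account
        nor its home instance has been blocked."""
        _, _, instance = account.partition("@")
        return (account not in self.blocked_accounts
                and instance not in self.defederated_instances)

policy = ModerationPolicy()
policy.block_account("troll@example.social")
policy.defederate("abusive.example")
```

After these two calls, `policy.allows("anyone@abusive.example")` is false for every account on the defederated instance, while users elsewhere are unaffected; the same two-level logic can be applied per-community or per-user.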
Local rules instead of global censorship
White supremacy and alt-right trolling were a problem on the Fediverse, too. Services like Gab tried to become part of it, and individual bad actors set up accounts on other instances.
They were, however, decisively repudiated by a combination of better moderation tools, communities being clear about what is and what is not acceptable on their instances, and moderators and admins being unapologetic about blocking abusive users or defederating from problematic instances.
Now, alt-right trolls and white supremacists are all but confined to a corner of the Fediverse that almost nobody else talks to. While this does not prevent a dedicated group from talking hatefully among themselves on their own instance (like Gab), it does isolate them, makes radicalising new users harder, and protects others from potential abuse. They are also, of course, welcome to create accounts on other instances, provided that they behave themselves.
All this despite there being no central authority to enforce the rules. It turns out that not many people like talking to, or platforming, fascists.
Instead of trying to come up with a single centrally-mandated set of rules — forcing it on everyone and acting surprised when that inevitably fails — it is time to recognise that different communities have different sensibilities, and members of these communities better understand the context and can best enforce their rules.
On an individual level, you can join the Fediverse. Collectively, we should break down the walls of mainstream social media, regulate them, and make monetising toxic engagement spilling into public discourse as onerous as dumping toxic waste into a river.
In the end, even the monopolists are slowly recognising that moderation in a global, centralised network is impossible, and that more regulation is needed. Perhaps everyone else should recognise it, too.