
Songs on the Security of Networks
a blog by Michał "rysiek" Woźniak

Things I'd like more people to understand in 2024

We find ourselves in a peculiar place. We are more interconnected, yet more misinformed. At ease with more advanced technologies, but more easily misled by them. “Doing our own research”, but ending up deeper in conspiratorial rabbit holes.

When discussing complex topics — pandemic, war, the housing crisis, or some thorny family affairs — it is surprisingly easy to jump to conclusions, to oversimplify, to ignore crucial nuance, and thus get untethered from reality. To label someone as “evil” or “unethical”, to fall back on tribalism. Our brains are always looking for a shortcut, and many of these shortcuts lead us astray. Sometimes we get fooled, sometimes we fool others. Neither helps in the long run.

I am as guilty of this as anyone else. But I also feel the only way we can deal with problems we’re facing, on any level, is by talking them through. Here’s a list of a few rules of thumb I find particularly helpful to keep in mind when thinking about and discussing complex politics- and society-adjacent topics.

They are not absolutes, and do not always apply, but they can help avoid some pitfalls we fall into all too often.

Explanation is not a justification

The fact that there exists an explanation of an action or decision does not automatically mean that the action or decision was justified. Explanation is only about being able to understand why somebody did something. Justification is about the moral judgment over that person and what they did.

It is chillingly easy to fall into the trap of assuming that a person is justifying someone’s unethical act just because that person is trying to understand or explain it. Making such an assumption easily leads to dismissing that person as a “supporter” of that unethical act, and thus unethical themselves. This in turn makes it very difficult to talk about the causes of a given situation, and about making it less likely to happen in the future.

We have to be able to discuss the reasons behind a specific decision or action, regardless of how we feel about the morality of it. If we want to make sure something bad does not happen again, understanding the reasons it happened is often more important than passing moral judgment.

The flip side of this is that providing an explanation of something is not the same as providing a justification for it. “This is why I did it” is not the same as “this is why I was in the right doing it”. If by explaining an action somebody is trying to deflect blame — they probably should get called out on that.

Of course, this is not to say that an explanation can never be an important element of a valid justification. It can, and it often is. But explanation and justification are different, even if one can support the other to some degree.

Hanlon’s razor

We humans are great at ascribing agency and intentionality where there is none. We love to make things about ourselves. We see faces in the clouds, a deity’s wrath in volcanic eruptions, and targeted, premeditated malice in somebody else’s decisions or actions — especially ones that affect us in a bad way.

Hanlon’s razor states:

Never attribute to malice that which is adequately explained by stupidity.

I personally expand it to also include incompetence, laziness, and other lesser vices. It is, basically, a tool for assessing explanations of a given set of actions or decisions. In many cases, there is no need to assume malice in order to explain a problematic action or decision. In some cases assuming malice is actually counter-productive.

We don’t need to assume maliciousness on the part of the civil servants in the Netherlands who deployed the (as it turns out) racist system for flagging “suspicious” use of childcare benefits to know this was unacceptable. Pondering whether that was malicious on their part or not is in this case moot, and can distract from a broader and more immediately important question: how do we fix the broader system such that this never happens again, regardless of malice or incompetence?

That’s not to say that there is never malice, of course. Sometimes there very much is. But in the end, in a lot of cases it might not matter much — bad outcomes are bad regardless of whether they are caused by malice or by incompetence. Important systems, especially ones on which our livelihoods or health and well-being depend, should be resilient to either.

Or, as Grey’s law puts it:

Any sufficiently advanced incompetence is indistinguishable from malice.

Which is closely related to…

A system’s purpose is what it does

Let’s say we have a complex system — technical, political, social, whatever the kind. And let’s say that it keeps having certain bad outcomes. Everyone involved in creating and maintaining it keeps insisting that these bad outcomes are accidental, and keep promising this can be fixed, but somehow it never is. At some point it just makes sense to treat these bad outcomes as the actual purpose of the system. If they really were not, surely the system would have been fixed already!

Coined by Stafford Beer, of Cybersyn fame, this rule is a great way of cutting through elaborate excuses given about any unacceptable outcomes of a system.

For example: if a government policy supposedly meant to fight the housing crisis (say, by guaranteeing low-interest loans to prospective buyers) ends up raising apartment prices without any actual improvement in overall housing availability, at some point it’s reasonable to say that the purpose of this policy is not to fight the housing crisis — but to funnel free money to real estate developers.

Or: if a policy intended to combat drug abuse ends up predominantly incarcerating a specific part of the population (say, young Black men), while producing no real reduction in overall drug use, then it is reasonable to say that the purpose of the policy is not reduction of drug use — but persecution of a specific group.

Mind you, this doesn’t necessarily mean that the system in question was deliberately designed to be like this! It doesn’t necessarily mean its designers and maintainers are intentionally lying about what its purpose is or was supposed to be, maliciously hiding the fact that the purpose was different (see Hanlon’s razor above). It might be accidental, or related to incompetence, or to the fact that we’re all a product of the society we grew up in and the circumstances we inhabit.

In the end it doesn’t really matter what the original idea for that system was. If a system is allowed to stay in place even though it is clearly ineffective in its stated purpose, then it is fair to say that the actual purpose has to be something else.

Life is not a zero-sum game

There are situations which are a zero-sum game. Trying to get tickets to a popular concert is an example: if you get your tickets, I might not get mine. The resource is strictly limited and we are competing for it. Your win is my loss.

But in a lot of cases, things that are talked about as if they were a zero-sum game — are not. Take immigration: it is often talked about in “us vs. them” terms, with an implied assumption that there is some kind of resource that is strictly limited, and that the migrants, once let into the country, will compete over it with its current residents.

This is simply not the case. Yes, people coming into the country might need education, healthcare, social services — but they will also create more demand for local goods and services, strengthening the economy. Often they might be willing to work jobs that nobody else wants to take. They will pay taxes. They will bring their culture and cuisine with them, enriching the lives of everyone.

This is true for a lot of thorny political and social issues that are portrayed publicly or talked about as if they were a zero-sum game. Sometimes this becomes outright absurd and almost self-parodying, as with the so-called “Schrödinger’s immigrant”, who supposedly “steals our jobs” and simultaneously is “too lazy” to get one, hanging on unemployment benefits instead.

Two things can be true at the same time

In a way, truth is also often not a zero-sum game. For example, it is true that I work a lot, but it is also true that I am quite a lazy person. It is true that the actions of the Titanic’s captain can be considered reckless by today’s standards, and that they contributed to the catastrophe, but it is also true that they probably did not appear reckless to him or his peers at the time.

This perhaps sounds obvious, but becomes much less so when strong emotions come into play.

Are COVID vaccines a miracle of science, developed and tested in impossibly short time and saving countless lives? Or are they another vestige of Big Pharma’s flavor of neo-colonialism, based on who gets easy access to them and who doesn’t; who gets to manufacture them and who doesn’t; and who gets to profit from them? Both are true. We should be able to admire the former while insisting the latter is outright unacceptable.

This became particularly stark (and somewhat personal) to me when Putin’s Russia launched a full-scale invasion of Ukraine in February 2022. A lot of left-leaning, anarchist-y people seemed to defend Russian aggression by pointing out atrocities committed by the US and NATO in Iraq or Afghanistan. How dare I “take the side of NATO” here, have they not done enough evil?

But two things can be true at the same time — the US and NATO should rightfully be held accountable for their actions, of course, but that does not make Russia’s invasion and the atrocities it brought on civilians in Ukraine acceptable or justifiable in any sense.

This is a form of a false dichotomy, making it seem as if we have to “choose a side” out of a limited set of options. But the world is more complex than that. We have to be able to walk and chew gum.

These are not absolutes

All of these are guidelines, not absolute and unshakable rules. In some cases they might even run against one another. That’s okay.

An explanation can be an important part of a justification of some action — it’s just that it should not automatically, always be assumed so. An action or a decision can be underpinned by malice, and in some cases it is important to establish if it is — it’s just that it’s not necessarily always so, and it’s not always worthwhile to get stuck on that question.

A system’s outcomes might misalign with its stated purpose temporarily, and a fix might be on the way — the question is, how long has the system been allowed to remain broken, and will it actually get fixed? Even if some problem is not a zero-sum game, resources are rarely truly unlimited, and it might still make sense to ask how they get allocated. And sometimes we do have to choose a side.

To me, these guidelines act as useful safety valves when thinking and discussing complex subjects. They help me notice when an argument might be going astray.

Bringing it all together

I find it startling how easily, how eagerly we retreat into tribalism when discussing important, complex, emotionally charged subjects. How quickly we decide there surely is malice involved, how quickly we can be manipulated into thinking something is a zero-sum game and we better, in our own interest, deny somebody’s access to some perceived “limited resource.”

And once we do, we gleefully dismiss “the other side” — suddenly there’s an “other side”, as if every problem only ever had two possible solutions! — as unethical, outright malicious or at least woefully misinformed. Then we don’t have to consider arguments that go against our strongly-held convictions anymore, we don’t have to deal with the fact that the world is more complex than “us vs. them.” After all we are “us”, and if “they” are not with us, they’re clearly against us.

The complexity, however, does not go away, regardless of how hard we try to ignore or hide it.

Mastodon monoculture problem

Recent moves by Eugen Rochko (known as Gargron on fedi), the CEO of Mastodon-the-non-profit and lead developer of Mastodon-the-software, got some people worried about the outsized influence Mastodon (the software project and the non-profit) has on the rest of the Fediverse.

Good. We should be worried.

Mastodon-the-software is by far the most-used instance software on fedi. The biggest instance, mastodon.social, is home to over 200.000 active accounts as of this writing. This is roughly 1/10th of the whole Fediverse, on a single instance. Worse, Mastodon-the-software is often identified with the whole social network, obscuring the fact that the Fediverse is a much broader system, comprising much more diverse software.

This has bad consequences now, and it might have worse consequences later. What also really bothers me is that I have seen some of this before.

As seen on OStatus-verse

Years ago, I had an account on a precursor to the Fediverse. It was based mainly around StatusNet-the-software (since renamed to GNU social) and the OStatus protocol. The biggest instance by far was identi.ca — where I had my account. There was also a bunch of other instances, and there were other software projects that also implemented OStatus — notably, Friendica.

For the purpose of this blogpost, let’s call that social network “OStatus-verse”.

Compared to the Fediverse today, OStatus-verse was minuscule. I do not have specific numbers, but my pull-numbers-out-of-thin-air rough estimate is, say, ~100.000 to ~200.000 active accounts on a very good day (if you have the actual numbers, do tell and I will gladly update this blogpost). I do not have the exact numbers for identi.ca either, but my rough estimate is that it had between 10.000 and 20.000 active accounts.

So, around 1/10th of the entire social network.

OStatus-verse was small but lively. There were discussions, threads, and hashtags. It had groups a decade before Mastodon-the-software-project implemented groups. It had (desktop) apps — I still miss the usability of Choqok! And after a bit of nagging I was even able to convince a Polish ministry to have official presence there. As far as I know this is the earliest example of a government-level institution having an official account on a free-software-run, decentralized social network.

Identipocalypse

Then one day, Evan Prodromou, the administrator of identi.ca (and the original creator of StatusNet-the-software), decided to redeploy it as a new service, running pump.io. The new software was supposed to be better and leaner. A new protocol was created, because OStatus had very real limitations.

There was just one snag: that new protocol was incompatible with the rest of OStatus-verse. It tore the heart out of that social network.

People with identi.ca accounts lost their connections on all OStatus-compatible instances. People with accounts on other instances lost contact with people on identi.ca, some of whom were pretty popular in OStatus-verse (sounds familiar?..).

It turned out that if an instance is 1/10th of the whole social network, a lot of social connections lead through it. Even though other instances existed, suddenly a huge chunk of active users just vanished. Many groups fell mostly silent. Even if one had an account on a different instance, and contacts on other instances, a lot of familiar faces just disappeared. I stopped using it soon after that.

From my perspective, this single action set us back at least five if not ten years as far as promoting decentralized social media is concerned. The redeployment of identi.ca fractured the OStatus-verse not just in the social connections sense, but also in the protocol and developer community sense. As pettter, a fellow OStatus-verse veteran, put it:

I think a bit of nuance on the huge-blow thing is that it didn’t only impact by cutting social connections, but also in protocol fragmentation, and in fragmenting developer efforts into rebuilding basic blocks of a federated social web time and again. Perhaps it was a necessary step to then come back together in designing AP, but personally I don’t think so.

Of course, Evan had all the right to do that. It was a service he ran, pro bono, on his own terms, with his own money. But that does not change the fact that it crippled the OStatus-verse.

I believe we need to learn from this history. Once we do, we should be worried about the sheer size of mastodon.social. We should be worried by the apparent monoculture of Mastodon-the-software on the Fediverse. And we should also be worried about identifying all of the Fediverse with just “Mastodon”.

Cost of going big

There are real costs and real risks related to going as big as mastodon.social has. Those costs, and especially those risks, fall both on that instance itself and on the broader Fediverse.

Moderation on the Fediverse is largely instance-centric. A single gigantic instance is difficult to moderate effectively, especially if it has registrations open (as mastodon.social currently does). As the flagship instance, promoted directly in official mobile apps, it draws a lot of new registrations — including quite a few problematic ones.

At the same time, this also makes it more difficult for admins and moderators of other instances to make moderation decisions about mastodon.social.

If an admin of a different instance decides mastodon.social’s moderation is lacking for whatever reason, should they silence it or even defederate from it (as some already have, apparently), thus denying members of their instance access to a lot of popular people who have accounts there? Or should they keep that access, risking exposing their own community to potentially harmful actions?

The sheer size of mastodon.social makes any such decision of another instance immediately a huge deal. This is a form of power: “sure, you can defederate from us if you don’t like how we moderate, but it would be a shame if people on your instance lost access to 1/10th of the whole fedi!” As GoToSocial’s site puts it:

We also don’t believe that flagship instances with thousands and thousands of users are very good for the Fediverse, since they tend towards centralization and can easily become ‘too big to block’.

Mind you, I am not saying this power dynamic is consciously and purposefully exploited! But it undeniably exists.

Being a gigantic flagship instance also means mastodon.social is more likely to be a target of malicious actions. On multiple occasions over the last few months it found itself under DDoS, for example. A couple of times it went down because of it. Resilience of a federated system relies on removing large points of failure, and mastodon.social is a huge one today.

The size of that instance, and the fact that it is a juicy target, also mean that certain hard choices need to be made. For example, due to being a likely target of DDoS, it is now behind Fastly. This is a problem from the privacy perspective, and from the perspective of centralization of Internet infrastructure. It is also a problem that smaller instances avoid completely by simply being smaller, and thus less interesting targets for anyone to take down with a DDoS.

Apparent monoculture

While the Fediverse is not exactly a monoculture, it is too close to being one for comfort. Mastodon-the-non-profit has outsized influence on all of fedi. This makes things tense for people using the social network, developers of Mastodon-the-software and other instance software projects, and instance admins.

Mastodon is neither the only instance software project on fedi, nor the first. For example, Friendica has been around for a decade and a half, since long before Mastodon-the-software got its first git commit. There are Friendica instances (e.g. pirati.ca) operating within the Fediverse today which had been part of the OStatus-verse a decade ago!

But calling all of the Fediverse “Mastodon” makes it seem as if only Mastodon-the-software exists on the Fediverse. This leads people to demand features be added to Mastodon, and to ask for changes that have sometimes already been implemented by other instance software. Calckey already has quote-toots. Friendica has threaded conversations and text formatting.

Identifying Mastodon with the whole fedi is also bad for Mastodon-the-software developers. They find themselves under pressure to implement features that might not entirely fit with Mastodon-the-software. Or they find themselves dealing with two groups of vocal users, one demanding a certain feature, the other insisting it not be implemented, as too big of a change. Many such situations could probably be dealt with more easily by clearly drawing a line, and pointing people to other instance software that might fit their use-case better.

Finally, Mastodon is currently by far (measured by active users, and by number of instances) the most popular implementation of the ActivityPub protocol. Every implementation has its quirks. With time, and with new features being implemented, Mastodon’s implementation might have to drift further away from the strict spec. It’s tempting, after all: why go through an arduous process of standardizing any protocol extensions if you’re the biggest kid on the block anyway?

If that happens, will every other implementation have to follow it, thus drifting along with it but without actual agency in what changes to the de facto spec are implemented? Will that create more tensions between Mastodon-the-software developers and developers of other instance software projects?

The best solution to “Mastodon misses feature X” is not always “Mastodon should implement feature X.” Often it might be better to just use a different instance software, better suited for a particular task or community. Or to work on a protocol extension that would allow a particularly popular feature to be reliably implemented by as many instances as possible.
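To make the protocol-extension route concrete, here is a minimal sketch of how ActivityPub gets extended in practice. ActivityPub objects are JSON-LD, so new vocabulary can be declared in the @context and safely ignored by servers that do not understand it; Mastodon already extends the protocol this way with its own “toot” namespace. The namespace URL and the quote-post property below are purely illustrative (loosely modeled on what Misskey-family software does), not a published standard:

    import json

    # A hypothetical quote-post extension to an ActivityPub Note.
    # The "ext" namespace URL is made up for illustration; servers that
    # do not recognize it can simply ignore the extra property.
    note = {
        "@context": [
            "https://www.w3.org/ns/activitystreams",
            {
                "ext": "https://example.org/ns#",
                "quoteUrl": "ext:quoteUrl",
            },
        ],
        "type": "Note",
        "content": "Quoting a post in a way other servers can opt into.",
        # Via the context above, this expands to the full IRI
        # https://example.org/ns#quoteUrl:
        "quoteUrl": "https://other.instance/notes/12345",
    }

    print(json.dumps(note, indent=2))

The nice thing about this mechanism is that an extension can be adopted incrementally: servers that understand the property use it, and everyone else still sees a perfectly valid Note.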

But that can only work if it’s clear to everyone that Mastodon is only a part of a bigger social network: the Fediverse. And that we already do have a lot of choice as far as instance software is concerned, and as far as individual instances are concerned, and as far as mobile apps are concerned.

Sadly, that seems to go against recent decisions by Eugen, which go towards a pretty top-down (not quite vertically integrated, but gravitating towards that) model of official Mastodon mobile apps promoting the flagship mastodon.social instance. And that is something to worry about, in my opinion.

A better way

I want to be clear: I am not arguing here for freezing Mastodon development and never implementing any new features. I also agree that the signup process needs to be better and more streamlined than it had been before, and that plenty of UI/UX changes need to be implemented. But all this can and should be done in a way that improves the resilience of the Fediverse, instead of undermining it.

Broader changes

My laundry list for broader needed changes to Mastodon and the Fediverse would be:

  1. Close registrations on mastodon.social, now
    It is already too big and too much of a risk for the rest of the Fediverse.
  2. Make profile migration even easier, also across different instance types
    On Mastodon, profile migration currently only moves followers automatically. Who you follow, bookmarks, block lists, and mute lists can be moved manually. Posts and lists cannot be moved — and that’s a big problem for a lot of people, keeping them tied to the first instance they signed up on. It’s not insurmountable — I have moved my profile twice and found it perfectly fine. But it is too much friction. Some other instance software projects are working on allowing post migrations too, thankfully. But it’s not going to be a quick and easy fix, as the design of ActivityPub makes it very hard to move posts between instances (see the protocol sketch after this list).
  3. By default, official apps should offer new people a random instance out of a small list of verified ones
    At least some of these promoted instances should not be controlled by Mastodon-the-non-profit. Ideally, some of them should run different instance software, as long as it uses a compatible client API.
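As an aside, and as referenced in point 2 above: as far as I understand Mastodon’s implementation, the follower migration that does work today is carried on the wire by an ActivityPub Move activity. It also illustrates why everything else is so hard to move: the exchange covers the account, not its content. A rough sketch, with made-up actor URLs:

    import json

    # A rough sketch of a Mastodon-style profile migration on the wire
    # (made-up URLs, trimmed to the interesting fields). The old account
    # announces a Move; followers' servers then unfollow the old actor
    # and follow the new one. Posts never enter into it.
    move = {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Move",
        "actor": "https://old.example/users/alice",
        "object": "https://old.example/users/alice",   # the account moving
        "target": "https://new.example/users/alice",   # where it is moving to
    }

    # The Move is only honored if the new actor points back at the old
    # one, making the claim verifiable in both directions:
    new_actor = {
        "id": "https://new.example/users/alice",
        "type": "Person",
        "alsoKnownAs": ["https://old.example/users/alice"],
    }

    print(json.dumps(move, indent=2))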

What can I do myself?

And here are things we ourselves can do, as people using the Fediverse:

  1. Consider moving off of mastodon.social if you have an account there.
    That’s admittedly a big step, but also something you can do that most directly helps fix the situation. I migrated from mastodon.social years ago, and never looked back.
  2. Consider using an instance based on a different software project
    The more people migrate to instances using software other than Mastodon-the-software, the more balanced and resilient a Fediverse we get. I hear a lot of positive opinions about Calckey, for example. GoToSocial is also looking interesting.
  3. Remember that Fediverse is more than just Mastodon
    Language matters. When talking about the Fediverse, calling it “Mastodon” is only making the issues I mention above more difficult to deal with.
  4. If you can, support projects other than the official Mastodon ones
    At this point Mastodon-the-software has a lot of contributors, a stable development team, and enough solid funding to continue safely for a long while. That’s great! But the same cannot be said about other fedi-adjacent projects, including independent mobile apps and instance software. In order to have a diverse, resilient Fediverse, we need to make sure these projects are also supported, including financially.

Closing thoughts

First of all, the Fediverse is a much more resilient, more long-term viable, safer, and more democratized social network than any centralized walled garden. Even with its Mastodon monoculture problem, it is still not (and can’t be) owned or controlled by any single company or person. I also feel that it is a better, safer choice than social networks that only cosplay decentralization and pay lip service to it, like BlueSky.

In a very meaningful way, OStatus-verse can be said to have been an early version of the Fediverse; as noted before, some instances that had been part of it then are still running and part of the Fediverse today. In other words, the Fediverse has been around for a decade and a half by now, and survived the Identipocalypse even as it got badly hurt by it, while observing both the birth and the untimely passing of Google+.

I do believe the Fediverse is leaps and bounds more resilient today than OStatus-verse had been before the identi.ca redeploy. It’s an order of magnitude (at least) larger in terms of user base. There are dozens of different instance software projects and tens of thousands of active instances. There are also serious institutions invested in its future. We should not be panicking over all I wrote above. But I do think we should be worried.

I do not attribute malice to recent actions of Eugen (like making official Mastodon apps funnel new people towards mastodon.social), nor to past actions of Evan (redeploying identi.ca on pump.io). And I don’t think anyone should. This stuff is hard, and we’re all learning as we go, trying to do our best with the limited time we have available and restricted resources in our hands.

Evan went on to be one of the main creators of ActivityPub, the protocol the Fediverse runs on. Eugen started Mastodon-the-software in the first place, which I strongly believe allowed the Fediverse to flourish into what it is today. I really appreciate their work, and recognize that it’s impossible to do anything in the social media space without someone having opinions on it.

That does not mean, however, we cannot scrutinize these decisions and should not have these opinions.


Update: I did a silly; mastodon.social is behind Fastly, not CloudFlare, of course. Fixed, thank you to those who poked me about it!

Update 2: Heartfelt thanks to Jorge Maldonado Ventura for providing a Spanish translation of this blogpost, published under CC BY-SA 4.0. ¡Gracias!

BlueSky is cosplaying decentralization

Almost exactly six months after Twitter got taken over by a petulant edge lord, people seem to be done grieving the communities this disrupted and the connections they lost, and are ready, eager even, to jump head-first into another toxic relationship. This time with BlueSky.

BlueSky’s faux-decentralization

BlueSky differentiates itself from Hive, Post, and other centralized social media newcomers by being ostensibly decentralized. It differentiates itself from the Fediverse by not being the Fediverse, and by being funded by *checks notes* Twitter. Oh, and by being built by Silicon Valley techbros, instead of weirdos who understand consent and how important moderation is.

I say “ostensibly decentralized”, because BlueSky’s (henceforth referred to as “BS” here) decentralization is a similar kind of decentralization as with cryptocurrencies: sure, you can run your own node (in BS case: “personal data servers”), but that does not give you basically any meaningful agency in the system. Quoting the protocol docs:

Account portability is the major reason why we chose to build a separate protocol. We consider portability to be crucial because it protects users from sudden bans, server shutdowns, and policy disagreements.

And here:

ATP’s model is that speech and reach should be two separate layers, built to work with each other. The “speech” layer should remain neutral, distributing authority and designed to ensure everyone has a voice. The “reach” layer lives on top, built for flexibility and designed to scale.

So the storage layer is “neutral”, accounts are “portable”. That to me means that node operators will have no agency in the system. Discoverability/search/recommendations are done in a separate layer, and the way the system seems to be designed (nodes have no say, they just provide the data) effectively places all the power with these “reach” algorithms.

Secondary centralization in “reach” layer

The rule of thumb with search and recommendation algorithms is: the bigger, the better. The more data you have and the more compute you get to throw at it, the better your recommendations will be. So it’s a winner-takes-all system that strongly advantages whoever starts building their dataset early and can throw as much money at it as possible.

And once you’re the biggest game in town, people will optimize for you (just look at SEO and Google Search). It won’t matter much that people using the network can freely choose a different algorithm, just as it doesn’t matter much on the Web that people can choose a different search engine. And the more I read about BS’s protocol, the more I think this is done on purpose.

Why? Because it allows BS to pay lip service to decentralization, without actually giving away the power in the system. After all, BlueSky-the-company will definitely be the first to start indexing BS-the-social-network posts, and you can bet Jack has enough money to throw at this to get the needed compute. I guess decentralization is a big thing lately and there are investors to scam if you can farm enough users and build enough hype fast enough!

Another pretty good sign that BS’s decentralization is actually b.s. is the fact that the Decentralized Identifiers (DIDs) used by BlueSky are currently “temporarily” not actually decentralized. The protocol uses something imaginatively called “DID Placeholder”. If I were a betting man I would bet that in five years it will keep on using the centralized DID Placeholder, and that that will be a root cause of a lot of shenanigans.
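To make that concrete: as I read the AT Protocol documentation, resolving a BS identity under DID Placeholder means asking one specific directory service, operated by one party. A sketch of what such a lookup looks like (the DID below is made up, and I am assuming the documented plc.directory endpoint):

    import requests

    # Resolving a did:plc identifier. Note the single, centrally
    # operated directory that every lookup depends on; that is the
    # "placeholder" part. The DID below is made up, so this exact
    # request would 404; a real identifier returns a DID document.
    did = "did:plc:abcdefghijklmnopqrstuvwx"

    resp = requests.get(f"https://plc.directory/{did}", timeout=10)
    resp.raise_for_status()
    did_doc = resp.json()

    # The DID document maps the identifier to the user's current handle
    # and their personal data server (the part you can self-host):
    print(did_doc.get("alsoKnownAs"))  # e.g. ["at://someone.bsky.social"]
    print(did_doc.get("service"))      # e.g. the PDS endpoint

Every “portable” identity in the network thus depends on whoever runs that one directory.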

Externalizing the work

Finally, as a good friend of mine, tomasino, noticed:

it decentralizes the cost to the central authority by pushing data load onto volunteers

A similar observation was made by mekka okoreke, too. To which I can only add: very much this, while planning to keep control by being the biggest kid on the “reach” block.

Of course, fedi could also have some search and discovery algorithms built on top. Operators of such algorithms (there have been a few attempts already) would also benefit from being first and going big. But their potential power is balanced by the power fedi instance admins and moderators have (blocking and defederating), and by the fact that fedi is perfectly usable without such algorithms. And by the strong hostility of a lot of people using fedi towards non-consensual indexing.

Jack’s BS

BS is the brainchild of Jack Dorsey, which is no surprise to anyone who’s been paying any attention to BS. Jack Dorsey is of course the former CEO of Twitter, who famously said:

Elon is the singular solution I trust. I trust his mission to extend the light of consciousness.

This aged roughly as well as fresh milk out in the midday July sun in Portugal.

Jack also heavily promoted cryptocurrencies, scammed people using NFTs, and donated a bunch of BTC to Nostr, a “censorship-resistant” social media platform, because of course.

And finally, there’s this comment of his (posted on Nostr; BlueSky not good enough for Jack, it seems). Crucial bit:

Likes are superficial and exist only to inform an algorithm. Relevance algorithms have their place, but they are best informed by a truly costly action.

No, you stockholder-value-optimizing robot, likes exist to inform the author that you liked their post. They exist to infuse some warm emotions into the cold machine. They exist so that we can connect on a human level without trivializing it by putting it into words. You know, as us humans do.

With all this considered, let’s just say I question Jack’s judgement and his motives in anything related to social networks. And since, as I said, BS is his brainchild, I would be very suspicious of it.

Modeled after Twitter

In a pretty meaningful way, “speech and reach” is the model of Twitter today. You just don’t get to choose your recommendation/discovery algorithm.

Elon Musk, the self-described “free speech absolutist” (unless it’s criticism of him) has re-platformed a lot of nasty people with the idea that anyone should have a Twitter account. But only those who pay get to play with any meaningful reach.

What actual difference would being able to choose between different recommendation/discoverability algorithms make for at-risk folks who are constantly harassed on Twitter? There is no way to opt-out from “reach” algorithms indexing one’s posts, as far as I can see in the ATproto and BS documentation. So fash/harassers would be able to choose an algorithm that basically recommends targets to them.

On the other hand, harassment victims could choose an algo that does not recommend harassers to them — but the problem for them is not that they are recommended to follow harassers’ accounts. It’s that harassers get to jump into their replies and pile on using quote-posts and so on. Aided and abetted by recommendation algorithms that one cannot opt out of being indexed by in order to protect oneself.

The only way to effectively fight harassment in a social network is effective, contextual moderation. The Fediverse showed that having communities, which embody that context and whose admins and moderators focus on protecting their members, is pretty damn effective here. This is exactly what BS is not doing. And I do not see much mention of moderation at all in its documentation.

In other words, “neutrality” and “speech” and “voice” and “protection from bans” are mentioned right there, front and center, in BS’s overview and FAQ. At the same time, moderation and anti-harassment features are, at best, an afterthought. As fedi user dr2chase put it:

I’m getting a techno-Libertarian aroma from all this, i.e., these guys won’t kick the Nazi out of the bar.

People like shiny!

Of course the sad reality is that people will buy the hype, build communities under the everloving watchful eye of Jack “Musk is the singular solution I trust, likes are superficial if not paid for” Dorsey. And then do a surprised Pikachu face when, inevitably, sooner or later, some surveillance capitalist robber baron enshittifies it to the point of complete uselessness.

It fascinates me how quickly people forget lessons from the whole Twitter kerfuffle, and just fall for another Silicon Valley silly con. Without even skipping a beat.

Twitter Unverified

The following is probably mostly obvious to anyone who has been using Twitter for the last decade. But for those who are just confused about what the whole hubbub around Twitter’s “verified checkmark” is all about, here goes!

A long while ago Twitter started making certain accounts “verified”. This was supposed to combat impersonation (and in that it was even quite effective, apparently). But since the “verified” checkmark was given out at Twitter’s sole discretion, and since it mostly went to Large, Important Accounts (celebrities, politicians, and so on), it quickly became a status symbol of sorts. “Witness me, for I have the Verified Checkmark, therefore I am considered Important!”, that kind of thing.

When Melon took over in November, this was an easy target.

First of all, a lot of these blue checkmarks were “left wing media” people and so on, or otherwise people who could be claimed by his twisted alt-right peanut of a brain to be “establishment” — as opposed, of course, to the Apartheid Clyde himself, who in no way can be said to be an “establishment” person, nuh-huh!

Secondly, as it’s a status symbol, maybe people could get cajoled into paying for it?

And third, as a security measure — impersonating a well-known person or a trusted organization is great when you’re trying to phish someone or send them a malwared link… — maybe large organizations would want to pay even more for the privilege of being more difficult to impersonate on Twitter?

Turns out, no. The backlash was so strong that Twitter even started letting those who did pay for Twitter Blue hide the checkmark, because it started being associated with supporting Musk. Meanwhile, organizations started coming out strong with “hell no, we won’t pay your protection racket money”, as to them it seemed (pretty on-point, I’d say) as if the Chief Twit was basically saying: “fancy Twitter profile for an organization you have there; it would be a shame if someone impersonated it!”

Today is the day (4.20; yes, the Toddler King made another marijuana joke here) when legacy “verified” checkmarks are finally disappearing and only the paid ones remain. In other words, the Fediverse now has a better verification system than Twitter.
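For reference, that “better verification system” is the Mastodon-style rel="me" check: your fedi profile links to your website, your website links back with a rel="me" link, and your instance marks the link as verified. No payment, no one’s sole discretion, and anyone can re-check the claim themselves. A naive sketch of such a check (made-up URLs, crude HTML matching; real implementations parse the HTML properly):

    import re
    import requests

    # A naive re-implementation of Mastodon-style rel="me" verification:
    # does the claimed website link back to the fedi profile?
    # Made-up URLs; crude regex matching for illustration only.
    profile_url = "https://example.social/@alice"
    claimed_site = "https://alice.example.com/"

    html = requests.get(claimed_site, timeout=10).text

    # Find <a> and <link> tags, then look for one that both declares
    # rel="me" and points back at the profile.
    tags = re.findall(r"<(?:a|link)\b[^>]*>", html, flags=re.IGNORECASE)
    verified = any('rel="me"' in tag and profile_url in tag for tag in tags)

    print("verified" if verified else 'no rel="me" backlink found')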

Does ChatGPT gablergh?

Imagine coming across, on a reasonably serious site, an article that starts along the lines of:

After observing the generative AI space for a while, I feel I have to ask: does ChatGPT (and other LLM-based chatbots)… actually gablergh? And if I am honest with myself, I cannot but conclude that it sure does seem so, to some extent!

I know this sounds sensationalist. It does undermine some of our strongly held assumptions and beliefs about what “to gablergh” actually means — and what classes of entities can, in fact, be said to gablergh at all. Since gablerghing is such a crucial part of what many feel it means to be human, this is also certainly going to ruffle some feathers!

But here’s the thing: so far, after thousands of years of philosophical thought and scientific research, we have not been able to clearly define “gablerghing”. Thus, we simply cannot say for certain that some simpler animals, like ants, do not gablergh in some relevant sense. Gablerghing happens on a spectrum, from clearly gablerghing organisms like humans and dolphins, through animals like dogs or cats who I think we would mostly agree do gablergh, down to ants where this is maybe more fraught a statement.

So why couldn’t “a set of scripts running on top of a corpus of statistically analyzed internet content” be said to, in some sense, gablergh?

Naturally, your immediate reaction would not be to make a serious thinking face and consider deeply whether or not GPT indeed “gablerghs”, and if so to what degree. Instead, you would first expect the author to define the term “gablergh” and provide some relevant criteria for establishing whether or not something “gablerghs”.

Yet somehow, when hype-peddlers claim that LLMs (and tools built around them, like ChatGPT) “think”, nobody demands of them a clarification of what they actually mean by that, and what criteria they might possibly use (beyond “the output seems human-made”). This allows them to weaponize the complexity of defining the term “to think”, with all its emotional and philosophical baggage, and use it to their advantage.

“Well you can’t say it doesn’t think” — the argument goes — “since it’s so hard to define and delineate! Even ants can be said to think in some sense!”

This is preposterous. Of course this does not in any way prove that GPT can “think”; as one person pointed out on fedi it’s a case of the motte-and-bailey fallacy. Instead of accepting the premise, we should fire right back: “you don’t get to claim that GPT ‘thinks’ unless you first define that term clearly, and provide relevant criteria”. And these criteria need to be substantially better than the quack-like-a-duck of “output seems human-like, also it told me it thinks”.

After all, “at the very least you need to be able to define a quality Q you claim X has” is a much stronger stance than “I claim X has quality Q and you can’t prove I am wrong because Q is hard to define.”

I have no idea why we all collectively keep getting tripped up by this, and fail to recognize it for what it is — a thinly veiled hype-generation attempt that uses badly defined terms for marketing.

In the end, what it means to “think”, to be “conscious”, to “have intentionality”, is a matter for philosophers. Not for AI-techbros with stock to pump and chatbots to sell.

I want a fridge that won't join a botnet

I remember trying to buy a TV that does not have “smart” functionality a few years ago. It was a chore. Today it seems nigh-impossible.

By the way, we need a nice way of referring to non-smart devices. I propose: “safe”.

And not just TVs: ovens, refrigerators, dishwashers — all are now “smart”. In fact, it seems that more and more the available non-smart, err, I mean safe, models are only the simpler ones, less performant in ways that are not related to any smart functionality.

Safe TVs but without the fancy backlight. Safe refrigerators but without the de-icing system. My Safe TV was available only with lower resolutions than “smart” models of the same brand.

This really annoys me. I am all too aware of the security implications of smart devices. I do not want to have to manage regular software updates for however many appliances I have at home, or risk somebody using them in a botnet (or worse).

And no, I don’t trust their “disable WiFi” menu options either. I have seen this setting get enabled without my consent too many times. And a lot of participants in my little, completely unscientific fedi poll seem to have similar experiences. Plus, there is a valid concern that some devices will just try to connect to any open WiFi network; I would much rather not.

I could put such devices on a special VLAN, or behind a Pi Hole, but 99% of people can’t. Plus, it’s work. Plus, most importantly, you can bet that “smart” devices will start coming with SIM cards and 4g/5g modems very soon — cars already do. Why does my fridge need Internet connectivity in the first place?

In 2016 an IoT-based Mirai botnet took down Dyn, one of the biggest online infrastructure companies, and many well known websites with it.

As early as 2018 there were already botnets that… used CCTV cameras. But of course the predominant media narrative was “hackers attack” instead of “vendors put us at risk.”

Sidenote: if you’re using the word “hacker” to mean “cybercriminal”, you are making it worse. Please stop.

With all this in mind, I started thinking about how this could be solved. Not in the sense of “how can I, a techy person, secure my network and devices”, but in the sense of “how can we as a society manage the Internet of Shit problem?”

Consider a regulatory requirement for IoT / smart-appliance vendors to provide either (vendor’s choice):

  • similarly-priced safe models, physically without the smart functionality, but with other metrics and functionality on-par with the smart version; or…
  • reliable, verifiable, physical way of disabling smart functionality (or perhaps just networking) in their smart-devices.

Additionally, the packaging or other forms of information available before purchase should state clearly:

  • does the device require Internet connectivity to set up?
  • does the device require a mobile app to set up?
  • does the device require agreeing to an EULA/TOS/privacy policy to set up?
  • which functions require Internet connectivity?
  • which functions require data processing on external servers (that is, outside of the device)?
  • does the device have a microphone or a camera or other sensors?
  • does information from such sensors ever leave the device (for example, voice command data to be processed on external servers)?

I just want to be able to buy a damn refrigerator without worrying about it joining a botnet. Is that too much to ask?


This blogpost started off as a fedi thread. It got a bunch of interesting responses, and links to news about absolutely bonkers IoT stuff galore. Might be worth checking it out!

Chaotic speaker vote two years after attempted coup in oil-rich North American country

This week in the United States of America, a former British colony on the North American continent, long-brewing political and social problems culminated in a messy speaker election in the lower chamber of the bicameral national parliament.

The Republican party, by far the more conservative of the two major parties in what effectively is a two-party political oligopoly, gained narrow majority in the chamber in November elections, but was unable to effectively execute on its new-found power. A small far-right splinter group within the party blocked the election of the speaker — a procedural position that has gradually become heavily politicized — demanding political favors in return for their votes. This resulted in four days of heated and often chaotic proceedings, at one point devolving into a brawl.

The Speaker of the House, as the post is officially called, was finally elected on the fifteenth try — the largest number of ballots since before the country’s bloody civil war. The last time the election of the speaker — normally a formality — required more than one ballot was in 1923.

To placate the hold-outs, the newly elected speaker first had to agree to a long list of concessions, potentially going as far as giving the far-right hardliner minority control over which legislative proposals are even put up for a vote. This raises the possibility of a government shut-down later this year due to running out of funds; government shut-downs have become more frequent recently in the heavily politically polarized North American nation. There are also concerns that the country might default on its debt, bringing more political and economic instability to the region.

The troubled vote comes exactly two years after a violent coup attempt, supported by the then-President, who refused to accept defeat in his bid for re-election. An armed militia storming the building where the legislative branch of the country’s government deliberates — the United States Capitol — tried to stop the formal certification of the election’s result. The crisis was enabled partially by an outdated electoral system that often relies heavily on norms and custom in place of strict regulations.

Members of both chambers of the legislative branch, as well as then-Vice President of the country, had to be evacuated from the building. The attack resulted in several deaths.

Certain right-wing political figures, aligned with the former President, who had voiced their strong support for the armed insurgency perpetrating the coup and defended the organization (designated as “terrorist” in some countries) involved in organizing it, have now been sworn in as elected members of the lower legislative chamber, the House of Representatives. They formed the core of the splinter group, the leader of which has been accused of involvement in child sex trafficking and prostitution of minors.

The former President, who strives to maintain a strongman persona, is named in a number of investigations and criminal cases, ranging from tax evasion to stealing classified documents to inciting the coup attempt. He had also drawn accusations of nepotism after having appointed his daughter and son-in-law to important positions in his administration, and was embroiled in scandals involving, among others, paying off a porn star over an alleged affair.

The oil-rich North American country is struggling with a high rate of gun violence (among the 20 most heavily affected countries in the world, according to a 2016 ranking), high cost of and difficult access to healthcare, the lowest adult literacy rate in the region, and one of the highest incarceration rates in the world. These issues disproportionately affect communities of people of color, in no small part due to the country’s economy having relied heavily on slavery in the past.


This post is inspired by Joshua Keating’s “If It Happened There” column in Slate, which I found to be as hilarious as it is illuminating. I can only wish someone would continue this kind of lighthearted yet much-needed work.

Why I quit Twitter… a decade ago

And so it has come to this. I finally quit Twitter… almost exactly a decade ago.

I could spin yarn and claim it was some major feat of clairvoyance, of course. That I foresaw all that happened lately with Twitter and decided to bail early. But it wasn’t that, really. I just felt strongly that centralized services are dangerous and unethical, and I decided to stop using them. Back then, Twitter was the last one on the chopping block for me.

The “Why”

Why did I feel so strongly about centralized services? Had you asked me ten years ago, it would have been difficult for me to explain. But even then I would have said it boils down to control and power dynamics. At the time I was a Free Software advocate, working for a Polish version of the FSFE. Software freedom was (and remains) important to me and it seemed obvious that one cannot have software freedom in a walled garden.

Hardly a strong, concrete argument, I know!

But it did turn out to be correct, didn’t it? It is about control and power dynamics. Anyone who migrated from Twitter to the Fediverse lately can attest to this. Anyone who read the Facebook Papers can understand this. Today, Twitter is run by an abuser, and Facebook is an abuser.

A bit of history

At the time I was using Diaspora and StatusNet (later renamed to GNU social), precursors to the Fediverse. Both were tiny, but I could see the value in the basic idea of decentralized social media: no single point of control, no single point of failure. I was also promoting that idea. Somewhat successfully, might I add, as I was able to convince the Polish Ministry of Administration and Digitization to create a StatusNet account. As far as I know this was the first official decentralized social media profile of a government institution — if that’s incorrect, I’d love to hear it!

One could claim that there is uninterrupted continuity between these older networks and the Fediverse. Friendica implements ActivityPub, the protocol that underpins the Fediverse today. It also implements Diaspora’s protocol, and the OStatus protocol that StatusNet used back in the day. Some Friendica instances had been running continuously for over a decade. They serve as a connection between the modern-day Fediverse and the decentralized social networks of years past.

This broader decentralized social network that morphed over the years into what we today call the Fediverse predated and outlived Google+. Diaspora was launched in November 2010. StatusNet is even older: its first and biggest public instance, identi.ca, went live in July 2008. Google+ was launched in June 2011 and shuttered in April 2019.

Let’s stop here and ponder this extraordinary fact for a moment. Google, one of the largest and most influential tech companies in the world, threw all its weight behind Google+ — even going as far as outright forcing all YouTube users to use it. Yet, within eight years of its inception, Google+ was no more. And with it went all the communities established and connections made on that platform.

Meanwhile, an open, decentralized social network, having none of the resources nor clout that Google+ had behind it, with no business model and no monetization scheme, happily carries on into its 15th year.

A Pet Peeve

Over the years I blogged a few times about decentralization of social media. Had some ideas on how to bring people over, suggested building decentralized protocols into the blogosphere (something that is now a reality with the WordPress ActivityPub plugin, for example).

A year after I had quit Twitter, in December 2013, I gave a talk at 30C3, where I called Twitter and Facebook “monopolies”. Back then that was a tough sell. Today, things seem different: the idea that social media platforms operated by Twitter and Meta are at least “monopoly-shaped” is acceptable and often accepted. That was not the last time I presented about the problem of centralization of the Internet.

Two years later, at the FSFE assembly during 32C3, I gave another talk, about the insanity of having over fifty(!) different, incompatible open decentralized social networking protocols. Some of those slides did not age well, but the core point remains valid — compatibility is important; otherwise open decentralized social networks compete against each other and never get the chance to reach a size where the network effect really kicks in for them.

Thankfully, now ActivityPub provides that common layer of compatibility for a lot of different projects. It’s important to keep that in mind; Mastodon might be the poster child of the Fediverse today, but software projects come and go. A healthy ecosystem of different but compatible software makes the whole network more resilient.

Privilege

I do recognize that my ability to quit Twitter cold turkey and still be able to find employment in my area of expertise, and to have a support network, is a form of privilege that not everyone has. But since I did have that privilege, even though I did not fully understand it at the time, I felt it was imperative I use it to do what I thought to be right: stop supporting centralized platforms like Twitter.

That’s the crux of it. By being on Twitter or Facebook, we support those platforms. And by not being anywhere else but on centralized platforms, we make it harder for other people to leave them — as we ourselves become one more person that those who do want to leave cannot find on the alternative networks, and thus one more reason to not stray outside the garden walls.

Not everyone has the privilege to quit Twitter. Not everyone has the spoons to set up an account on the Fediverse in addition to their Twitter presence. But many of us do, if we’re being honest with ourselves. The Twitter debacle shows why we should try to break out of the silos, if we do have that privilege and these spoons — in part for the sake of those who don’t.

Decentralized social networks in general, and the Fediverse in particular, are far from perfect. We should and will make them better.

By focusing our efforts on them, instead of on centrally controlled walled gardens, we can at least make sure that if we build something of value and import, if we create communities and connections there, if we invest the time and effort into setting up our presence, it will not all potentially disappear one day because a particular service got bought out or a megacorp got bored of running it.

Fighting Disinformation: We're Solving The Wrong Problems

This post was written for and originally published by the Institute of Network Cultures as part of the Dispatches from Ukraine: Tactical Media Reflections and Responses publication. It also benefited from copy editing by Chloë Arkenbout, and proofreading by Laurence Scherz.


Tackling disinformation and misinformation is a problem that is important, timely, hard… and, in no way new. Throughout history, different forms of propaganda, manipulation, and biased reporting have been present and deployed — consciously or not; maliciously or not — to steer political discourse and to goad public outrage. The issue has admittedly become more urgent lately and we do need to do something about it. I believe, however, that so far we’ve been focusing on the wrong parts of it.

Consider the term “fake news” itself. It feels like a new invention, even though its literal use was first recorded in 1890. On its face it means “news that is untrue”, but of course it has been twisted and abused to claim that certain factual reporting is false or manufactured — to a point where its very use might suggest that the person using it is not being entirely forthright.

That’s the crux of it; in a way, “fake” is in the eye of the beholder.

Matter of trust

While it is possible to define misinformation and disinformation, any such definition necessarily relies on things that are not easy (or possible) to quickly verify: a news item’s relation to truth, and its authors’ or distributors’ intent.

This is especially true in any domain that deals with complex, highly nuanced knowledge, all the more so when stakes are high and emotions run hot. Public debate around COVID-19 is a chilling example. Regardless of how much “own research” anyone has done, for those without an advanced medical and scientific background it eventually boiled down to the question of “who do you trust”. Some trusted medical professionals, some didn’t (and still don’t).

As the world continues to assess the harrowing consequences of the pandemic, it is clear that the misinformation around it, and the disinformation campaigns about it, had a real cost, expressed in needless human suffering and lives lost.

It is tempting, therefore, to call for censorship or other sanctions against misinformation and disinformation peddlers. And indeed, in many places legislation is already in place that punishes them with fines or jail time. These places include Turkey and Russia, and it will surprise no one that media organizations are sounding the alarm about such laws.

The Russian case is especially relevant here. On the one hand, the Russian state insists on calling its war of aggression against Ukraine a “special military operation” and blatantly lies about losses sustained by the Russian armed forces, and about war crimes committed by them. On the other hand, the Kremlin appoints itself the arbiter of truth and demands that news organizations in Russia propagate these lies on its behalf — using “anti-fake news” laws as leverage.

Disinformation peddlers are not just trying to push specific narratives. The broader aim is to discredit the very idea that a reliable, trustworthy source of information can exist at all. After all, if nothing is trustworthy, the disinformation peddlers themselves are as trustworthy as it gets. The target is trust itself.

And so we apparently find ourselves in an impossible position:

On the one hand, the global pandemic, a war in Eastern Europe, and the climate crisis are all complex, emotionally charged, high-stakes issues that can easily be exploited by peddlers of misinformation and disinformation; this makes misinformation and disinformation existential threats that urgently need to be dealt with.

On the other hand, in many ways, the cure might be worse than the disease. “Anti-fake news” laws can, just like libel laws, enable malicious actors to stifle truthful but inconvenient reporting, to the detriment of the public debate, and the debating public. Employing censorship to fight disinformation and misinformation is fraught with peril.

I believe that we are looking for solutions to the wrong aspects of the problem. Instead of trying to legislate misinformation and disinformation away, we should be looking closely at how it is possible that they spread so fast (and who benefits from that). We should be finding ways to fix the media funding crisis; and we should be making sure that future generations receive the mental tools that allow them to cut through biases, hoaxes, rhetorical tricks, and logical fallacies weaponized to wage information wars.

Compounding the problem

The reason misinformation and disinformation spread so fast is that our most commonly used communication tools have been built in a way that promotes exactly that kind of content over fact-checked, long-form, nuanced reporting.

According to the Washington Post, “Facebook programmed the algorithm that decides what people see in their news feeds to use the reaction emoji as signals to push more emotional and provocative content — including content likely to make them angry.”

When this is combined with the fact that “[Facebook’s] data scientists confirmed in 2019 that posts that sparked [the] angry reaction emoji were disproportionately likely to include misinformation, toxicity and low-quality news”, you get a tool fine-tuned to spread misinformation and disinformation. What’s worse, the more people get angry at a particular post, the more it spreads. The more angry commenters point out how false it is, the more the algorithm promotes it to others.

One could call this the “outrage dividend”, and disinformation benefits especially handsomely from it. It is related to “yellow journalism”, the type of journalism where newspapers present little or no legitimate, well-researched news, relying instead on eye-catching headlines to drive sales. The difference is that the tabloids of the early 20th century didn’t get an additional boost from a global communication system effectively designed to promote this kind of content.
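To make the mechanics concrete, here is a toy sketch of engagement-weighted feed ranking. This is my own reconstruction for illustration, not Facebook’s actual code; what was reported is that reaction emoji were at one point weighted five times as heavily as a plain “like”, while the comment and share weights below are made up:

    # A toy sketch of engagement-weighted ranking; my own illustration,
    # not Facebook's actual algorithm. Reaction emoji reportedly counted
    # five times as much as a "like"; comment/share weights are made up.
    REACTION_WEIGHTS = {"like": 1, "love": 5, "haha": 5,
                        "wow": 5, "sad": 5, "angry": 5}

    def engagement_score(post):
        # Every reaction counts, and anger counts a lot; furious
        # "this is false!" comments only push the post further up.
        score = sum(REACTION_WEIGHTS.get(r, 1) * n
                    for r, n in post["reactions"].items())
        score += 15 * post["comments"] + 30 * post["shares"]
        return score

    posts = [
        {"id": "nuanced-fact-check", "reactions": {"like": 200},
         "comments": 5, "shares": 2},
        {"id": "outrage-bait", "reactions": {"angry": 150, "haha": 20},
         "comments": 80, "shares": 40},
    ]

    # The feed shows the highest-scoring posts first: outrage wins handily.
    for post in sorted(posts, key=engagement_score, reverse=True):
        print(post["id"], engagement_score(post))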

I am not saying Facebook intentionally designed its platform to become the best tool a malicious disinformation actor could dream of. This might have been (and probably was) an innocent mistake, an unintended consequence of the way the post-promoting algorithm was supposed to work.

But in large systems, even tiny mistakes compound to become huge problems, especially over time. And Facebook happens to be a gigantic system that has been with us for almost two decades. In the immortal words of fictional Senator Soaper: “To err is human, but to really foul things up you need a computer.”

Of course, the solution is not as simple as just telling Facebook and other social media platforms not to do this. What we need (among other things) is algorithmic transparency, so that we can reason about how and why exactly a particular piece of content gets promoted.

More importantly, we also need to decentralize our online areas of public debate. The current situation in which we consume (and publish) most of our news through two or three global companies, who effectively have full control over our feeds and over our ability to reach our audiences, is untenable. Monopolized, centralized social media is a monoculture where mind viruses can spread unchecked.

It’s worth noting that these monopolistic monocultures (in both the policy and the software sense) are a very enticing target for anyone inclined to maliciously exploit the algorithm’s weaknesses. The post-promoting algorithm is, after all, just software, and all software has bugs. If you find a way to game the system, you get to reach an enormous audience. It should then come as no surprise that most vaccine hoaxes on social media can be traced back to only 12 people.

Centralization obviously also relates to the ability of billionaires to just buy a social network wholesale, and to the inability (or unwillingness) of mainstream social media platforms to deal with abuse and extremism. These problems all stem from the fact that a handful of for-profit companies control the daily communication of several billion people. This is too few companies to wield that kind of power, especially when they demonstrably wield it so badly.

Alternatives already exist. The Fediverse, a decentralized social network, has no single company controlling it (and no shady algorithm deciding who gets to see which posts), and does not have to come up with a single set of rules for everyone on it (an impossible task, as former Twitter CEO Jack Dorsey admits). Its decentralized nature (there are thousands of servers run by different people and groups, with different rules) means that it’s easier to deal with abuse. And since it’s not controlled by a single for-profit company, there is no incentive to keep bad actors around so as not to risk an outflow of users (and thus a drop in stock prices).

So we can start by at least setting up a presence in the Fediverse right now (following the thousands of users who migrated there after Elon Musk’s Twitter bid). And we can push for centralized social media walled gardens to be forced to open their protocols, so that their owners can no longer keep us hostage. Just like the ability to move a number between mobile providers makes it easier to switch while keeping in touch with our contacts, the ability to communicate across different social networks would make it easier to transition out of the walled gardens without losing our audience.

Media funding

As far as funding is concerned, entities spreading disinformation have at least three advantages over reliable media and fact-checking organizations.

First, they can be bankrolled by actors who do not care if they turn a profit. Second, they don’t have to spend any money on actual reporting, research, fact-checking, and everything else that is both required and costly in an honest news outlet. Third, as opposed to a lot of nuanced long-form journalism, disinformation benefits greatly from the aforementioned “outrage dividend” — it is easier for disinformation to get the clicks and generate ad revenue.

Meanwhile, honest media organizations are squeezed from every possible side, not least by the very platforms that gate-keep their reach or provide (and pay for) the ads on their websites.

Many organizations, including small public grant-funded outlets, find themselves in a position where they feel they have to pay Facebook for “reach”, that is, to promote their posts on its platform. They don’t benefit from the outrage dividend, after all.

In other words, money that would otherwise go into paying journalists working for a small, often embattled media organization gets funneled to one of the biggest tech companies in the world, which consciously built its system as a “roach motel” — easy to get in, very hard to get out once you start using it — and now exploits that to extract payments for “reach”. An economist might call it “monopolistic rent-seeking”.

Meanwhile, the biggest ad network operator, Google, uses its own near-monopoly position to extract an ever larger share of ad revenues, leaving less and less on the table for the media organizations that rely on it for their ads.

All this means that as time goes by it gets progressively harder to publish quality fact-checked news. This is again tied to centralization giving a few Big Tech companies the ability to control global information flow and extract rents from that.

A move to non-targeted, contextual ads might be worth a shot — some studies show that targeted advertising offers quite limited gains compared to other forms of advertising. At the same time, cutting out the rent-seeking middleman leaves a larger slice of the pie on the table for publishers. More public funding (perhaps financed by a tax levied on the mega-platforms) is also an idea worth considering.

Media education

Finally, we need to make sure our audiences can understand what they’re reading, including the fact that somebody might have a vested interest in writing a post or an article in a particular way. We cannot have that without robust media literacy education in schools.

Logic and rhetoric have long been banished from most public schools as, apparently, not useful for finding a job. Logical fallacies are barely (if at all) covered. At the same time, both misinformation and disinformation rely heavily on logical fallacies. I will not be at all original when I say that school curricula need to emphasize critical thinking, but it still needs to be said.

We also need to update the way we teach to fit the current world. Education is still largely built around the idea that information is scarce and the main difficulty is acquiring it (hence its focus on memorizing facts and figures). Meanwhile, for at least a decade now, information has been plentiful, and the difficulty lies in filtering it and figuring out which information sources to trust.

Solving the right problem, together

“Every complex problem has a solution which is simple, direct, plausible — and wrong”, observed H. L. Mencken. This aptly describes the push for seemingly simple solutions to the misinformation and disinformation crisis, such as legislation making disinformation (however defined) “illegal”.

News and fact-checking communities have limited resources. We cannot afford to spend them on ineffective solutions — much less on in-fighting about proposals that are both highly controversial and broadly recognized as dangerous.

To really deal with this crisis, we need to recognize centralization — of social media, of ad networks, of media ownership, of power over our daily communication, and of many other areas related to news publishing — along with poor media literacy among the public, as the crucial underlying causes that need to be tackled.

Once we do, we have options. Those mentioned in this text are just rough ideas; there are bound to be many more. But we need to start by focusing on the right parts of the problem.

Dealing with SEO Link Spam E-mails

Disclaimer: I am not a lawyer. I am not your lawyer. None of this is legal advice. All of this might also be a horribly bad idea.

Ah, SEO link spam e-mails. If you have a blog that’s been online longer than, say, three years, you know what I’m talking about:

Hey,

I read your article at <link-to-a-blogpost-of-mine> talking about <actually-not-the-topic-of-the-blogpost>. I think your readers would benefit from a link to <link-to-an-irrelevant-or-trivial-piece>.

Would you consider linking to our article?

For a long time I just ignored these, flagging them as spam and moving on. Obviously I am not going to link to some marketing crap that’s there only to drive up the SEO of some random site.

But then that one spammer showed up in my mailbox, and he was persistent. Several e-mails and follow-ups within a month. I decided I needed a better strategy.

What if I told them to pay for a link being placed on my blog?

I asked for input on fedi, and after quite a few useful suggestions and comments, I drafted what is now my standard template to deal with these kinds of requests.

The Template

Hey,

thanks for reaching out. My going rate for a link placed on my blog is $500 USD; I get to decide where and how I place it, and within what content. It will be placed in a regular blogpost, reachable by search engines, on the blog in question. It will stay up for at least a year. No other guarantees are made.

I require payment of half of the sum ($250, non-refundable) before I prepare the specific placement offer, for you to accept or reject. The placement, context and meaning of the link in the placement offer shall be determined at my sole and absolute discretion. There is no representation or warranty whatsoever as to whether the link is placed in a way that would imply an endorsement, or even fail to be an explicit or implied disparagement.

Once provided, the placement offer is final, and if rejected, I understand you are no longer interested in placing a link on my blog. At that point the initial payment is considered payment for my time and expertise in preparing the placement offer.

Once you accept the placement offer, I will put the link online within 10 business days, and I will expect payment in full no later than 20 business days after it goes online. After that period, interest will accrue at 12% p.a., calculated annually.

Please be advised that any further communication from anyone at <company-name-or-domain-spam-e-mail-was-sent-from> or in relation to <domain-of-the-link-being-peddled> that is neither a clear rejection of this deal nor an acceptance of the terms as outlined herein (nor discussion about invoicing or accounting technicalities) will accrue a $50 processing fee. Any further communication from anyone at <company-name-or-domain-spam-e-mail-was-sent-from> or in relation to <domain-of-the-link-being-peddled>, including communication apparently unrelated to the matter at hand, amounts to acceptance of these terms, regardless of when it takes place and who the sender is. Any and all disputes shall be subject solely to the law of my jurisdiction (Iceland) and handled solely in the courts therein.

Do let me know if you have any specific invoicing/accounting requirements. I am looking forward to doing business with you.

The Point

The point, obviously, is to limit the amount of SEO link spam e-mails I have to deal with. But of course if somebody decides to take me up on the offer, I am happy to pocket the $500 to publish a blogpost about how they just paid $500 for the privilege of being made fun of, by me.

Yes, I will link to where they ask; yes, it will be reachable by search engines; but also: yes, the link might have the rel="sponsored nofollow" attribute set.
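For the curious: rel="sponsored" tells search engines the link is a paid placement, and "nofollow" hints that it should confer no ranking benefit, which rather defeats the purpose of buying it. A quick sketch of how such a placement might be rendered (the URL is hypothetical):

    # Render a paid link with rel="sponsored nofollow": search engines
    # are told it is a paid placement and should pass it no ranking
    # weight. The URL below is hypothetical.
    def sponsored_link(url: str, text: str) -> str:
        return f'<a href="{url}" rel="sponsored nofollow">{text}</a>'

    print(sponsored_link("https://example.com/seo-bait",
                         "a truly remarkable article"))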

This is also somewhat the point of this very blogpost. Each and every SEO link spam e-mail claims that the sender “has read my site”. Well, if they did, they are now surely aware of what’s in store.

Finally, most SEO link spam e-mails mention you can “unsubscribe” by replying to them. I never “subscribed” to any of them in the first place, so that just feels wrong. More importantly though, I simply don’t trust the spammers to actually respect my request to be removed from their contacts database.

I do however trust that once they are informed that any further communication would cost them $50, they might not want to communicate further.

The Outcome

I have used the template several times over the last few months. I have not once heard back from any of the spammers that got served with it, and the overall amount of SEO link spam e-mails I receive seems to have gone down measurably — which might or might not be related to my use of the template, of course.

The Future

I would love to be able to charge SEO link spam e-mail senders even for the first e-mail they send me. So I am thinking of adding some kind of EULA to that effect to my blog.

I hate EULAs; I find the assumption that some terms are binding even if the visitor has not explicitly agreed to them (nor read them) to be asinine. But if that’s the world we live in, I might as well use it to make SEO link spam a bit more costly.