Yesterday, I had a lengthy discussion with a proponent of big centralised social media platforms. Not because they have a particular love for big companies, but because moderation is one of those issues that is genuinely hard to do right.
The numbers I could find suggest that about 20% of all content posted on social media ends up being removed by moderation. Most of this is probably automated spam and the like, but there is also a fair amount of graphic violence, outright porn and, because humans are terrible, abuse and hate.
Moderators who have to sift through all this have a miserable job; quite a few of them need counselling after a while.
So if you do moderation, you need the infrastructure to deal with a large volume of content, the wellbeing of your staff, all the hassle of handling complaints about your moderation decisions, plus whatever regulatory requirements apply.
Typically, this calls for a large scale operation.
Now, one of the reasons this happens is that people behave differently in a large-scale corporate environment than within their smaller circle of friends and acquaintances. If your social media pod is run by someone closer to you, you tend not to shit the bed, so to speak, because you know your behaviour may reflect poorly on your host.
If you federate the system, good moderation will still be needed, but it is entirely possible that each pod won't have to deal with nearly as much bad content, especially if there is an option to cut off whole pods from the federation when they behave too badly.
Of course, that last bit needs to be very carefully tuned, lest it result in censorship.
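To make the "cutting off whole pods" idea a bit more concrete, here is a minimal sketch of what such a cutoff could look like on the receiving side. The domain names and the `accept_incoming` helper are made up for illustration and don't correspond to any particular federation software; the point is simply that an instance-level blocklist is checked before anything from a blocked pod ever reaches local moderators.

```python
# Hypothetical sketch of instance-level defederation (not any real
# ActivityPub implementation): activities from blocked domains are
# rejected before they enter the local moderation queue.

from urllib.parse import urlparse

# Domains this pod has chosen to cut off entirely (made-up examples).
DEFEDERATED_DOMAINS = {"spam-haven.example", "abuse-farm.example"}

def accept_incoming(activity: dict) -> bool:
    """Return True if the remote activity should go on to normal moderation."""
    actor = activity.get("actor", "")
    domain = urlparse(actor).hostname or ""
    if domain in DEFEDERATED_DOMAINS:
        # Whole-pod cutoff: nothing from this instance gets federated in.
        return False
    return True

# An activity from a blocked pod is dropped outright,
# while one from a well-behaved pod proceeds as usual.
print(accept_incoming({"actor": "https://spam-haven.example/users/bot42"}))   # False
print(accept_incoming({"actor": "https://friendly-pod.example/users/alice"})) # True
```

The careful tuning mentioned above is exactly about who controls that blocklist and how entries get onto it.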