A central problem with social media discourse is, in my opinion, the rise of the bots. You can see the magnitude of the problem in the current crisis, where researchers found that bots may account for around half of the Twitter accounts discussing COVID-19! Many of those accounts were created quite recently and have been amplifying misinformation, including false medical advice and conspiracy theories about the origin of the virus, while pushing to “reopen America”.
The problem is now quite serious. There are vast armies of bots out there trying to spread disinformation, foment division and fracture communities. Elections and the democratic process itself are under attack. It almost makes me nostalgic for the good old days when hackers were just trying to steal credit card numbers instead of trying to interfere with elections!
This came up in my “identity stroll” with Sudan Sethuramalingam, Head of Scaled Operations at Twitter. I was asking him about the threats to online content integrity (and what kinds of action we might take to counter them). He has a background in heavy-duty know-your-customer (KYC) and anti-money laundering (AML) work with banks as well as social media platforms, so he really understands the magnitude of the problem. We discussed the complexity of managing social media content and algorithms given the vast scale of the networks and their importance to society.
After listening to him, I reflected that it is probably best to focus on the bots rather than on the content, which seems to me impossible to police. If people could simply set their social media feeds to ignore content from bots, we might improve the quality of the conversation there significantly. So maybe that’s where we should focus: not on flagging content, but on flagging accounts so that social media users can choose to ignore them.
The way forward is surely not for Twitter et al to try to figure out who is a disinformation bot and whether they should be banned (after all, there are plenty of good bots out there) but for Twitter et al to give their users the information they need to make a choice. Why can’t I tell Twitter that I only want to see tweets from real people who can be identified? I don’t want to know the identities — it’s none of my business who a person actually is, and it’s none of Twitter’s business either — I just want to know whether I’m following a person or not! I know that I’m on the right track here, by the way, because noted entrepreneur Elon Musk agrees with this prescription, having reportedly told Jack Dorsey, the head of Twitter, that “I think it would be helpful to differentiate between real and fake users… Is this a real person or is this a bot net or a sort of troll army or something like that?”.
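To make the idea concrete, here is a minimal sketch of what such a client-side filter might look like. It assumes a hypothetical `author_is_person` flag that the platform would attest to after some privacy-preserving personhood check; the `Tweet` class, field names, and accounts are all illustrative, not any real Twitter API.

```python
from dataclasses import dataclass

@dataclass
class Tweet:
    author: str
    # Hypothetical flag: the platform attests the author passed a
    # personhood check, without revealing who the person actually is.
    author_is_person: bool
    text: str

def people_only(feed):
    """Keep only tweets whose authors are attested as real people."""
    return [t for t in feed if t.author_is_person]

feed = [
    Tweet("alice", True, "Stay safe, everyone."),
    Tweet("acct_48291", False, "REOPEN NOW!!!"),
    Tweet("bob", True, "Washing your hands really does help."),
]

print([t.author for t in people_only(feed)])  # → ['alice', 'bob']
```

The point of the sketch is that the filter needs only a yes/no attestation, never an identity, which is exactly the "IS a person, not WHO the person is" distinction argued for above.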
If we could just tell real people from fake people, we’d be well on the way to solving the problem. Fortunately, this is where INSTINCT can make a difference, so I’m very happy to be here helping Au10tix launch a product that will make a real difference in the fight against the fake people.