SINGAPORE (Apr 2): Is it time for Facebook, Twitter and Google to be held responsible for the news and information that they carry on their platforms? Media companies that have watched these platforms eat their lunch over the past decade might be inclined to argue that it is, of course. Yet, governments are beginning to recognise the risk that false information and fake news pose to their economies and societies too.
Over the past three weeks, Singapore held a series of parliamentary committee hearings on how it should deal with “deliberate online falsehoods”. The high point of these hearings was arguably on March 22, when representatives of tech giants Facebook, Twitter and Google as well as industry association Asia Internet Coalition turned up to testify. As it happened, their testimonies came just as news broke that Cambridge Analytica had harvested private information from Facebook profiles of more than 50 million people and tried to use the data to influence the outcome of the 2016 US election. In fact, only the night before their testimonies, Facebook founder and CEO Mark Zuckerberg had made a statement on the whole affair that left many observers unimpressed.
Simon Milner, vice-president of public policy for Asia-Pacific at Facebook, found himself being grilled for three hours by Home Affairs and Law Minister K Shanmugam. Among other things, Milner was pressed on why the company had not revealed more information about the data breach earlier. Shanmugam also went on to ask Milner if Facebook’s policies require it to take down content that is shown to be false. Milner responded that Facebook would comply with a court order. Shanmugam subsequently said, “Now, of course, the courts can only act based on legislation. You realise that?” Milner replied, “I do realise that and I understand where you’re heading.”
Any legislation to curb deliberate online falsehoods should be crafted carefully in order to be effective, though. Facebook (which owns Instagram and WhatsApp), Google (the main entity within Alphabet, which also owns YouTube) and Twitter are key channels through which the world consumes online content. But they are not traditional media companies. While they can be made to take down content deemed unacceptable under the law, they are not in a position to judge whether content posted and shared on their platforms is true or false.
In the first place, the content they carry is created and shared by other parties. More generally, in this divided world, it is all too easy for people to label anything they do not agree with as false. Indeed, people’s social media connections are often an echo chamber of their own biases and prejudices. There is, however, something that social media giants can do: ensure that the number of likes a post attracts, or the number of times it is shared, genuinely reflects the actions of real users.
Myla Pilao, director of core technology marketing at Trend Micro, said in her submission to the parliamentary committee that fake news is spread with the use of “tools and services commonly found in underground or even grey markets”. The submission went on to explain how “click farms” can be used to influence the outcome of online polls, and “automation bots” are often deployed to amplify the popularity of a fake news story.
Pilao provided a price list of sorts: 10,000 Facebook auto-likes per month for US$800 ($1,047); 1,000 Instagram subscribers for US$3 to US$15; 100 Twitter followers, likes or retweets for just 34 US cents.
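Working from the figures Pilao cited, a rough back-of-the-envelope comparison shows just how cheap this manipulation is per unit. The sketch below uses only the prices quoted in the submission; the data structure and rounding are illustrative, not anything from Trend Micro's materials:

```python
# Per-unit prices implied by the grey-market figures quoted above.
# The (price, quantity) pairs come from the article; nothing here is
# an official or current price list.
services = {
    "Facebook auto-like (per month)": (800.00, 10_000),  # US$800 for 10,000
    "Instagram subscriber":           (15.00,   1_000),  # upper end of US$3-15
    "Twitter follow/like/retweet":    (0.34,      100),  # 34 US cents for 100
}

for name, (price_usd, quantity) in services.items():
    per_unit_cents = price_usd / quantity * 100
    print(f"{name}: {per_unit_cents:.2f} US cents each")
```

Even at the dearest end of these quotes, a fake Facebook like costs about eight US cents and a fake Twitter follower a fraction of a cent.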
How prevalent is the use of these “tools”? The New York Times published a story in January about how an obscure US company called Devumi had collected millions of dollars selling Twitter followers and retweets to celebrities, businesses and “anyone who wants to appear more popular or exert influence online”. Using a stock of at least 3.5 million automated accounts, each sold many times over, the company provided its customers with more than 200 million Twitter followers.
Interestingly, many of these automated accounts used the names, profile pictures and other personal information of real Twitter users, according to The Times report. By some calculations, as many as 48 million of Twitter’s reported active users — nearly 15% — are automated accounts designed to simulate real people, the report said.
Shortly after The Times story was published, more than a million followers disappeared from the accounts of dozens of prominent Twitter users, the newspaper subsequently reported. US lawmakers began calling for an investigation into Devumi, and pressing for social media companies to link every account to a human being. Twitter said it would take action against Devumi’s practices.
While the extent to which automated bots and click farms can sway the opinions of social media users is debatable, they clearly can create the impression that something is more popular than it really is. Twitter and Facebook may be in no position to judge whether content on their platforms is true or false, but they should not have any problems determining if that content is being promoted by fraudulent accounts.
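The platforms do not disclose how they screen for such accounts, but the kind of signal-based check involved can be sketched in a few lines. The following is a purely hypothetical illustration: the account fields, thresholds and scoring are invented for this example and are not Twitter's or Facebook's actual detection criteria:

```python
# Hypothetical heuristic for flagging accounts that may be automated.
# Every field and threshold below is an illustrative assumption only;
# real platform systems weigh far more signals than this.
from dataclasses import dataclass

@dataclass
class Account:
    followers: int
    following: int
    posts_per_day: float
    account_age_days: int
    has_profile_photo: bool

def suspicion_score(a: Account) -> int:
    """Count simple red flags for a single account."""
    flags = 0
    if a.following > 0 and a.followers / a.following < 0.01:
        flags += 1  # follows thousands but is followed by almost no one
    if a.posts_per_day > 100:
        flags += 1  # a posting rate hard to sustain by hand
    if a.account_age_days < 7:
        flags += 1  # very new account
    if not a.has_profile_photo:
        flags += 1
    return flags

bot_like = Account(followers=3, following=5_000, posts_per_day=400,
                   account_age_days=2, has_profile_photo=False)
print(suspicion_score(bot_like))  # all four flags trip for this profile
```

The point is not the specific thresholds but that amplification by fraudulent accounts leaves statistical traces, such as the ones The Times found when Devumi's bot followers vanished en masse, that the platforms are well placed to detect.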