A tripartite approach to tackling fake news - media tech companies, State, consumers
The Straits Times, 7 May 2019
Lim Sun Sun For The Straits Times
Lim Sun Sun is head of humanities, arts and social sciences at the Singapore University of Technology and Design and a Nominated Member of Parliament.
Among the many wonderful museums in Singapore, one of the most evocative must be the Former Ford Factory. Walking down the museum's quiet driveway, you feel like you might come face to face with Lieutenant-General Arthur Percival and his troops, trudging in resolutely as they bear a flag of surrender, soon to capitulate to the Japanese forces.
This museum poignantly captures the scars of war, colonialism and occupation, and indeed, is also a repository of fake news. Yes, fake news.
The museum's well-curated displays include an issue of the Illustrated London News from December 1941 claiming that the Japanese Army was backward, ill-equipped and close to being vanquished by the British forces in South-east Asia.
Another fascinating showcase features photographs and captions from the Japan Photo Almanac of 1943, proclaiming that Syonan-To (as Singapore was known during the Japanese Occupation) had blossomed into a booming economy under Japanese rule.
In fact, the British forces in Singapore succumbed quickly to the Japanese; and the Syonan-To period was one of hardship and torment for the Singapore people.
Clearly, the fabrication of falsehoods with a view to manipulating the audience is by no means a novel enterprise.
What is new, of course, are the dizzying speeds and diverse means by which falsehoods can be produced and disseminated, as well as the richness and complexity of our media landscape. Today, it would be difficult for any state to get away with spreading falsehoods of the nature the British and Japanese manufactured.
The current media environment is simply too expansive and inclusive to permit the dominance of just one voice. Instead, there exists a plethora of voices to provide variegated accounts of reality, offering elaborations, validations or contradictions. And we are all the richer for it.
Such is the democratised media landscape we live in today. Building knowledge for and about our society is a collective endeavour, shaped by governments, scaffolded by technology companies and media producers, and supported by media consumers. How does the Protection from Online Falsehoods and Manipulation Bill (Pofma) then fit into or atop such a media environment?
Singapore's approach of using legislation to empower ministers to issue correction and take-down orders for online factual falsehoods seems to address the need for immediacy. The proposed key measures of targeted and general corrections also rest on evidence that presenting consumers with both the original falsehood and the correction gives them the opportunity to consider the information in its totality and to draw their own conclusions. However, new communication platforms continue to proliferate relentlessly, and many increasingly offer encryption to safeguard consumer privacy. As a result, the execution of targeted or general corrections will differ from platform to platform, translating into varying degrees of consumer exposure to the corrections and varying levels of effectiveness.
While Pofma is meant to be platform-agnostic, the evidence we currently have on the effects of corrections is platform-, context- and even demographic group-specific.
This raises several questions. Will targeted and general corrections counter-productively amplify the original falsehood to a wider audience, without a guarantee of the original or new audience reading the accompanying correction?
Will we perversely and inadvertently create a cognitive shortcut where people seeing an official correction presume that the original falsehood must be true precisely because it is being vehemently debunked? Will falsehoods that do not come with corrections be summarily assumed to be true? Such unintended outcomes of a correction strategy cannot be ruled out, despite our best efforts and intentions.
TECH COMPANIES' CODES OF PRACTICE
Furthermore, the Bill will set out legally enforceable Codes of Practice for technology companies, covering fake online accounts and bots, digital advertising transparency and the de-prioritising of falsehoods.
Might there be companies that choose not to offer their services to consumers in Singapore to avoid being subjected to a potentially costly and administratively burdensome regime in what is ultimately a small media market?
Will we awake to news that apps such as Telegram and WhatsApp are no longer offered in Singapore? Such developments could inflict considerable damage on Singapore's image as a pro-innovation environment that is internationally renowned for attracting companies seeking to thrive in our liberal regulatory sandbox.
Indeed, corrections are but one possibility in an ever-evolving arsenal of technological remedies for online falsehoods including, for example, blockchain technology.
One concern is whether Pofma's requirement for technology companies to devise and execute ways to issue corrections will divert their resources away from developing measures that may yet be more potent in tackling online falsehoods.
To be sure, technology companies must not be given a free pass when it comes to online falsehoods.
As the Government has repeatedly and rightly articulated, technology companies must bear a burden of responsibility in helping to address the spread of online falsehoods. However, introducing legislation such as Pofma is not the ideal way to engage technology companies.
A TRIPARTITE WAY OF ENGAGEMENT
Instead, the problem of online falsehoods is an intractable one that requires a more creative and collaborative solution. We should take a leaf from the book of our labour movement with its tripartite model comprising the Government, employers and workers.
As a pro-innovation society known for its Smart Nation ambitions, Singapore can break new ground by creating a tripartite model of tackling online falsehoods that comprises the Government, media technology companies and consumers.
As currently crafted, Pofma creates a relationship of antagonism with technology companies, where they must reactively respond to the directives of the state rather than proactively develop other strategies to nip falsehoods in the bud. Instead, this proposed collaborative partnership will pave the way for a process of mutual education, where the state can be apprised of the latest technological trends and solutions, and technology companies can be schooled in the societal values they must uphold, and strive towards industry best practices.
Although we are already mounting a multi-pronged strategy for online falsehoods that includes media literacy education, we should further concretise this tripartite approach to help overcome the perceived shortcomings of Pofma.
Since being tabled, Pofma has drawn criticism for vesting individual ministers with the power to determine falsehoods. Such views reflect the visceral resistance to granting exclusive authority to a small sliver of individuals in a democratised media landscape.
To better align with the pluralistic nature of our media environment, we should establish an Information Integrity Institute, or ICube, that is funded by contributions from all media and technology companies. Their contributions can be pegged to a graduated scale commensurate with, for example, their annual returns from advertising. ICube should supplement Pofma and its correction regime. Indeed, if properly developed, it can help to reduce reliance on legislative solutions to online falsehoods.
First, ICube will play a critical fact-checking role and develop deep expertise to do so swiftly and effectively. Second, it should host an online repository of verifications of falsehoods and, over time, develop a reputation as a reliable third-party fact-checker that is the first port of call for consumers in need of news authentication. Third, it can play a capacity-building role, professionalising newsmakers in the production of quality information and raising ethical standards within the media and technology industries. Fourth, it should advance the research agenda by offering grants to investigate the problem of online falsehoods and providing access to critical data for sharpened analysis.
Due to its shared funding, ICube will be less liable to attract allegations of partisanship, and therefore be more likely to win trust for the verifications and refutations that it issues. With robust fact-checking accessible to all consumers, we can nurture greater individual discernment and reduce our reliance on state-issued corrections.
Communication platforms are evolving rapidly and, along with them, the media habits, skills and even biases of consumers. Given these shifting complexities, legislation should not be the primary weapon in our battle against online falsehoods.