Human-centred design of automated labelling interventions to mitigate misinformation on social media

EVENT DATE
10 Jan 2025
TIME
3:00 pm – 5:00 pm
LOCATION
SUTD Think Tank 22 (Building 2, Level 3, Room 2.311)

Abstract

Misinformation is a pressing issue on social media platforms, and one mitigation strategy is to label problematic content. An assortment of labels has been introduced over the years, typically applied by professional human fact-checkers. However, the speed and volume of information exchange on social media make scalability a challenge for this practice, prompting consideration of automated fact-checkers, where machine learning models and natural language processing techniques are used to detect misinformation.

 

The use of automated fact-check labelling raises several considerations. On one end is the technology, where the question is whether the models are capable of the task. This requires technical expertise and comprehensive evaluation, and is challenged by the constantly changing content and context of misinformation. On the other end are the humans, the people who will be exposed to the labels on social media, where the question is how they will perceive and react to them.

 

Focusing on the latter, I investigate various labelling interventions that incorporate automated fact-checking elements, examining people’s perceptions of the labels and their attitudes towards the labelled content. I explore the regard in which people hold automated fact-checkers, and how automated labels affect the perceived veracity of content and people’s intentions to engage with and verify it. I also look at additional scaffolds for automated labels to enhance people’s information evaluation process. I explore these questions on different platforms, namely instant messaging apps, social media and online forums, and in two distinct sociopolitical contexts, Singapore and the United States.

 

The findings point toward the viability of automated labelling interventions, subject to various caveats in their design and implementation that require due consideration. A set of precautions and recommendations is put forward for a human-centred approach to adopting automated fact-checkers on social media.

Speaker’s Profile

Gionnieve Lim is a PhD candidate at the Singapore University of Technology and Design. Her research lies at the intersection of human-computer interaction and misinformation, where she studies the use of automated misinformation interventions on social platforms.

ISTD PhD Oral Defense Seminar by Gionnieve Lim - Human-centred design of automated labelling interventions to mitigate misinformation on social media