talk: Users’ Preferences for Enhanced Misinformation Warnings on Twitter


The UMBC Cyber Defense Lab presents

Context, a Red Flag, or Both? Users’ Preferences for Enhanced Misinformation Warnings on Twitter

Prof. Filipo Sharevski
Adversarial Cybersecurity Automation Lab
DePaul University

12–1pm ET Friday, 4 Feb. 2022, WebEx


Warning users about hazardous information on social media is far from a simple usability task. So-called soft moderation must balance debunking falsehoods against the appearance of moderation bias, all without disrupting the social media consumption flow. Platforms therefore employ visually indistinguishable warning tags with generic text beneath content suspected of misinformation. This approach has produced an unfavorable outcome in which the warnings “backfired” and users believed the misinformation more, not less. To address this predicament, we developed enhancements to misinformation warnings that advise users on the context of the information hazard and expose them to standard warning iconography.

Balancing for comprehensibility, the enhanced warning tags provide context with regard to (1) fabricated facts and (2) improbable interpretations of facts. Instead of the generic “Get the facts about the COVID-19 vaccine” warning, users in the first case are warned, “Strange, Potentially Adverse Misinformation (SPAM): If this were an email, it would have ended up in your spam folder,” and in the second case, “For Facts Sake (FFS): In this tweet, facts are missing, out of context, manipulated, or missing a source.” The SPAM warning tag contextualizes misinformation through an analogy to the already familiar phenomenon of spam email, while the FFS warning tag, as an acronym, blends with Twitter’s characteristically compact language, a product of the tweet length restriction. The text-only warning tags were then paired with a usable security intervention hitherto ignored in the misinformation context: red flags rendered as watermarks over the suspected misinformation tweets. This tag-and-flag variant also allowed us to test user receptivity to warnings that incorporate contrast (red), gestalt iconography for general warnings (flag), and actionable advice for inspection (watermark).

We ran an A/B evaluation against Twitter’s original warnings in a usability study with 337 participants. The majority of participants preferred the enhancements as a nudge toward recognizing and avoiding misinformation. The enhanced warnings were most favored by politically left-leaning participants and, to a lesser degree, by moderates, but they also appealed to roughly a third of right-leaning participants. Education level was the only demographic factor shaping participants’ preferences for the proposed enhancements. With this work, we are the first to perform an A/B evaluation of enhanced social media warnings that provide context and introduce visual design frictions into interactions with hazardous information. Our analysis of sentiment toward soft moderation in general, and enhanced warning tags in particular, across political and demographic lines provides the basis for our recommendations on future refinements, frictions, and adaptations of soft moderation toward secure and safe behavior on social media.

About the Speaker. Dr. Filipo Sharevski is an assistant professor of cybersecurity and director of the Adversarial Cybersecurity Automation Lab (https://acal.cdm.depaul.edu). His main research interests include adversarial cybersecurity automation, mis/disinformation, usable security, and social engineering. Sharevski earned his PhD in interdisciplinary information security at Purdue University (CERIAS) in 2015.

Host: Alan T. Sherman. Support for this event was provided in part by the National Science Foundation under SFS grant DGE-1753681. The UMBC Cyber Defense Lab meets biweekly on Fridays, 12–1pm. All meetings are open to the public.

