The Student News Site of University of Arizona

The Daily Wildcat


UA scholars develop model of types of fake news and potential solutions to combat it


With fake news propagating through the national media landscape, a team of researchers at the UA James E. Rogers College of Law has developed a model to reduce the spread of “alternative facts.”

The report, titled “Identifying and Countering Fake News,” aims to clarify the different types of fake news, help enable conversation around the subject and propose solutions to combat the phenomenon.

Law professors Derek and Jane Bambauer co-authored the report with lead author Mark Verstraete, a fellow at the law college researching free expression at the Center for Digital Society and Data Studies.

“The main thing that inspired us is we’d seen a lot of these different types of misinformation clustered under the heading of fake news,” Verstraete said. “We saw that they had different motivations and different social harms and we just wanted to break them out to create a clearer road map to start devising solutions.”

The report identifies two main characteristics of fake news: 1) an intent to deceive its audience and 2) financial interests in spreading phony information.

Since not all fake news is created equal, the team put together a matrix based on intent and payoff to describe different types of news, from hoaxes to humor.

“[The report] creates more granular distinctions among the types of misinformation,” Verstraete said. “If anything, it just brings a lot of clarity to these discussions.”

The report does not include a classification for news stories stemming from journalistic publications that occasionally get the facts wrong or articles with which people simply disagree.

RELATED: Panel discusses line between free speech and hateful statements

Additionally, the report provides potential solutions through four different approaches: law, code, markets and norms.

Verstraete said the team looked at past attempts to address harms in the online ecosystem. The four categories come from legal scholar Lawrence Lessig, who identified them as the principal forces that constrain behavior.

“We really don’t endorse one [solution] specifically,” Verstraete said. “We think there are good points and bad points with each of them, so this is just a first cut at putting solutions on the table and seeing how they interact with other values we have.”

A legal approach would likely run into conflicts with free speech. Much expression in America is protected by the First Amendment, but laws covering defamation and libel, along with the Federal Trade Commission’s authority, could provide some relief from the spread of dubious information online.

An economic approach through market-based solutions could discourage fake news sources by hitting purveyors in the pocketbook. The report cites Google’s announcement following the 2016 presidential election that it would bar fake news websites from using its advertising infrastructure.

However, the report says methods like this would only target some sources of specious statements, leaving those without financial motives to continue their false claims.

RELATED: Panel of journalists discuss fake news at SPJ event

With the vast majority of fake news spreading through internet channels, content managers could create algorithms that sort through news stories and decide which are fact and which are fiction. 

Facebook attempted this model, according to the report, but a human element led to claims of biased story selection. When Facebook decreased human involvement, the integrity of the code diminished, allowing hoaxes to once again dominate users’ news feeds.

Verstraete took particular interest in Facebook’s financial incentives surrounding fake news. Part of the report details how Facebook stands to benefit financially from fake news posts.

“Fake news is oftentimes inflammatory and causes people to click it and like it and share it with their friends,” Verstraete said. “When they do that, Facebook collects a lot of data that they can then aggregate to serve ads to people.”

Finally, a social approach would apply pressure on individuals against spreading fake news. The problem with this solution, the report says, is that it’s difficult to engineer social norms as they usually have natural origins.

“I hope the report is used as the beginnings of a larger discussion about speech on the internet and kinda just a first cut at how solutions could and should work,” Verstraete said.

While the authors don’t claim to have the definitive solution to the fake news problem, they say the report could provide the context and a starting point from which to discuss ways to curb the influence of invented information.


Follow Nick Meyers on Twitter.

