In a post on Facebook Friday, Mark Zuckerberg explained the social network's newest plan to fight the proliferation of disinformation and fake news: community rating.
I've asked our product teams to make sure we prioritize news that is trustworthy, informative, and local. And we're starting next week with trusted sources.
(...)
We decided that having the community determine which sources are broadly trusted would be most objective.
Here's how this will work. As part of our ongoing quality surveys, we will now ask people whether they're familiar with a news source and, if so, whether they trust that source. The idea is that some news organizations are only trusted by their readers or watchers, and others are broadly trusted across society even by those who don't follow them directly. (We eliminate from the sample those who aren't familiar with a source, so the output is a ratio of those who trust the source to those who are familiar with it.)
This update will not change the amount of news you see on Facebook. It will only shift the balance of news you see towards sources that are determined to be trusted by the community.
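The scoring Zuckerberg describes reduces to a simple ratio: among respondents who say they are familiar with a source, what fraction say they trust it. A minimal sketch (the function name and data shape are my own, not Facebook's):

```python
def trust_ratio(responses):
    """Compute a broad-trust score from survey responses.

    Each response is a (familiar, trusts) pair of booleans. Respondents
    unfamiliar with the source are dropped from the sample, so the score
    is the share of familiar respondents who trust the source.
    """
    familiar = [trusts for familiar, trusts in responses if familiar]
    if not familiar:
        return None  # no familiar respondents; the source can't be scored
    return sum(familiar) / len(familiar)

# Ten respondents: six are familiar with the source, four of those trust it.
responses = [(True, True)] * 4 + [(True, False)] * 2 + [(False, False)] * 4
print(trust_ratio(responses))  # 4/6, about 0.667
```

Note how the denominator is familiarity, not total audience: a niche outlet trusted only by its own readers scores the same as a household name trusted across society, which is part of what makes the metric gameable.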
The bigger question for Facebook is how effective the new system will be at determining which sources are "trusted" and which content is "trustworthy" and "informative."
NYT:
For publishers, Facebook’s new ranking system raised immediate concerns, including whether crowdsourcing users’ opinions on trustworthiness might be open to manipulation.
“It is absolutely a positive move to start to try to separate the wheat from the chaff in terms of reputation and use brands as proxies for trust,” said Jason Kint, the chief executive of Digital Content Next, a trade group that represents entertainment and news organizations, including The New York Times. “But the devil’s in the details on how they’re going to actually execute on that.”
He continued, “How does that get hacked or gamed? How do we trust the ranking system? There’s a slew of questions at this point.”
The new system could also potentially favor publishers who are partisan. Facebook users, asked to rank which news they most trust, could choose sites that speak most clearly to their personal beliefs, in effect reducing the prominence of publishers who try to maintain an objective tone.
Also on Friday, Twitter posted an update on its ongoing examination of its role in spreading disinformation during the 2016 campaign season:
As previously announced, we identified and suspended a number of accounts that were potentially connected to a propaganda effort by a Russian government-linked organization known as the Internet Research Agency (IRA).
Consistent with our commitment to transparency, we are emailing notifications to 677,775 people in the United States who followed one of these accounts or retweeted or liked a Tweet from these accounts during the election period. Because we have already suspended these accounts, the relevant content on Twitter is no longer publicly available.
Twitter adds it has identified even more Russian election-related activity, only some of which came from IRA-linked accounts, and has taken steps to improve the quality of tweeted information overall:
In total, during the time period we investigated, the 3,814 identified IRA-linked accounts posted 175,993 Tweets, approximately 8.4% of which were election-related.
We have also provided Congress with the results of our supplemental analysis into activity believed to be automated, election-related activity originating out of Russia during the election period. Through our supplemental analysis, we have identified 13,512 additional accounts, for a total of 50,258 automated accounts that we identified as Russian-linked and Tweeting election-related content during the election period ...
(...)
In December 2017, our systems identified and challenged more than 6.4 million suspicious accounts globally per week, a 60% increase in our detection rate from October 2017. We have developed new techniques for identifying malicious automation (such as near-instantaneous replies to Tweets, non-random Tweet timing, and coordinated engagement).
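Twitter names the signals but not the criteria. A hedged sketch of what two of those signals could look like as heuristics, with thresholds that are entirely hypothetical (Twitter publishes no such numbers):

```python
import statistics

# Hypothetical thresholds -- Twitter does not disclose its actual criteria.
FAST_REPLY_SECONDS = 2.0  # replies quicker than a human could plausibly type
MIN_TIMING_STDEV = 5.0    # near-uniform gaps between tweets look scripted

def looks_automated(reply_latencies, tweet_gaps):
    """Flag an account matching the automation signals Twitter describes.

    reply_latencies: seconds between a tweet and this account's reply to it.
    tweet_gaps: seconds between the account's own consecutive tweets.
    """
    # Signal 1: near-instantaneous replies dominate the account's activity.
    fast = sum(1 for t in reply_latencies if t < FAST_REPLY_SECONDS)
    mostly_instant = bool(reply_latencies) and fast / len(reply_latencies) > 0.5

    # Signal 2: non-random tweet timing (gaps too regular to be human).
    too_regular = (len(tweet_gaps) > 2
                   and statistics.stdev(tweet_gaps) < MIN_TIMING_STDEV)

    return mostly_instant or too_regular

# An account that replies within a second and tweets every ~60s on the dot.
print(looks_automated([0.4, 0.8, 1.1], [60, 60, 61, 60]))  # True
```

The third signal, coordinated engagement, is harder to sketch because it spans accounts rather than describing one; it would require comparing timing and targets across a whole network.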
Twitter also says it is putting plans in place to avoid being caught off guard again later this year.
As part of our preparations for the U.S. midterm elections, our teams are organizing to:
- Verify major party candidates for all statewide and federal elective offices, and major national party accounts, as a hedge against impersonation;
- Maintain open lines of communication to federal and state election officials to quickly escalate issues that arise;
- Address escalations of account issues with respect to violations of Twitter Rules or applicable laws;
- Continually improve and apply our anti-spam technology to address networks of malicious automation targeting election-related matters; and
- Monitor trends and spikes in conversations relating to the 2018 elections for potential manipulation activity.
Facebook to rank news quality as part of fake news fight (Axios)
Facebook to Let Users Rank Credibility of News (NYT)
Update on Twitter’s Review of the 2016 U.S. Election (Twitter)
Zuckerberg Post (Facebook)