Why using Facebook should require a media literacy test

We don’t let people start driving motor vehicles until they’ve taken driver’s training and then a test, for a good reason: Cars are dangerous to drivers, passengers, and pedestrians. Social networks, and the misleading and harmful content they circulate, are dangerous to society too, so some amount of media literacy education (and a test) should be a condition of using them.

Social media companies like Facebook and Twitter would surely object to such an idea, calling it onerous and extreme. But they willfully misunderstand the enormity of the threat that misinformation poses to democratic societies.

The Capitol riot gave us a glimpse of the kind of America misinformation helped create, and it illustrates why misinformation is so dangerous. On January 6, the nation witnessed an unprecedented attack on our seat of government that resulted in seven deaths and lawmakers fearing for their lives. The rioters who caused this mayhem planned their march on the Capitol on social networks, including in Facebook Groups, and were stirred to violent action by months of disinformation and conspiracy theories about the presidential election, which they believed had been “stolen” from Donald Trump.
While the big social networks have made significant investments in countering misinformation, removing all of it, or even most of it, would be impossible. That’s why it’s time to shift the focus from efforts to curb misinformation and its spread to giving people the tools to recognize and reject it.

Media literacy should certainly be taught in schools, but such training should also be made available in the place where people actually encounter misinformation: on social networks. Large social networks that distribute news and information should require users to take a brief media literacy course, and then pass a quiz, before logging in. If necessary, the social networks should be compelled to do this by force of law.

Moderation is difficult

So far we’ve relied on the big social networks to protect their users from misinformation. They use AI to find and delete, label, or reduce the spread of misleading content. The law even gives social networks protection from being sued over the content moderation decisions they make.

But relying on social networks to control misinformation clearly isn’t enough.

First of all, the tech companies that run social networks often have a financial incentive to let misinformation stay. The content-serving algorithms they use favor hyper-partisan and often half-true or untrue content because it consistently gets the most engagement in the form of likes, shares, and comments from users. It generates ad views. It’s good for business.

Second, big social networks are being forced into an endless process of expanding censorship as propagandists and conspiracy theory believers find new ways to spread false content. Facebook and other companies (like Parler) have learned that taking a purist approach to free speech, i.e., allowing any speech that isn’t illegal under U.S. law, isn’t practical in digital spaces. Censoring some kinds of content is responsible and good. In its latest capitulation, Facebook announced Monday that it will bar any posts of debunked theories about vaccines (including ones for COVID-19), such as the claim that they cause autism. But it’s impossible for even well-meaning censors to keep up with the endless ingenuity of disinformation’s purveyors.

There are logistical and technical reasons for that. Facebook relies on 15,000 (largely contract) content moderators to police the posts of its 2.7 billion users worldwide. And it’s increasingly turning to AI models to find and moderate harmful or false posts, but the company itself admits that these AI models can’t even comprehend some types of harmful speech, such as that found within memes or video.


That’s why it would be better to help consumers of social content detect and reject misinformation, and refrain from spreading it.

“I’ve recommended that the platforms do media literacy training right on their sites,” says disinformation and content moderation researcher Paul Barrett, deputy director of the New York University (NYU) Stern Center for Business and Human Rights. “There’s also the question of whether there should be a media literacy button on the site, staring you in the face, so that a user can access media literacy information at any time.”

A quick primer

Social media users young and old desperately need tools to recognize both misinformation (false content spread innocently, out of ignorance of the facts) and disinformation (false content knowingly spread for political or financial reasons), along with the skills to uncover who created a piece of content and analyze why.

Those are key components of media literacy, which also involves the ability to cross-check information against additional sources, evaluate the credibility of authors and sources, recognize the presence or absence of rigorous journalistic standards, and create and/or share media in a way that reflects its credibility, according to the United Nations Educational, Scientific and Cultural Organization (UNESCO).

Packaging a toolkit of basic media literacy tools, perhaps ones specific to “news literacy,” and presenting them right on social media sites would serve two purposes. It would arm social media users with practical tools to analyze what they’re seeing, and it would also put them on alert that they’re likely to encounter biased or misleading information on the other side of the login screen.

That’s important because social networks not only make misleading or untrue content available, they serve it up in a way that can disarm a user’s bullshit detector. The algorithms used by the likes of Facebook and YouTube favor content that’s likely to elicit an emotional, often partisan, response from the user. And if a member of Party A encounters a news story about a shameful act committed by a leader of Party B, they may believe it and then share it without noticing that the ultimate source of the information is Party A. Often the creators of such content bend (or completely break) the truth to maximize the emotional or partisan response.

This works very well on social networks: A 2018 Massachusetts Institute of Technology study of Twitter content found that falsehoods are 70% more likely to be retweeted than the truth, and that falsehoods reach 1,500 people about six times faster than the truth does.

But media literacy training also works. The Rand Corporation conducted a review of available research on the efficacy of media literacy training and found ample evidence across numerous studies that research subjects became less likely to fall for false content after varying amounts of media literacy training. Other organizations, including the American Academy of Pediatrics, the Centers for Disease Control and Prevention, and the European Commission, have reached similar conclusions and have strongly recommended media literacy training in schools.

Facebook has already taken some steps to embrace media literacy. It has partnered with the Poynter Institute to develop media literacy training tools for teens, millennials, and seniors. The company also donated $1 million to the News Literacy Project, which teaches students to scrutinize the sourcing of an article, make and critique news judgments, detect and dissect viral rumors, and recognize confirmation bias. Facebook also hosts a “media literacy library” on its site.

But it’s all voluntary. Requiring a training course and a quiz as a condition of admittance to the site is something else entirely. “The platforms would be very hesitant to do that because they’d worry about turning away users and cutting down on engagement,” says NYU’s Barrett.

If the social networks won’t act voluntarily, they might be compelled to require media literacy training by a regulatory body such as the Federal Trade Commission. From a regulatory perspective, this might be easier to accomplish than moving Congress to require media literacy education in public schools. It might also be a more targeted way of mitigating the real dangers posed by Facebook, compared to other proposals such as breaking up the company or removing its shield against lawsuits stemming from user content.


Americans became aware of misinformation when the Russians weaponized Facebook to interfere in the 2016 election. But while Robert Mueller’s report proved that the Russians spread misinformation, the line of causality between that and actual voting decisions remained blurry. For many Americans, January 6 made disinformation’s threat to our democracy real.

As more tangible harm is directly attributed to misinformation on social networks, it will become even clearer that people need some help fine-tuning their bullshit detectors before logging on.