In his 2012 letter to investors, Mark Zuckerberg perfectly summed up the main benefit of social media as “the power to share.” What happens, however, when anyone is allowed to share whatever they would like, whenever it is convenient, without any restrictions?
On one hand, social media has brought many benefits to society, including shared understanding, efficiency, and interconnectedness. Digital communication platforms allow for the dispersion of messages to large audiences without the limitations of physical communication or hierarchy. An example of this social equalization is information shared across social media platforms such as X, Instagram, YouTube, and TikTok. In a 2022 Atlantic article, NYU social psychologist Jonathan Haidt states that “social media has given voice to some people who had little previously,” and “has made it easier to hold powerful people accountable for their misdeeds.” Another important advantage of unregulated information exchange is efficiency, because the process of mutual communication is optimized on a global platform that is freely accessible 24 hours a day. Finally, the last major convenience of social media platforms is interconnectedness. Technological optimists argue that digital communication helps ensure that society is not dominated by a single opinion, since no “regime [can] build a wall to keep out the internet” (Haidt).
Despite the benefits of social media, unregulated information is dangerous to users, and the fake news problem is further amplified by algorithmic “filter bubbles.” The term was coined by author, activist, and entrepreneur Eli Pariser, who explains the concept in his book The Filter Bubble: What the Internet Is Hiding From You: a “filter bubble” is a state in which we are “not exposed to information that might challenge or expand our worldviews.” This is harmful to individual users because it limits their thinking and makes them less accepting of opinions that run contrary to their carefully curated feeds. Personally, I have experienced the detrimental effects of unchecked misinformation and “filter bubbles” through my parents' obsession with QAnon. Their feeds are filled with related political posts that reinforce their confirmation bias and further entrench them in progressively wilder conspiracy theories. Unfortunately, since the pandemic, they have become so invested in their new belief system that any opposition to QAnon theories is viewed as “fake news,” when fake news itself was what sent them down the rabbit hole initially.
Beyond the dangers posed to the individual user, fake news causes serious problems for society, including the spread of misinformation, the dismantling of trust in longstanding institutions, and cancel culture. The openness of social media channels and viral algorithms allows misinformation to flourish. After the introduction of “like” and “share” buttons, people started crafting posts to go viral, the more outlandish and click-bait-worthy the better. Research shows that “posts that evoke emotion, especially anger toward an outgroup, are most likely to be shared” (Haidt). This has been most prevalent in American politics, with extremist ideas gaining the most attention and leading to distrust of established institutions. Furthermore, people are repeatedly “canceled” on social media for expressing controversial opinions, creating even greater divisions between ideological groups.
What then can be done by social media giants like Meta and X to regulate and control misinformation and fake news? Or should they act at all? While it is true that the unrestrained nature of social media makes for seamless worldwide communication, the dangers of allowing anyone to share whatever they would like, whenever it is convenient, and without any restrictions are calamitous to the individual user and to society. Therefore, social media giants should seek to regulate misinformation and fake news out of a moral and ethical obligation. However, digital media companies must also be careful to pursue these changes in a way that is not viewed as censorship or an infringement on free speech. According to Haidt, the “share” and “like” features allow “fake and outrage-inducing content” to “attain a level of reach and influence.” Instead of removing specific posts and flagging others as “fact-checked,” changes must happen at the algorithmic level. Broadening a user’s viewed content will naturally allow them to escape their “filter bubbles” and look at information with a discerning eye. Another solution to combat the virality of misinformation is content-neutral reform, such as that proposed by Facebook whistleblower Frances Haugen. These changes include limits on the number of automatic “shares” and third-party verification systems for users. In short, it is disastrous for the populace when social media companies stand idly by while misinformation runs rampant, and thus the algorithms must be re-engineered to better serve users and decrease the popularity of sensationalist content.
Milena D’Andrea, a student in Jon Pfeiffer’s media law class at Pepperdine University, wrote the above essay in response to the following question: "Misinformation and fake news spread rapidly on social media. If you were in charge of X (formerly Twitter), Instagram, YouTube, and TikTok, would you regulate misinformation and fake news? If so, how? If not, why not?" Milena is an Integrated Marketing Communications and Multimedia Design major.