Digital Misinformation Crisis


Oct 04, 2024

The world is at risk of losing the ability to know what is real. With the emergence of technologies such as artificial intelligence and deepfakes, and with articles circulating the internet without any credible sources, humanity has entered a dangerous path toward what seems like an endless black hole of confusion. As the race to innovate while protecting society from technology progresses, it is becoming more difficult to identify what is true, and it is important to recognize social media's role in this phenomenon. After examining social media's part in the spread of misinformation, this essay proposes a few solutions that might create an alternate ending to what seems like a dystopian future film.

When Facebook was first created in 2004, then called "TheFacebook," the intention was to build an online platform where people could share their ideas and form connections virtually. Although the motivation behind it was arguably good, the freedom to post without any factual sources foreshadowed a future of misinformation. With the introduction of reposting, known as "retweeting" on Twitter (now X), any opinion or false claim could be shared in a matter of seconds with millions of users globally. Until relatively recently, the major social media companies applied few meaningful filters to restrict what people could post, including sensitive political information. Social media also breeds misinformation because of its convenient and simple nature; users are comfortable and content during their time on the app and are often too lazy to fact-check the information they come across.

A major change to Instagram's algorithm in 2016 also played a crucial role in easing the spread of misinformation, when the platform switched its chronological timeline to one based on user interests. This change brought a variety of negative side effects, one of the biggest being polarization. The more a user viewed a certain type of content or specific belief, the more of it was pushed onto their feed to keep them on the app longer, a smart but detrimental move by companies to generate more ad revenue. This polarizing content and culture is dangerous for society's mental health, as people become less open-minded toward views outside what is pushed onto their feed, and less reasonable and rational.
Since content that angers users tends to generate more engagement than a post that is simply informative or purely positive, creators have an incentive to twist facts or exaggerate information. With more engagement, these posts are pushed onto more feeds across platforms, resulting in a tangle of both true and untrue claims and assumptions. It is also important to acknowledge the power that the major social media platforms hold. As restrictions have become more prevalent following violence and chaos sparked by online polarization, it has become apparent that these platforms truly have the power to silence whomever they disagree with. Society needs to remember that these companies are always biased, whether or not the bias is intentional.

Even though it often seems hopeless to watch what the world becomes as social media grows more integrated with daily life, several strategies can at least tame the situation. With the election coming up, the circulation of information, both true and false, has increased exponentially and most likely will not stop until months after the voting has finished. Social media platforms, however, still have a great deal of power to make positive change. Instead of choosing what to restrict and subjecting their platforms to unfair bias, they could build a system that identifies posts presenting any sort of public information. Once such a post is identified, filtering out personal posts with friends or dog photos, the platform could ask the creator to submit sources of evidence. Even without sources, posts could still be shared and seen by all, but with a disclaimer noting that the post has no submitted or verified evidence and may therefore be false. Certain posts that fall into more sensitive or violent categories should still be restricted or banned, especially for younger users, but that is a very different topic.

Another possible source of hope for the future of the digital world is "Web 3," the integration of personal ownership into an internet that was previously just "view and share" (often referred to as Web 2). With power decentralized and ownership returned to users from these borderline-monopolistic companies, there is major potential to minimize platform bias, although it might also make false information harder to track consistently. Ultimately, even with these changes, there will always be misinformation online; it is up to society to stay educated, to avoid immediately believing whatever it sees, and to take the time to do its own research.


Ellie Kaiser, a student in Jon Pfeiffer's media law class at Pepperdine University, wrote the above essay in response to the following prompt: "The Spread of Misinformation: Investigate the role of social media in the spread of misinformation and potential strategies to combat it." Ellie is an Advertising and Multimedia Design major.
