Navigating Misinformation in a Digitally Connected World

By Jordan Bogle

In a world of mounting crises, from new and ongoing wars to an intensifying climate situation, accurate information forms the foundation of effective responses and public resilience. However, in a digitally interconnected era marked by the emergence of artificial intelligence (AI), finding reliable information has become difficult. False narratives now circulate widely, amplified by AI and social media, and orchestrated disinformation campaigns in the political realm have become increasingly evident. Examining mitigation strategies, from traditional methods to the ethical considerations tied to AI deployment and social media, is therefore imperative. There is an urgent call for collective, informed action to safeguard societies.

Misleading Campaigns and Human Vulnerability  

Amid global crises, the spread of misinformation surges within digitally connected societies. Social media serves as a platform where misleading stories rapidly circulate. Advances in digital technology, such as algorithmic content distribution and the speed of information sharing, have further amplified these stories, spreading them widely during times of uncertainty. The struggle to separate fact from fiction did not start with AI, but it has intensified with the rise of generated content. According to Timothy Shoup, a senior adviser at the Copenhagen Institute for Futures Studies, as much as 99% of content on the internet could be AI-generated by 2030. This possibility will only increase the strain on content moderation systems, which are already overwhelmed.

Misinformation significantly impacts how the public perceives and responds to crises, often steering individuals and politicians down misleading paths. Take the COVID-19 pandemic as an example, where false information shaped much of the public’s response, undermining public health protocols, overloading healthcare systems, and sowing political and social divisions. Today, a similar problem is on display in a world at war. Misinformation is distorting historical accounts and manipulating public opinion. AI has become a tool for spreading false information by producing fake content and deepfake images, to the point where individuals can no longer immediately believe even what they see with their own eyes.

The psychology behind this vulnerability to inaccurate information shows that constructed realities and fear-driven responses often guide behavior during crises. People tend to prefer information that aligns with their existing beliefs, and the influx of so-called ‘facts’ allows them to indulge that preference without any actual factual basis. A 2016 study that analyzed the interactions of over 300 million Facebook users with over 900 news outlets showed that people limit their social media exposure to a small number of idea-aligning sites, despite the widespread availability of content. This selective exposure reinforces preconceived notions and further polarizes communities. As individuals share false narratives faster than real ones, understanding these psychological inclinations is vital to creating strategies that guide more informed and rational responses in critical times.

However, AI and social media users are not the only forces driving the spread of misinformation. Deliberate misleading campaigns by both state and non-state actors not only have a significant impact on public perceptions and institutions during times of crisis but can also be the cause of crises themselves. By creating false narratives, manipulating the media, and amplifying divisive content, these efforts often seek to influence political landscapes, sow confusion among the public, and achieve political objectives. During the movement for Catalonian independence, for example, many accused Russia of spreading disinformation within the region, hoping to exploit weaknesses in democratic states.

The Quest for Proactive Strategies: An Uneasy but Possible Task

The consequences of such campaigns extend far beyond immediate political outcomes. They erode public trust in institutions and breed skepticism and polarization among populations. As seen following the 2020 United States presidential election, many members of government, including former President Trump, embarked on a disinformation campaign intended to overturn the results of a legitimate democratic election. By manipulating public opinion, these campaigns jeopardize the foundation of democratic institutions, casting doubt on the legitimacy of elections, governance, and the media. This erosion of trust threatens societal cohesion and, ultimately, the fundamental tenets of democracy, underscoring the critical need for measures to counter such campaigns’ impact on public perception.

In the ongoing battle against misinformation, proactive strategies are necessary to improve our defenses. With no gatekeeper to determine what is true or false, strengthening fact-checking initiatives and expanding education in media literacy and critical thinking is more crucial than ever. Enhancing the accessibility and reach of these programs could empower individuals to differentiate between accurate and deceptive information, bolstering their resilience against false narratives. Everyone on the internet can do their part by verifying, cross-checking, and comparing content they find online before sharing it, reducing the influx of false information.

The Crucial Need for Collaborative Efforts

However, individuals cannot be the only ones responsible for combating fake news. Those who seek to spread the truth now face their most important job yet, and collaborative efforts among governments, tech companies, journalists, and civil society are instrumental. A multifaceted approach is needed to create a more comprehensive response to misinformation. Governments should enact legislation incentivizing responsible platform moderation, and it is past time to hold tech companies accountable for the mass spread of misinformation by requiring transparency and content control. Some companies argue that platform moderation infringes on free speech; this concern must be addressed while striking a balance that safeguards against misinformation. Together, these efforts can create a more resilient information landscape capable of combating misinformation.

Indeed, in a world where misinformation so easily takes the lead, combating false narratives has become imperative to ensuring effective responses to crises. The surge in AI-generated content intensifies the challenge by influencing public perception during pivotal moments, from pandemics to political upheavals and conflicts. This escalation, coupled with our psychological vulnerability to misinformation, underscores the necessity of intervention. Strengthening fact-checking, promoting education in media literacy and critical thinking, and fostering collaboration among governments, tech entities, and civil society are essential.

Jordan Bogle is a Political Science and Public Affairs Master of Arts student at SLU-Madrid from Atlanta, Georgia, USA. 

To quote this article or video, please use the following reference:  Jordan Bogle (2023), “Navigating Misinformation in a Digitally Connected World”,

The OCC publishes a wide range of opinions that are meant to help our readers think about International Relations. This publication reflects the views only of the author, and neither the OCC nor Saint Louis University can be held responsible for any use which may be made of the opinion of the author and/or the information contained therein.

Featured Image Credit: AI Image Generated by Dall-E