Artificial Intelligence (AI) has been making waves across sectors worldwide, and its influence is now being felt heavily in the social media landscape. The 2024 U.S. presidential election could see a rise in AI applications aimed at manipulating voters’ sentiments, amid concerns that AI influence on social media poses substantial threats to democratic elections. How so, you may ask?
Imagine the technology falling into the wrong hands: those wishing to sow discord and polarize voters along particular lines. Already, the Microsoft Threat Analysis Center (MTAC) has observed so-called “China-affiliated actors” taking advantage of this. Using AI-generated visual media, these actors have reportedly launched large-scale campaigns on politically divisive topics such as gun violence, as well as campaigns denigrating U.S. political figures.
On the one hand, we’re witnessing AI being used as a tool for propaganda and manipulation. On the other, AI is also being deployed to detect and counter the spread of disinformation. Accrete AI, for instance, has developed software that provides real-time predictions of disinformation threats emanating from social media platforms. The upshot is still somewhat dystopian: we need AI to protect us from harmful uses of AI.
Furthermore, the concerning rise of “troll farms” online adds to the threat. MIT Technology Review reported that during the 2020 U.S. election, troll farms, organized groups of internet trolls that set out to disrupt political opinion and decision-making, were reaching a staggering 140 million Americans each month.
And while AI offers the opportunity for deeper, more personalized interaction, the potential drawbacks are evident. The U.S. Federal Election Commission (FEC) has unanimously agreed to advance a petition to regulate the use of AI in political ads, citing deepfakes as a significant threat to democracy. Even tech giant Google has announced changes to its political content policy, making disclosure mandatory for political campaign ads that use AI.
Our experiences and personal opinions shape us and guide our voting decisions. Unregulated interference, such as AI-driven manipulation on social media platforms, poses a serious threat to these processes and can ultimately sway public opinion on a massive scale.
In conclusion, it’s imperative to understand the implications of AI and to manage the risks and rewards it can bring to our sociopolitical landscape. As with many advanced technologies, AI holds the potential to greatly benefit or severely harm society; it is up to us collectively to determine its trajectory.
Source: Cointelegraph