ChatGPT has given many people their first chance to experiment with artificial intelligence (AI), whether they were looking for cooking tips or assistance with a speech.
ChatGPT is built on OpenAI's cutting-edge language processing technology. The AI was trained on text databases from the internet, including books, magazines, and Wikipedia entries; in all, 300 billion words were fed into the system.
The result is a chatbot with encyclopedic knowledge that can sometimes seem uncannily human.
You can get a recipe by telling ChatGPT what ingredients are in your kitchen cabinet. Need a catchy introduction for a lengthy presentation? No problem.
But is it too good? Its convincing simulation of human responses could be a potent weapon in the hands of malicious individuals.
Experts in academia, cybersecurity, and AI have expressed concern about the possibility that bad actors could use ChatGPT to foment discord and disinformation on social media.
Until now, producing disinformation has required a significant amount of human labor. But according to a report released in January by Georgetown University, the Stanford Internet Observatory, and OpenAI, an AI like ChatGPT would make it much simpler for so-called troll armies to scale up their operations.
Complex language processing tools like ChatGPT may have an impact on so-called influence campaigns on social media.
Such campaigns can support or oppose policies as well as deflect criticism and promote the image of a politician or party in power. Additionally, they spread false information on social media by using fake accounts.
One such campaign was launched in the run-up to the 2016 US election.
The Senate Intelligence Committee reported in 2019 that thousands of Twitter, Facebook, Instagram, and YouTube accounts set up by the St. Petersburg-based Internet Research Agency were dedicated to undermining Hillary Clinton's campaign and promoting Donald Trump.
Future elections, however, might have to deal with an even greater flood of false information.
According to the AI report published in January, "the potential of language models to rival human-written content at low cost suggests that these models, like any powerful technology, may provide distinct advantages to propagandists who choose to use them."
The advantages "could expand access to a greater number of actors, enable new tactics of influence, and make a campaign's messaging much more tailored and potentially effective," the report cautions.
And it is not only the quantity of false information that could grow; its quality could improve, too.
Josh Goldstein, a co-author of the paper and a research fellow at Georgetown's Center for Security and Emerging Technology, where he works on the CyberAI Project, says AI systems could improve the persuasive quality of content and make those messages difficult for ordinary internet users to recognize as part of coordinated disinformation campaigns.
"Generative language models could generate a significant amount of consistently original content. and make it so that propagandists are not forced to duplicate their text on different news websites or social media accounts, says the author.
If a platform is flooded with false information or propaganda, Mr. Goldstein says, it becomes harder for the public to work out what is true. Often, that may be the very goal of the bad actors behind influence operations.
His report also mentions how access to these systems may no longer be restricted to a small number of organizations.
Currently, only a few businesses or governments are in possession of top-tier language models, and these models are constrained in the tasks they can reliably complete and the languages they can output.
According to his report, the likelihood that propagandists will have access to cutting-edge generative models could rise if more actors make investments in them.
Gary Marcus, an AI expert and founder of Geometric Intelligence, an AI company acquired by Uber in 2016, says malicious groups may use AI-created content much as spammers use theirs: as cheap material to blast out as widely as possible.
"Those who disseminate spam rely on the most credulous individuals to click on their links in an effort to spread it to as many people as possible. However, artificial intelligence (AI) can make that squirt gun the biggest Super Soaker ever. ".
And even if platforms such as Twitter and Facebook remove 75 percent of what those offenders post on their networks, Mr. Marcus notes, "there is still at least 10 times as much content as before that can still aim to mislead people online."
The surge in fake social media accounts has already left Twitter and Facebook awash with phony profiles, and the rapid development of language models will only add to their number.
"Things like ChatGPT can scale that spread of fake accounts on a level we haven't seen before," says Vincent Conitzer, a professor of computer science at Carnegie Mellon University, "and it can become harder to distinguish each of those accounts from human beings."
Both the January 2023 paper co-authored by Mr. Goldstein and a similar report from security firm WithSecure Intelligence warn that generative language models can quickly and efficiently produce fake news articles that spread across social media, adding to the deluge of false narratives that could sway voters before a crucial election.
But if misinformation and fake news become an even greater threat because of AI systems like ChatGPT, should social media platforms be as proactive as possible in removing it? Or will they be lax in policing such posts?
"Facebook and other platforms should be flagging fake content, but Facebook has been failing that test spectacularly," says Luís A Nunes Amaral, co-director of the Northwestern Institute on Complex Systems.
The reasons for that inaction, he says, include the cost of monitoring every single post, and the knowledge that these fake posts are designed to enrage and divide people, which drives up engagement. That engagement is profitable for Facebook.