The parent company of Facebook and Instagram found that, so far, AI-powered tactics "provide only incremental productivity and content-generation gains" for bad actors, and that Meta has been able to disrupt the deceptive influence operations it detected.
Meta's efforts to combat "coordinated inauthentic behavior" on its platforms come as fears mount that generative AI will be used to trick or confuse people in elections in the United States and other countries.
Facebook has been accused for years of being used as a powerful platform for election disinformation.
Russian operatives used Facebook and other US-based social media to stir political tensions in the 2016 election won by Donald Trump.
Experts fear an unprecedented deluge of disinformation from bad actors on social networks because of the ease of using generative AI tools such as ChatGPT or the Dall-E image generator to make content on demand and in seconds.
AI has been used to create images and videos, to translate or generate text, and to craft fake news stories or summaries, according to the report.
Russia remains the top source of "coordinated inauthentic behavior" using bogus Facebook and Instagram accounts, Meta security policy director David Agranovich told reporters.
Since Russia's invasion of Ukraine in 2022, those efforts have been concentrated on undermining Ukraine and its allies, according to the report.
As the US election approaches, Meta expects Russia-backed online deception campaigns to attack political candidates who support Ukraine.
- Behavior based -
When Meta scouts for deception, it looks at how accounts act rather than the content they post.
Influence campaigns tend to span an array of online platforms, and Meta has noticed posts on X, formerly Twitter, used to make fabricated content seem more credible.
Meta shares its findings with X and other internet firms and says a coordinated defense is needed to thwart misinformation.
"As far as Twitter (X) is concerned, they are still going through a transition," Agranovich said when asked whether Meta sees X acting on deception tips.
"A lot of the people we've dealt with in the past there have moved on."
X has gutted trust and safety teams and scaled back content moderation efforts once used to tame misinformation, making it what researchers call a haven for disinformation.
False or misleading US election claims posted on X by its owner, Elon Musk, have amassed nearly 1.2 billion views this year, a watchdog reported last week, highlighting the billionaire's potential influence on the highly polarized White House race.
Researchers have raised alarm that X is a hotbed of political misinformation.
They have also flagged that Musk, who purchased the platform in 2022 and is a vocal backer of Trump, appears to be swaying voters by spreading falsehoods on his personal account.
"Elon Musk is abusing his privileged position as owner of a... politically influential social media platform to sow disinformation that generates discord and distrust," warned Imran Ahmed, CEO of the Center for Countering Digital Hate.
Musk recently faced a firehose of criticism for sharing with his followers an AI deepfake video featuring Trump's Democratic rival, Vice President Kamala Harris.