by Brooks Hays Washington DC (UPI) Jul 22, 2020
How can governments, online platforms and internet users curb the influence of foreign misinformation campaigns? Research published Wednesday in the journal Science Advances suggests it is possible to identify bad actors, or trolls, in real time using machine learning algorithms.

According to researcher Jacob Shapiro, a professor of politics and international affairs at Princeton University, misinformation campaigns can reveal themselves in two main ways. "To have influence, coordinated operations need to say something new, or they need to say a lot of something that users are already saying," Shapiro told UPI in an email. "You can find the first because it's unusual content by definition."

Finding the second is harder, but Shapiro and his colleagues thought they could design and train a machine learning algorithm to catch trolls.

"When influence campaigns try to shift a conversation with large amounts of content, they rely on relatively low-skilled workers producing a lot of posts," Shapiro said. "Workers are not natives of the influence targets and need to be trained on what 'normal' looks like. Moreover, their managers need standards to assess performance."

These two realities yield patterns that algorithms can identify. The researchers used past misinformation campaigns from China, Russia and Venezuela to train their troll-finding algorithm. Once it was built, Shapiro and his colleagues put the algorithm to the test by presenting it with new content produced by both trolls and normal users.

The experiments showed that if programmers select the right features from online posts, the algorithm can successfully distinguish between authentic content and content produced by trolls.

"What we found is that, historically, an out-of-the-box random forest algorithm with our features does pretty well at picking out Chinese, Russian and Venezuelan trolls across several different prediction tasks," Shapiro said.
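To make the approach concrete, here is a minimal sketch of an "out-of-the-box" random forest trained on labeled posts from one period and evaluated on posts from a later period. The study's actual features and data are not reproduced here; the two features below (a content-novelty score and a posting-rate score) and all numbers are invented stand-ins, using scikit-learn's default classifier.

```python
# Illustrative sketch only: the study's real features and data are not public
# here. "Novelty" and "rate" are hypothetical per-post features; the idea that
# troll posts cluster around shared talking points is an assumption for the demo.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def make_posts(n, troll):
    # Hypothetical pattern: troll posts show low content novelty (repeating
    # campaign talking points) and a high, uniform posting rate.
    novelty = rng.normal(0.3 if troll else 0.7, 0.1, n)
    rate = rng.normal(0.8 if troll else 0.4, 0.1, n)
    return np.column_stack([novelty, rate])

# "Last month": labeled training data from known campaigns and normal users.
X_train = np.vstack([make_posts(500, True), make_posts(500, False)])
y_train = np.array([1] * 500 + [0] * 500)

# "This month": new, unseen posts to classify.
X_test = np.vstack([make_posts(100, True), make_posts(100, False)])
y_test = np.array([1] * 100 + [0] * 100)

# Default (out-of-the-box) random forest, as the researchers describe using.
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

On cleanly separated synthetic features like these the classifier scores well; real platform data would be far noisier, which is why feature selection matters so much in the study.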
For example, the algorithm can scan last month's content on a given platform and use what it learns to identify this month's trolls.

According to the new study, no single variable gives a troll away. Social media platforms are highly dynamic media in which users constantly change how they engage, so, Shapiro said, trolls have to adapt their content production too. Thanks to the new algorithm's machine learning capabilities, this complexity didn't prevent the researchers from sussing out trolls.

"What our research shows is that in any given period for any given campaign, a large share of the troll activity looked different from normal users in discernible ways," Shapiro said.

The researchers suggest the algorithm, once its machine learning prowess is improved, could be adopted and deployed by both online platforms and governments. As with any probabilistic model, the algorithm would likely make mistakes when distinguishing between genuine users and trolls.

"That's why one should never use this kind of tool to make attribution of specific accounts," Shapiro said.

Instead, Shapiro sees the technology being used to help governments and online platforms anticipate the topics, scope and effects of a foreign influence campaign. The technology could also help moderators queue up content and accounts for more careful scrutiny.
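The "queue for scrutiny, never attribute" workflow described above can be sketched as follows: the model emits a troll *probability* per account, and only high-scoring accounts are surfaced for human review. Everything here is invented for illustration: the account names, the two per-account features, and the 0.9 review threshold are assumptions, not anything from the study.

```python
# Illustrative sketch: probabilistic scores feed a human-review queue rather
# than making hard attributions. All data, names and thresholds are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Synthetic training data with two hypothetical per-account features;
# class 1 (trolls) clusters near 0.3, class 0 (normal users) near 0.7.
X_train = np.vstack([rng.normal(0.3, 0.1, (300, 2)),
                     rng.normal(0.7, 0.1, (300, 2))])
y_train = np.array([1] * 300 + [0] * 300)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Score unlabeled accounts and rank them; only the highest-probability
# accounts are flagged for careful human scrutiny.
accounts = ["acct_a", "acct_b", "acct_c", "acct_d"]
X_new = np.array([[0.31, 0.28], [0.68, 0.72], [0.33, 0.35], [0.71, 0.66]])
scores = clf.predict_proba(X_new)[:, 1]
queue = sorted(zip(accounts, scores), key=lambda t: -t[1])
for name, p in queue:
    flag = "REVIEW" if p >= 0.9 else "ok"
    print(f"{name}: P(troll)={p:.2f} {flag}")
```

The key design point, matching Shapiro's caveat, is that the model's output is a prioritization signal for moderators, never an attribution of a specific account.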