New Study Confirms Large Language Models Pose No Existential Risk
by Sophie Jenkins
London, UK (SPX) Aug 13, 2024
ChatGPT and other large language models (LLMs) do not have the capability to learn independently or develop new skills, meaning they pose no existential threat to humanity, according to recent research conducted by the University of Bath and the Technical University of Darmstadt in Germany.

Published as part of the proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024), the study reveals that while LLMs are proficient in language and capable of following instructions, they lack the ability to master new skills without direct guidance. As a result, they remain inherently controllable, predictable, and safe.

The researchers concluded that despite LLMs being trained on increasingly large datasets, they can continue to be used without significant safety concerns, though the potential for misuse still exists.

As these models evolve, they are expected to generate more sophisticated language and improve in responding to explicit prompts. However, it is highly unlikely that they will develop complex reasoning skills.

"The prevailing narrative that this type of AI is a threat to humanity prevents the widespread adoption and development of these technologies, and also diverts attention from the genuine issues that require our focus," said Dr. Harish Tayyar Madabushi, a computer scientist at the University of Bath and co-author of the study on the 'emergent abilities' of LLMs.

Led by Professor Iryna Gurevych at the Technical University of Darmstadt, the collaborative research team conducted experiments to evaluate LLMs' ability to tackle tasks they had not previously encountered, often referred to as emergent abilities.

For example, LLMs can answer questions about social situations without having been explicitly trained to do so. While earlier research suggested this capability stemmed from models 'knowing' about social situations, the researchers demonstrated that it is actually a result of LLMs' proficiency in a process known as in-context learning (ICL), where they complete tasks based on examples provided.

Through extensive experimentation, the team showed that the combination of LLMs' abilities to follow instructions (ICL), their memory, and their linguistic proficiency can account for both their capabilities and limitations.
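To make the mechanism concrete: in-context learning means the model is never retrained or updated; worked examples are simply placed in the prompt, and the model completes the pattern. The sketch below illustrates the structure of such a few-shot prompt. The sentiment-labeling task and example texts are hypothetical illustrations, not drawn from the study itself.

```python
def build_icl_prompt(examples, query):
    """Assemble a few-shot prompt: demonstration pairs followed by the query.

    The model's weights are untouched; the 'learning' happens entirely
    through the examples embedded in the prompt text.
    """
    lines = []
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}")
    # The final entry leaves the label blank for the model to fill in.
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

demos = [
    ("The plot was gripping from start to finish.", "positive"),
    ("I walked out halfway through.", "negative"),
]
prompt = build_icl_prompt(demos, "A beautifully shot, moving film.")
print(prompt)
```

A prompt assembled this way can steer an LLM toward a task it was never explicitly trained on, which is the behavior the researchers argue explains apparent "emergent abilities" without invoking any hidden reasoning capacity.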

Dr. Tayyar Madabushi explained, "The fear has been that as models grow larger, they will solve new problems that we cannot currently predict, potentially acquiring hazardous abilities like reasoning and planning. This concern was discussed extensively, such as at the AI Safety Summit last year at Bletchley Park, for which we were asked to provide commentary. However, our study shows that the fear of a model going rogue and doing something unexpected, innovative, and dangerous is unfounded."

He further emphasized, "Concerns over the existential threat posed by LLMs are not limited to non-experts and have been expressed by some leading AI researchers worldwide. However, our tests clearly demonstrate that these fears about emergent complex reasoning abilities in LLMs are not supported by evidence."

While acknowledging the need to address existing risks like AI misuse for creating fake news or facilitating fraud, Dr. Tayyar Madabushi argued that it would be premature to regulate AI based on unproven existential threats.

He noted, "For end users, relying on LLMs to interpret and execute complex tasks requiring advanced reasoning without explicit instructions is likely to lead to errors. Instead, users will benefit from clearly specifying their requirements and providing examples whenever possible, except for the simplest tasks."

Professor Gurevych added, "Our findings do not suggest that AI poses no threat at all. Rather, we demonstrate that the supposed emergence of complex thinking skills linked to specific threats is unsupported by evidence, and that we can effectively control the learning process of LLMs. Future research should, therefore, focus on other potential risks, such as the misuse of these models for generating fake news."

Research Report: Are Emergent Abilities in Large Language Models just In-Context Learning?

Related Links
University of Bath

