“Killer AI” is real. This is how we stay safe, healthy and strong in a brave new world


The rapid progress of artificial intelligence (AI) is nothing short of remarkable. From healthcare to finance, AI is transforming industries and has the potential to take human productivity to unprecedented levels. However, with this exciting promise comes a looming concern among the public and some pundits: the emergence of a “killer AI.” In a world where innovation has already transformed society in unexpected ways, how do we separate legitimate fears from those that should remain fiction?

To answer questions like this, we recently published a policy brief for the Mercatus Center at George Mason University entitled “On Defining ‘Killer AI.’” In it, we provide a novel framework for assessing AI systems for their potential to cause harm, an important step in addressing the challenges that AI poses and ensuring its responsible integration into society.

AI has already demonstrated its transformative power, offering solutions to some of society’s most pressing problems. It improves medical diagnosis, accelerates scientific research, and streamlines processes throughout the business world. By automating repetitive tasks, AI frees human talent to focus on higher-level tasks and creativity.

FILE – A character from the film “Terminator” on display, Oct. 7, 2011, at the house in the southern Austrian village of Thal where Austrian actor, former champion bodybuilder and former California governor Arnold Schwarzenegger was born. (REUTERS/Herwig Prammer)

The potential for good is limitless. While optimistic, it’s not particularly unreasonable to envision an AI-powered economy where, after an adjustment period, people are significantly healthier and wealthier while working far less than they do today.


However, it is important to ensure that this potential is realized safely. As far as we know, our attempt to assess the real-world safety risks of AI marks the first effort to comprehensively define the phenomenon of “killer AI.”

We define it as AI systems that directly cause physical harm or death, whether intentionally or through unforeseen consequences. Importantly, the definition encompasses and distinguishes between physical and virtual AI systems, recognizing that different forms of AI carry the potential for harm.

Though its scenarios are far-fetched, science fiction can at least help illustrate how physical and virtual AI systems could cause tangible physical damage. The Terminator character has long been cited as an example of the risks posed by physical AI systems. Potentially more dangerous, however, are virtual AI systems, an extreme example of which appears in the latest “Mission: Impossible” film. Our world is increasingly connected, and our critical infrastructure is no exception.


Our proposed framework offers a systematic approach to assessing AI systems, with a focus on prioritizing the well-being of the many over the interests of a few. By considering not only the possibility of harm but also its severity, we enable a rigorous assessment of the safety and risk factors of AI systems. This approach has the potential to uncover previously unnoticed threats and improve our ability to mitigate the risks associated with AI.

Our framework enables this by requiring deeper consideration of how an AI system could be repurposed or misused and of the potential impact of its use. Furthermore, we emphasize the importance of an interdisciplinary stakeholder assessment in approaching these considerations, which will allow for a more balanced perspective on the development and deployment of these systems.


This assessment can serve as a basis for comprehensive legislation, proper regulation, and ethical discussions about killer AI. Our focus on preserving human life and ensuring the well-being of many can help ensure legislative efforts address and prioritize the most pressing concerns raised by potential killer AIs.

Emphasizing the importance of involving multiple, interdisciplinary stakeholders could encourage people from different backgrounds to join the ongoing discussion. In this way, we hope that future legislation will be more comprehensive and the surrounding debate better informed.


While the framework is a potentially important tool for policymakers, industry leaders, researchers, and other stakeholders to rigorously assess AI systems, it also underscores the urgency of further research, testing, and proactivity in AI safety. That will be a challenge in such a fast-moving field. Fortunately, researchers will be motivated by the numerous opportunities to learn from the technology.

AI should be a force for good – one that enriches people’s lives, not one that puts them at risk. By developing effective policies and approaches to address the challenges of AI safety, society can harness the full potential of this emerging technology while protecting itself from potential harm. The framework we have presented is a valuable tool in this mission. Whether fears about killer AI prove true or unfounded, we will be better off if we can push into this exciting frontier while avoiding its unintended consequences.

Nathan Summers is an LTS Research Analyst. He is co-author of a recent policy brief from the Mercatus Center at George Mason University, “On Defining ‘Killer AI.’”

