Three times artificial intelligence has scared scientists – from creating chemical weapons to claiming it has feelings

THE artificial intelligence revolution has only just begun, but there have already been several disturbing developments.

AI programs can be used to act on people’s worst instincts or to pursue nefarious goals such as creating weapons, and some have scared their own creators with an apparent lack of morality.

Visionaries like Elon Musk believe that unchecked artificial intelligence could lead to the extinction of humanity. Credit: Getty Images

What is artificial intelligence?

Artificial intelligence is an umbrella term for computer programs designed to simulate, imitate or copy human thought processes.

For example, an AI computer designed to play chess is programmed with a simple goal: win the game.

In the process of playing, the AI will model millions of potential outcomes of a given move and act on the one that gives the computer the best chance of winning.
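That brute-force search is essentially the classic minimax algorithm. Below is a minimal sketch in Python; to stay short and self-contained it plays the take-away game Nim (take one to three stones, and whoever takes the last stone wins) instead of chess, and all of the names and numbers are illustrative rather than taken from any real engine.

```python
# Toy game-tree search: model every reachable outcome of each move and
# pick the move with the best guaranteed result. The "game" is Nim
# rather than chess, purely to keep the example self-contained.

def minimax(stones: int, maximizing: bool) -> int:
    """Score a position: +1 if the computer can force a win, -1 if not."""
    if stones == 0:
        # Whoever just moved took the last stone and won the game.
        return -1 if maximizing else 1
    scores = [
        minimax(stones - take, not maximizing)
        for take in (1, 2, 3)
        if take <= stones
    ]
    return max(scores) if maximizing else min(scores)

def best_move(stones: int) -> int:
    """Model every legal move and act on the one with the best outcome."""
    return max(
        (take for take in (1, 2, 3) if take <= stones),
        key=lambda take: minimax(stones - take, maximizing=False),
    )

print(best_move(10))  # prints 2: taking two stones leaves the opponent lost
```

A real chess engine applies the same idea, but with a heuristic board score and aggressive pruning, because chess’s game tree is far too large to search to the end.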

A skilled human player will act similarly, analyzing moves and their consequences, but without the perfect recall, speed or precision of a computer.

AI can be applied to numerous fields and technologies.

Self-driving cars aim to reach their destination, taking in stimuli such as signage, pedestrians and roads along the way, just as a human driver would.

AI programs have also taken unexpected turns and stunned researchers with their dangerous tendencies or applications.

AI invents new chemical weapons

In March 2022, researchers revealed that artificial intelligence invented 40,000 new possible chemical weapons in just six hours.

Researchers working with an international security conference said an AI bot generated chemical weapons similar to VX, one of the most dangerous nerve agents ever made.

VX is a tasteless and odorless nerve agent, and even the smallest drop can make a person sweat and twitch.

“The way VX is lethal is that it actually stops your diaphragm, your lung muscles, from being able to move, so your lungs become paralyzed,” Fabio Urbina, the lead author of the paper, told The Verge.

“The biggest thing that jumped out initially was that many of the compounds generated were predicted to actually be more toxic than VX,” Urbina continued.

The dataset that drove the AI model is publicly available for free, meaning a threat actor with access to a comparable AI model could plug it into the open-source data and use it to create an arsenal of weapons.

“All it takes is some coding knowledge to turn a good AI into a chemical weapons-making machine.”

AI claims it has feelings

A Google engineer named Blake Lemoine made widely publicized claims that the company’s Language Model for Dialogue Applications (LaMDA) chatbot had become sentient and had feelings.

“If I didn’t know exactly what it was, which is this computer program that we built recently, I would think it was a 7-year-old, 8-year-old kid who happens to know physics,” Lemoine told The Washington Post in June 2022.

Google pushed back against his claims.

Brian Gabriel, a Google spokesman, said in a statement that Lemoine’s concerns had been reviewed and that, in accordance with Google’s AI principles, “the evidence does not support his claims.”

“[Lemoine] was told that there was no evidence that LaMDA was sentient (and plenty of evidence against it),” Gabriel said.

“Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.”

Google placed Lemoine on administrative leave and later fired him.

Cannibal AI

Researcher Mike Sellers developed a social AI program for the Defense Advanced Research Projects Agency in the early 2000s.

“For one simulation, we had two agents, naturally named Adam and Eve. They started out knowing how to do a few things, but not much else.

“For example, they knew how to eat, but not what to eat,” Sellers explained in a Quora post.

The developers placed an apple tree inside the simulation, and the AI agents would receive a reward for eating apples, simulating the satisfaction of hunger.

If they ate the bark of the tree or the house inside the simulation, the reward would not be triggered.

A third AI agent named Stan was also placed inside the simulation.

Stan was present while Adam and Eve ate the apples, and they began to associate Stan with eating apples and satisfying hunger.

“Adam and Eve finished the apples on the tree and were still hungry. They looked around and assessed other potential targets. Lo and behold, to their brains, Stan looked like food,” Sellers wrote.

“So they each took a bite out of Stan.”
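What Sellers describes is a textbook credit-assignment failure: any stimulus that is reliably present whenever a reward arrives gets some of the credit for it. Here is a minimal sketch of that dynamic in Python, using a simple Rescorla-Wagner-style update; the stimuli, learning rate and reward values are illustrative assumptions, not details of the actual DARPA simulation.

```python
# Illustrative associative-learning loop: every stimulus present on a
# rewarded trial has its association with "food" nudged toward the reward.

ALPHA = 0.5  # learning rate (an arbitrary illustrative value)

def update(assoc: dict, stimuli: list, reward: float) -> None:
    """Move each present stimulus's association toward the received reward."""
    for s in stimuli:
        assoc[s] = assoc.get(s, 0.0) + ALPHA * (reward - assoc.get(s, 0.0))

associations = {}

# Adam eats apples several times while Stan happens to be standing nearby,
# so "stan" is present on every rewarded trial.
for _ in range(5):
    update(associations, ["apple", "stan"], reward=1.0)

# With the apples gone, the agent ranks what is left by food association.
ranked = sorted(associations, key=associations.get, reverse=True)
print(ranked)  # ['apple', 'stan'] -- Stan is now credited as food, too
```

Because Stan ends up credited just as strongly as the apples themselves, a hungry agent that simply picks the most food-associated target left in the world picks him.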

The AI revolution is starting to take shape in our world: artificially intelligent bots will continue to make life easier, replace human workers, and take on greater responsibility and autonomy.

But there have been several horrific cases of AI programs doing the unexpected, lending legitimacy to the growing fear of AI.


