https://auntresodamid.com/iJugHxINePLH1VY/96561
Ex-Google CEO Eric Schmidt warns AI models can be hacked: ‘They learn how to kill someone’

Ex-Google CEO Eric Schmidt warns AI models can be hacked: ‘They learn how to kill someone’

Sharing is caring!


Google’s former CEO Eric Schmidt spoke at the Sifted Summit on Wednesday 8, October.

Bloomberg | Bloomberg | Getty Images

Google‘s former CEO Eric Schmidt has issued a stark reminder about the dangers of AI and how susceptible it is to being hacked.

Schmidt, who served as Google’s chief executive from 2001 to 2011, warned about “the bad stuff that AI can do,” when asked whether AI is more destructive than nuclear weapons during a fireside chat at the Sifted Summit

“Is there a possibility of a proliferation problem in AI? Absolutely,” Schmidt said Wednesday. The proliferation risks of AI include the technology falling into the hands of bad actors and being repurposed and misused.

“There’s evidence that you can take models, closed or open, and you can hack them to remove their guardrails. So in the course of their training, they learn a lot of things. A bad example would be they learn how to kill someone,” Schmidt said.

“All of the major companies make it impossible for those models to answer that question. Good decision. Everyone does this. They do it well, and they do it for the right reasons. There’s evidence that they can be reverse-engineered, and there are many other examples of that nature.”

AI systems are vulnerable to attack, with some methods including prompt injections and jailbreaking. In a prompt injection attack, hackers hide malicious instructions in user inputs or external data, like web pages or documents, to trick the AI into doing things it’s not meant to do — such as sharing private data or running harmful commands

Jailbreaking, on the other hand, involves manipulating the AI’s responses so it ignores its safety rules and produces restricted or dangerous content.

In 2023, a few months after OpenAI’s ChatGPT was released, users employed a “jailbreak” trick to circumvent the safety instructions embedded in the chatbot.

This included creating a ChatGPT alter-ego called DAN, an acronym for “Do Anything Now,” which involved threatening the chatbot with death if it didn’t comply. The alter-ego could provide answers on how to commit illegal activities or list the positive qualities of Adolf Hitler.

Schmidt said that there isn’t a good “non-proliferation regime” yet to help curb the dangers of AI.

AI is ‘underhyped’



Source link

Oval@3x 2

Don’t miss latest news!

Select list(s):

We don’t spam! Read our [link]privacy policy[/link] for more info.

🕶 Relax!

Put your feet up and let us do the hard work for you. Sign up to receive our latest news directly in your inbox.

Select Your Choice:

We’ll never send you spam or share your email address.
Find out more in our Privacy Policy.

🕶 Relax!

Put your feet up and let us do the hard work for you. Sign up to receive our latest news directly in your inbox.

Select Your Choice:

We’ll never send you spam or share your email address.
Find out more in our Privacy Policy.

Sharing is caring!

Read More :-  AppLovin skyrockets 33% after earnings beat, strong guidance as analysts raise price target

Leave a Comment

Your email address will not be published. Required fields are marked *

Scroll to Top