Ex-Google CEO says evildoers could use AI to ‘harm or kill many, many, many, many people’

Former Google CEO Eric Schmidt has warned that artificial intelligence (AI) could be “misused by evil people” to cause harm, even death.

Speaking this week at The Wall Street Journal’s CEO Council Summit in London, Schmidt specifically warned that AI is an “existential risk” to humanity.

“And existential risk is defined as many, many, many, many people harmed or killed,” he said bluntly.

“There are scenarios not today, but reasonably soon, where these systems will be able to find zero-day exploits in cyber issues or discover new kinds of biology. Now, this is fiction today, but its reasoning is likely to be true. And when that happens, we want to be ready to know how to make sure these things are not misused by evil people,” he added.

Zero-day exploits target security vulnerabilities or software flaws that are unknown to the maker of the affected software or application. Because the vendor has had no opportunity to issue a patch, malicious actors can exploit these flaws to gain unauthorized access to computer systems, carry out attacks, or bypass security measures.

Schmidt isn’t the first person to warn about the existential threat that AI poses.

“Artificial intelligence poses ‘an existential threat to humanity’ akin to nuclear weapons in the 1980s and should be reined in until it can be properly regulated, an international group of doctors and public health experts warned,” Axios reported earlier this month.

Writing for BMJ Global Health, an online journal, the doctors and “experts” noted, “With exponential growth in AI research and development, the window of opportunity to avoid serious and potentially existential harms is closing.”

In other words, the technology is expanding too fast, leaving people unprepared for its more sinister and potentially deadly side effects.

The doctors and “experts” also warned that AI’s ability to quickly analyze huge quantities of data could be misused to “further undermine democracy by causing a general breakdown in trust or by driving social division and conflict, with ensuing public health impacts.”

They also “raised concerns about the development of future weapons systems which could be capable of locating, selecting and killing ‘at an industrial scale’ without the need for human supervision,” according to Axios.

And they warned about AI’s effect on jobs.

“While there would be many benefits from ending work that is repetitive, dangerous, and unpleasant, we already know that unemployment is strongly associated with adverse health outcomes and behavior,” they wrote.

All this comes amid the release just this week of a particularly fascinating deepfake video of President Joe Biden as a “transgender woman” promoting Bud Light.

“Experts” who spoke to the Daily Mail said that while it’s pretty easy to spot these deepfakes at the moment, it may become virtually impossible in the near future.

“It is becoming increasingly difficult to identify disinformation, particularly sophisticated AI-generated deepfakes,” Cayce Myers, a professor in Virginia Tech’s School of Communication, said.

“Spotting this disinformation is going to require users to have more media literacy and savvy in examining the truth of any claim. The cost barrier for generative AI is also so low that now almost anyone with a computer and internet has access to AI,” he added.

Myers further emphasized the role that everybody — including tech companies and everyday citizens — will have to play to prevent deepfakes from disrupting the 2024 presidential election, not to mention future elections.

“Examining sources, understanding warning signs of disinformation, and being diligent in what we share online is one personal way to combat the spread of disinformation. However, that is not going to be enough,” he said.

“Companies that produce AI content and social media companies where disinformation is spread will need to implement some level of guardrails to prevent the widespread disinformation from being spread,” he added.

Otherwise, there could be problems — much like what happened this week, in fact.

“A falsified photograph of an explosion near the Pentagon spread widely on social media Monday morning, briefly sending US stocks lower in possibly the first instance of an AI-generated image moving the market,” according to Bloomberg.

“It soon spread on Twitter accounts that reach millions of followers, including the Russian state-controlled news network RT and the financial news site ZeroHedge, a participant in the social-media company’s new Twitter Blue verification system.”

If a simple deepfake image — not even a video — could briefly send the stock market lower, imagine what else AI technology could conceivably do …

Vivek Saxena
