Co-founder of OpenAI warns that ‘superintelligence’ must be controlled or risk ‘human extinction’

A co-founder of OpenAI, the company behind ChatGPT, warned in a recent blog post about the dangers of superintelligent artificial intelligence.

“Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems,” OpenAI co-founder Ilya Sutskever wrote in a blog post co-written by the head of alignment, Jan Leike.

“But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction,” the two added.

Furthermore, they believe this “superintelligence” could manifest itself within a decade — which is why figuring out ways of “managing these risks” is so important.

The problem, of course, is that currently there’s no “solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue.”

“Our current techniques for aligning AI, such as reinforcement learning from human feedback, rely on humans’ ability to supervise AI. But humans won’t be able to reliably supervise AI systems much smarter than us, and so our current alignment techniques will not scale to superintelligence. We need new scientific and technical breakthroughs,” the blog post continues.

The good news, they say, is that OpenAI is “assembling a team of top machine learning researchers and engineers to work on this problem.”

“We are dedicating 20% of the compute we’ve secured to date over the next four years to solving the problem of superintelligence alignment. Our chief basic research bet is our new Superalignment team, but getting this right is critical to achieve our mission and we expect many teams to contribute, from developing new methods to scaling them up to deployment,” the blog post notes.

Sutskever and Leike acknowledge in their post that their mission is an “incredibly ambitious” one that’s “not guaranteed to succeed,” but stress that they’re “optimistic that a focused, concerted effort can solve the problem.”

The pair isn’t the first to raise concerns about AI. Back in May, former Google CEO Eric Schmidt warned that AI could be “misused by evil people” one day to cause harm, even death.

Speaking at The Wall Street Journal’s CEO Council Summit in London, Schmidt specifically warned that AI is an “existential risk” to humanity.

“And existential risk is defined as many, many, many, many people harmed or killed,” he bluntly said.

“There are scenarios not today, but reasonably soon, where these systems will be able to find zero-day exploits in cyber issues or discover new kinds of biology. Now, this is fiction today, but its reasoning is likely to be true. And when that happens, we want to be ready to know how to make sure these things are not misused by evil people,” he added.

Zero-day exploits take advantage of security vulnerabilities or software flaws that are unknown to the maker of the affected software or application. Because no patch exists, malicious actors can exploit these flaws to gain unauthorized access to computer systems, run malicious code, or bypass security measures.

Schmidt’s warning was itself preceded by one from a group of researchers and “experts.”

“Artificial intelligence poses ‘an existential threat to humanity’ akin to nuclear weapons in the 1980s and should be reined in until it can be properly regulated, an international group of doctors and public health experts warned,” Axios reported in early May.

Writing for BMJ Global Health, an online journal, the doctors and “experts” noted, “With exponential growth in AI research and development, the window of opportunity to avoid serious and potentially existential harms is closing.”

In other words, the technology is expanding too fast, leaving people unprepared for its more sinister and potentially deadly side effects.

The doctors and “experts” also warned that AI’s ability to quickly analyze huge quantities of data could also be misused to “further undermine democracy by causing a general breakdown in trust or by driving social division and conflict, with ensuing public health impacts.”

Flashing back further to April, former presidential candidate Andrew Yang also warned about AI.

Speaking on Fox Business Network’s “Cavuto: Coast to Coast,” Yang was asked by host Neil Cavuto about remarks billionaire Elon Musk had recently made on the risks AI technology ultimately poses to humans.

In speaking about those risks, Musk had called for a pause of at least six months on advanced AI development for the sake of safety. Yang agreed.

“I think he’s right to be cautious and concerned. We all should be concerned. I was talking to my friend about this, and she said, hey, what’s the worst that could happen? And I said, well, unwarranted military conflict, mass identity theft, spoofing of people by voices of their loved ones giving them a call,” he told Cavuto.

“I mean, all of these things are now on the table. Science fiction-type scenarios are here with us and the incentives for these tech companies are to go as fast as possible because you’re in a bit of a race. And in that kind of context, bad things are likely to happen, so I think Elon’s spot-on for calling for a pause,” he added.

Vivek Saxena