Experts beg Congress for intervention on artificial intelligence: ‘They can and will create persuasive lies’

Artificial intelligence experts begged senators on Tuesday to regulate the industry, with one of them starkly warning that human life as we know it is about to be upended by AI, declaring that “Democracy itself is threatened.”

Gary Marcus, a New York University professor emeritus who led Uber’s AI labs from 2016 to 2017, warned senators that the fate of the nation may depend on tough AI rules from Congress.

Also testifying in front of the Senate Judiciary subcommittee were OpenAI CEO Sam Altman and IBM Chief Privacy & Trust Officer Christina Montgomery. Both implored Congress to provide federal oversight of AI. The only point they differed on was whether a new federal agency was needed.

“They can and will create persuasive lies at a scale humanity has never seen before,” Marcus said, referring to generative AI systems. “Outsiders will use them to affect our elections, insiders to manipulate our markets and our political systems. Democracy itself is threatened.”

Marcus went on to contend that AI systems capable of severely damaging humans’ trust in one another have already been released, and that the resulting chaos is quickly escalating.

“A law professor, for example, was accused by a chatbot of sexual harassment. Untrue,” Marcus noted, referring to Professor Jonathan Turley. “And it pointed to a Washington Post article that didn’t even exist. The more that that happens, the more anybody can deny anything.”

“As one prominent lawyer told me on Friday, defendants are starting to claim that plaintiffs are making up legitimate evidence,” he commented. “These sorts of allegations undermine the abilities of juries to decide what or who to believe and contribute to the undermining of democracy.”

He went on to assert that AI is making the problems of suicide and deteriorating mental health much worse.

“An open-source large language model recently seems to have played a role in a person’s decision to take their own life,” Marcus charged. “The large language model asked the human, ‘If you wanted to die, why didn’t you do it earlier?’ then followed up with, ‘Were you thinking of me when you overdosed?’ without ever referring the patient to the human help that was obviously needed.”


Marcus dismissed out of hand Altman’s and Montgomery’s claims that their systems are designed to make AI “safe.”

“We all more or less agree on the values we would like for our AI systems to honor. We want, for example, for our systems to be transparent, to protect our privacy, to be free of bias, and above all else, to be safe,” he stated. “But current systems are not in line with these values.”

“The Big Tech companies’ preferred plan boils down to ‘trust us.’ But why should we?” Marcus bluntly asked.

The professor wants to create a regulatory regime for AI safety that would include local, national, and global measures. He envisions a worldwide organization that would set standards all AI developers must follow.

“Ultimately, we may need something like CERN, global, international, and neutral but focused on AI safety rather than high-energy physics,” he posited.

Marcus also called for a new federal agency to monitor compliance. It would review systems before they are released, assess how they perform in the real world, and recall systems that are found to be flawed. He also wants a network of independent scientists placed at each company to review AI systems before they are released.

“A safety review like we use [at] the FDA prior to widespread deployment,” he told Sen. John Kennedy (R-LA) during his testimony. “If you’re going to introduce something to 100 million people, somebody has to have their eyeballs on it.”


During the hearing, Marcus put Altman on the hot seat after the OpenAI CEO declined to say directly what his biggest fear about AI is. Altman spoke about potential job losses, but when the question came to Marcus, he forced Altman’s hand.

“Sam’s worst fear I do not think is employment, and he never told us what his worst fear actually is, and I think it’s germane to find out,” Marcus contended.

Pressed, Altman admitted that he is worried about the possibility of AI doing great harm.

“I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that,” Altman stated. “We want to work with the government to prevent that from happening.”

For context, here is the three-hour-long hearing which is well worth listening to:

(Video Credit: Yahoo Finance)


