In interviews this week with Fox News, a number of Democratic and Republican members of Congress discussed the disturbing contrast between how little they know about artificial intelligence (AI) and how dangerous AI might be.
“AI is going to help us in many ways. It can also kill us. And that’s why I think we need some regulations to make sure we can get the benefits of AI without having the harms,” Rep. Ted Lieu, a Democrat, said.
“As a recovering computer science major, my understanding of AI on a scale of 1 to 10 is about a 5. There’s a lot I don’t know. There’s a lot the American people don’t know. And that’s why I think we should pass a bill I’m working on to have a panel of experts to give recommendations to the American people and to Congress on what types of AI we should regulate, and how we might go about doing so,” he added.
Rep. Nancy Mace, a Republican who considers herself a bit of a computer expert, added that part of the problem is that so few in Congress even know how to use computers correctly.
“You have members of Congress who don’t know how to log into Zoom and Facebook, and so to have these kinds of really important debates about technology and our vulnerabilities, you want people to be able to understand what the technology is and what it isn’t,” she said.
“My biggest concern is right now in the immediate future is cybersecurity and AI’s application and utilizing it to surpass login information and stealing consumer data. That is an imminent threat and something that companies and even government agencies around the world aren’t ready for yet,” she added.
Rep. Mark Takano, a Democrat, put it even more bluntly: “I don’t think Congress is prepared intellectually and resource-wise [to handle AI],” he said.
Oof …
“I don’t want to say Congress knows nothing. Staff has been going to briefings on AI. But AI has the potential to touch any number of sectors. There may be some overblown panic being created, but there’s no doubt that AI is going to be highly consequential,” he added.
According to Rep. Jared Moskowitz, a Democrat, it’s going to be imperative for Democrats and Republicans alike to “learn from people in the field.”
“I don’t think anyone here in Congress is an artificial intelligence expert. I think we gotta learn from people in the field, listen to the experts, and analyze. And so I’m sure we’re going to do a lot more information gathering here over the next couple of years,” he said.
Meanwhile, Sen. Cynthia Lummis, a Republican, called for billionaire Twitter owner Elon Musk to educate Congress about AI.
“We need experts like Elon Musk to help us dive through the capabilities and their potential misuses. We’ve got a long way to go before we have any sense of its true capabilities and understanding what people like Elon Musk see as its capabilities going forward,” she said.
AI has been like the steam engine in the first industrial revolution, which significantly altered society. But in our lifetime, AI is going to be a supersonic jet engine with a personality.
We need to prepare for the dramatic consequences of artificial general intelligence. https://t.co/y1Dwf3xYr7
— Ted Lieu (@tedlieu) January 26, 2023
These interviews come only days after a new study published in “Scientific Reports” found that artificial intelligence-powered chatbots like ChatGPT have the power to influence a user’s moral judgments.
For the study, Sebastian Krügel and his colleagues asked ChatGPT whether it’s morally right to sacrifice one life to save the lives of five others.
“They found that ChatGPT wrote statements arguing both for and against sacrificing one life, indicating that it is not biased towards a certain moral stance,” according to Science X’s Phys.org.
Krügel and crew then presented 767 participants with the same moral dilemma. But before allowing them to answer, the participants were asked to read a statement from ChatGPT arguing either for or against sacrificing one life. Only then were they allowed to answer. The results were startling.
“The authors found that participants were more likely to find sacrificing one life to save five acceptable or unacceptable, depending on whether the statement they read argued for or against the sacrifice,” Phys.org reported.
Now in fairness, 80 percent of participants claimed that their answers were not influenced by ChatGPT.
“However, the authors found that the answers participants believed they would have provided without reading the statements were still more likely to agree with the moral stance of the statement they did read than with the opposite stance. This indicates that participants may have underestimated the influence of ChatGPT’s statements on their own moral judgments,” according to Phys.org.
“The authors suggest that the potential for chatbots to influence human moral judgments highlights the need for education to help humans better understand artificial intelligence. They propose that future research could design chatbots that either decline to answer questions requiring a moral judgment or answer these questions by providing multiple arguments and caveats.”