DHS deploys A.I. surveillance tool to detect ‘sentiment and emotion’ in online posts

It would appear that the U.S. Government has spent millions of dollars to move this nation closer to an Orwellian nightmare.

According to a report from 404 Media, “Customs and Border Protection (CBP), part of the Department of Homeland Security, has bought millions of dollars worth of software from a company that uses artificial intelligence to detect ‘sentiment and emotion’ in online posts.”

The outlet cites in its report “a cache of documents” it obtained via “Freedom of Information Act requests with CBP and other U.S. law enforcement agencies.”

“CBP told 404 Media it is using technology to analyze open source information related to inbound and outbound travelers who the agency believes may threaten public safety, national security, or lawful trade and travel,” the outlet reveals. “In this case, the specific company called Fivecast also offers ‘AI-enabled’ object recognition in images and video, and detection of ‘risk terms and phrases’ across multiple languages, according to one of the documents.”

According to its website, Fivecast “is a global provider of open-source intelligence solutions, delivering targeted data collection and AI-enabled risk analytics.”

“Delivering more than monitoring,” the company states, “Fivecast solutions explore unprecedented amounts of digital data and deliver deep, actionable insights.”

While the company says it collects data from large social media platforms such as Facebook, it clearly wishes to appeal to those looking for potentially problematic posts on traditionally conservative websites.

“Marketing materials promote the software’s ability to provide targeted data collection from big social platforms like Facebook and Reddit,” 404 reports, “but also specifically name smaller communities like 4chan, 8kun, and Gab.”

“To demonstrate its functionality,” 404 continues, “Fivecast promotional materials explain how the software was able to track social media posts and related Persons-of-Interest starting with just ‘basic bio details’ from a New York Times Magazine article about members of the far-right paramilitary Boogaloo movement.”

CBP has relied on AI to monitor “travelers and targets, which can include U.S. citizens,” the outlet notes.

“In May, I revealed CBP’s use of another AI tool to screen travelers which could link people’s social media posts to their Social Security number and location data,” writes the report’s author, Joseph Cox. “This latest news shows that CBP has deployed multiple AI-powered systems, and provides insight into what exactly these tools claim to be capable of while raising questions about their accuracy and utility.”

In an email to 404, Patrick Toomey, deputy director of the ACLU’s National Security Project, called the AI tools “junk science.”

“CBP should not be secretly buying and deploying tools that rely on junk science to scrutinize people’s social media posts, claim to analyze their emotions, and identify purported ‘risks,’” Toomey wrote.

Cox details one of the documents the outlet obtained, and its content is hair-raising:

One document obtained by 404 Media marked “commercial in confidence” is an overview of Fivecast’s “ONYX” product.

In it Fivecast says its product can be used to target individuals or groups, single posts, or events. As well as collecting from social media platforms big and small, Fivecast users can also upload their own “bulk” data, the document says.

Fivecast says its tool has been built “in consultation” with Five Eyes law enforcement and intelligence agencies, those being agencies from the U.S., United Kingdom, Canada, Australia, and New Zealand. Specifically on building “person-of-interest” networks, the tool “is optimized for this exact requirement.”


The emotional state of the users — feelings such as “anger,” “disgust,” “fear,” “joy,” “sadness” and “surprise” — can be detected over time, according to charts contained in the Fivecast document.

“One chart shows peaks of anger and disgust throughout an early 2020 timeframe of a target, for example,” 404 reports.

“Logistical difficulties of AI assessing human emotion aside, this would theoretically open the door for the government to surveil and censor not just the substance of speech, but also the alleged emotion behind that speech (which could potentially at some point be admissible in court to impugn the intent/motive of defendants),” wrote Ben Bartee for PJ Media. “It’s almost impossible to overestimate the dystopian applications of this technology, which for obvious reasons governments around the world are eager beavers to adopt.”

“The public knows far too little about CBP’s Counter Network Division, but what we do know paints a disturbing picture of an agency with few rules and access to an ocean of sensitive personal data about Americans,” Toomey told 404. “The potential for abuse is immense.”

Melissa Fine
