NewsNation Now

Companies using AI to monitor employee messages: Report


(NewsNation) — Major U.S. employers, including Walmart, Delta Air Lines, T-Mobile, Chevron and Starbucks, are using an artificial intelligence startup to monitor employee messages, according to a recent CNBC report.

European brands such as Nestle and AstraZeneca are also using the services of AI firm Aware, which deploys dozens of models built to read text and process messages.


These AI models can see how employees in certain age groups or geographies are responding to corporate policies and marketing campaigns, Jeff Schumann, Aware’s co-founder and CEO, told CNBC.

The models can also identify bullying, harassment, discrimination, noncompliance, pornography, nudity and other behaviors, Schumann said. While the analytics tools that gauge employee sentiment and toxicity can’t flag individual employees by name, Schumann said a separate eDiscovery tool can do so if it detects “extreme threats” or other risk behaviors.

Starbucks, Walmart, T-Mobile and Chevron use Aware for governance, risk and compliance, according to the company. This kind of work makes up about 80% of Aware’s business, CNBC reported.

An AstraZeneca representative told CNBC that the company uses the eDiscovery product but not the sentiment or toxicity analytics. Delta uses both Aware’s analytics and eDiscovery to track trends and sentiment as a way of gathering employee feedback, and to maintain its legal records.

Walmart, T-Mobile, Chevron, Starbucks and Nestle did not respond to CNBC’s request for comment.

These aren’t the only companies using AI to see what their employees are doing. Business Insider published an investigation showing that JPMorgan Chase used an internal tool to track employees’ office attendance, calls and calendars. Autopilot workers at Tesla’s New York plant, meanwhile, told Bloomberg that their keystrokes are tracked to ensure they’re “actively working.”

Speaking about employee surveillance AI in general, Jutta Williams, co-founder of the AI accountability nonprofit Humane Intelligence, told CNBC that “a lot of this becomes thought crime.”

“This is treating people like inventory in a way I’ve not seen,” she told the outlet.