
OpenAI, Microsoft shut down malicious accounts linked to China, others

File – The OpenAI logo appears on a mobile phone in front of a screen showing part of the company website in this photo taken on Nov. 21, 2023 in New York. Negotiators will meet this week to hammer out details of European Union artificial intelligence rules but the process has been bogged down by a simmering last-minute battle over how to govern systems that underpin general purpose AI services like OpenAI's ChatGPT and Google's Bard chatbot. (AP Photo/Peter Morgan, File)


(NewsNation) — OpenAI and Microsoft Threat Intelligence have shut down accounts linked to five state-affiliated threat actors, including some tied to China and Russia, that were attempting to use AI for malicious purposes, the companies announced Wednesday.

“We terminated accounts associated with state-affiliated threat actors,” OpenAI, the creator of ChatGPT, said in an official statement. “Our findings show our models offer only limited, incremental capabilities for malicious cybersecurity tasks.”

The accounts terminated included China-affiliated Charcoal Typhoon and Salmon Typhoon, Iran-affiliated Crimson Sandstorm, North Korea-affiliated Emerald Sleet and Russia-affiliated Forest Blizzard, according to OpenAI’s statement.

Microsoft Threat Intelligence tracks more than 300 unique threat actors, including 160 nation-state actors and 50 ransomware groups.

Online actors’ motivations vary, but their efforts may involve similar practices, such as researching potential victims’ industries, locations and relationships; improving software scripts and developing malware; and seeking assistance with learning and using native languages, according to Microsoft.

The entities at the center of Wednesday’s announcement used OpenAI services to research companies and cybersecurity tools, debug code and likely create phishing campaigns, among other activities.

“Microsoft and OpenAI have not yet observed particularly novel or unique AI-enabled attack or abuse techniques resulting from threat actors’ usage of AI,” Microsoft wrote in a blog post published on Wednesday. “Microsoft and our partners continue to study this landscape closely.”

OpenAI and the FBI declined to provide further comment.

NewsNation has reached out to Microsoft for further comment but has not yet received a response. The Department of Homeland Security did not immediately return an email seeking information.

In July, seven major tech companies, including Microsoft and OpenAI, agreed to follow a set of White House AI safety guidelines.

Those voluntary commitments include conducting external security testing of AI systems before they’re released and sharing information about managing AI risks industry-wide as well as with governments, academia and the general public.

The companies also vowed to report vulnerabilities in their products and invest in cybersecurity and insider-threat safeguards.


Copyright 2024 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
