
As companies implement AI, others focus on AI security

  • New tech helps companies crack down on issues with artificial intelligence
  • The first AI firewall is designed to prevent inaccuracies and data leaks
  • Some businesses have blocked employees from using AI at work

FILE – Text from the ChatGPT page of the OpenAI website is shown in this photo, in New York, Feb. 2, 2023. (AP Photo/Richard Drew, File)

 


(NewsNation) — As companies rush to implement cutting-edge artificial intelligence systems, others are rolling out tools to protect those same systems from themselves.

Earlier this week, Arthur — an AI monitoring platform — introduced a first-of-its-kind firewall for “large language models” (LLMs).

LLMs, a type of artificial intelligence that learns skills by analyzing massive amounts of text, have already been shown to boost productivity, but they also come with vulnerabilities.

When OpenAI released its artificial intelligence chatbot, ChatGPT, in November 2022, users quickly realized it could generate inaccurate and sometimes toxic responses. Those issues may not matter much for a user looking for a great recipe, but they make a big difference at the corporate level.

“In a business context, where there’s billions of dollars at stake, we better be very sure (the AI response) is accurate before we return it to the user,” said Arthur CEO Adam Wenchel.

Rather than accept that mistakes sometimes happen, Arthur’s platform can intervene and block certain prompts where errors are likely. The stakes are even higher in a healthcare or legal setting, where lives are on the line, Wenchel pointed out.
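In rough terms, that kind of intervention means screening a prompt before the model ever sees it. The short Python sketch below is purely illustrative, not Arthur’s published method; the keyword heuristic, risk score and threshold are hypothetical stand-ins for whatever scoring model a real firewall would use:

    # Hypothetical prompt-side screen: refuse to forward prompts the
    # firewall judges likely to produce unreliable answers. The keyword
    # heuristic is a placeholder, not Arthur's actual scoring model.
    HIGH_STAKES_TOPICS = ("diagnose", "prescribe", "legal advice")

    def prompt_risk(prompt: str) -> float:
        """Toy risk score: rises when a prompt touches high-stakes topics."""
        hits = sum(topic in prompt.lower() for topic in HIGH_STAKES_TOPICS)
        return min(1.0, hits / 2)

    def allow_prompt(prompt: str, threshold: float = 0.5) -> bool:
        """Return True if the prompt may be forwarded to the LLM."""
        return prompt_risk(prompt) < threshold

    print(allow_prompt("Summarize this quarterly earnings report."))        # True: forwarded
    print(allow_prompt("Diagnose my symptoms and prescribe a treatment."))  # False: blocked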

There are also privacy and data leak concerns.

As companies implement AI, they’ll have to use massive troves of data to train their systems. For example, a bank’s model may include investment data with sensitive information that neither the public nor the company’s own employees should be able to access.

Arthur’s firewall helps filter that data by analyzing the AI’s response before it’s sent to the user. Wenchel said the tool can flag responses that include Social Security numbers or individual health records and block the AI from presenting that information.
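The outbound side can be pictured the same way. This sketch, again hypothetical rather than Arthur’s actual detection logic, scans a model’s response for patterns such as Social Security numbers and withholds it if anything matches:

    import re

    # Illustrative patterns for the sensitive data the article mentions:
    # Social Security numbers, plus a stand-in for health record identifiers.
    SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
    MRN_PATTERN = re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE)

    def screen_response(llm_response: str) -> str:
        """Check an LLM's output for sensitive data before the user sees it."""
        if SSN_PATTERN.search(llm_response) or MRN_PATTERN.search(llm_response):
            # Withhold the whole response rather than risk a partial leak.
            return "Response withheld: it contained sensitive personal data."
        return llm_response

    print(screen_response("The account holder's SSN is 123-45-6789."))
    # -> Response withheld: it contained sensitive personal data.
    print(screen_response("Diversified index funds reduce single-stock risk."))
    # -> passes through unchanged

A real deployment would more likely redact or regenerate than refuse outright, but the control point, sitting between the model and the user, is the same one the firewall occupies.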

The monitoring platform is already being used by the Department of Defense and some of the top U.S. banks. Like the artificial intelligence systems it’s designed to protect, Arthur’s platform is continuously learning and improving as it’s used.

The additional security tools will come as welcome news for companies that have already grown more cautious about AI.

Just this week, Samsung banned employees in its electronics division from using generative AI tools like ChatGPT after staff uploaded sensitive code to the platform, Bloomberg reported. Earlier in the year, some Wall Street banks did the same.

But those interventions may end up being temporary. With additional AI security in place, Wenchel thinks businesses will feel more comfortable integrating the technology going forward.

“It’s a whole new world,” he said. “Companies that don’t normally move with incredible speed are moving pretty quickly, which is pretty amazing to see.”

Copyright 2024 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.