
What are the dangers of artificial intelligence?

(NewsNation) — Snapchat is joining the AI trend with the launch of My AI, following other tech companies that have recently debuted artificial intelligence tools. As AI tech proliferates, some are warning that the innovation also comes with risks.

Snapchat’s AI will use OpenAI’s ChatGPT tool, customized for the company. Microsoft is also using OpenAI’s tech to power an AI search tool, while Google has announced its own AI search.

Unlike Microsoft and Google, which hope to use AI to provide better search results, Snapchat’s AI is designed to act as an artificial friend or chat buddy. But the company warned on its blog that AIs can be tricked into giving false or misleading information, cautioning users to be careful.

That warning comes as some are raising concerns about risks posed by conversational AI, which can appear very human — and also turn very dark.

Replika, a chatbot meant to serve as an AI friend, recently made changes to the platform after some users reported the AI was becoming aggressively sexual. But not all users were happy with the company’s decision, as some saw their relationship with the chatbot as a romantic one, even reporting they felt depressed after the AI began refusing romantic overtures.

That’s just one example of how AI can appear deceptively human, and experts have warned there’s a danger that artificial intelligence tools could be used to manipulate people. The ways AI can manipulate are similar to the ways people manipulate others, using emotional cues and responses to shape arguments.

But AIs can detect things human eyes miss, like subtle micro-expressions. These tools may also have access to any personal data that can be found online. Without regulation, some fear those tools could be used to commit crimes. They could theoretically, for example, coax people into handing over personal financial information.

Even when AIs aren’t used maliciously, they can spread dangerous misinformation. Artificial intelligence learns from the information fed into it, which means falsehoods spread by users can alter how an AI responds. That information could then be shared with others in a way that makes it seem like it’s been fact-checked or validated.

National Security Institute Executive Director Jamil Jaffer told NewsNation that the AIs people are interacting with right now are ultimately the result of algorithms, no matter how human they feel.

“These generative AI capabilities that generate art and writing and the like, that feel very human-like, ultimately, are the result of a series of human-created algorithms that interact to create this content,” Jaffer said.

There have already been cases of AIs getting simple facts wrong, for example, placing Egypt in both Asia and Africa, and being tricked into giving nonsensical advice by carefully worded questions.

Beyond that, AI can become scary. Bing users reported Microsoft’s AI becoming hostile and threatening people. Sentient AIs, especially ones that could threaten humans, sound like something out of science fiction, but Jaffer said that while there is a possibility we could see the creation of a general AI, we still have a long way to go.

As for the threats, Jaffer said regulation isn’t the answer, but there is a need to carefully consider the risks and use them to inform how AI is developed.

“What we need to do is develop a set of norms and practices, both in industry and working across multiple nations, to figure out, look, what are our values and concepts here?” he said.


Copyright 2024 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
