2023: A breakout year for artificial intelligence

FILE - The OpenAI logo is seen on a mobile phone in front of a computer screen which displays output from ChatGPT, Tuesday, March 21, 2023, in Boston. As schools across the country debate banning AI chatbots in 2023, some math and computer science teachers are embracing them as just another tool. (AP Photo/Michael Dwyer, File)

(NewsNation) — When the history books are written, 2023 may be remembered as the year artificial intelligence went mainstream.

With OpenAI’s release of ChatGPT just over a year ago, everyday Americans got their first look at a technology that’s expected to change the way we learn, work and interact with each other.

Optimistic proponents of generative AI — a type of AI that uses massive amounts of training data to create new content like images, text and videos — argue it could add trillions of dollars in value to the global economy. Other science and tech industry leaders are more cautious and worry the AI revolution poses an existential threat to humanity.

Here’s how some of the optimism — and pessimism — has come to pass over the last year.

The Good: Ultimate efficiency

OpenAI launched ChatGPT in November 2022, marking the start of generative AI’s breakout year.

Users were amazed at the AI chatbot’s ability to have humanlike conversations that came across as both knowledgeable and informal.

Students realized they could produce entire term papers in a matter of seconds. Others sought life advice, peppering the app with philosophical questions. In one case, ChatGPT even led a church service attended by more than 300 people.

Within just a few months, the viral AI chatbot had 100 million monthly active users, making it the fastest-growing consumer app ever.

From there, the race was on.

Microsoft introduced its new AI-powered Copilot in March, and Bing got an AI upgrade as well. Google unveiled Bard, and Meta rolled out its open-source large language model, Llama 2. Even the CIA started building a chatbot to help with investigations.

In November, Silicon Valley startup Humane offered a glimpse into the future of AI-powered hardware with its $700 AI Pin, which is designed to replace the smartphone. The wearable device effectively allows users to attach an AI chatbot to their clothing.

Other AI models, like OpenAI’s DALL-E and Stability AI’s Stable Diffusion — which generate images from text — have continued to improve over the past year.

The hopeful view is that AI will automate the tasks nobody wants to do, which will free up employees to do the type of work they enjoy. In other words, AI will complement, rather than replace, humans.

An English teacher, for example, may spend less time writing tests and more time working directly with students. A financial analyst could focus less on crunching numbers and more on drawing insights.

With that synergy will come a new era of efficiency.

Generative artificial intelligence is poised to “unleash the next wave of productivity” and could add up to $4.4 trillion in value to the global economy annually, according to a report from the McKinsey Global Institute.

The Bad: A productivity booster or a job killer?

There’s no question AI will change how people work, but it’s less clear whether that impact will be positive or negative. It could be both.

Anywhere from 60 to 70% of the activities that make up an employee’s workday have the potential to be automated, McKinsey estimated.

That potential has many Americans feeling anxious about AI and their jobs.

Most people think AI will negatively affect the U.S. job market, with 75% saying it will decrease the total number of jobs over the next decade, a recent Gallup poll found. Nearly 70% of college graduates and 80% of those without a degree felt that way.

Historically, advances in workplace automation have focused on physical tasks, but AI represents a shift toward cognitive automation. Those with a bachelor’s degree or higher are more than twice as likely to work in jobs most exposed to AI as those with only a high school diploma, Pew found.

Depending on which analysis you draw from, anywhere from 19 to 27% of jobs rely on skills that could be easily automated with AI.

Additionally, doctors are worried about the role of artificial intelligence in medicine. Nearly two-thirds of physicians are concerned about AI influencing diagnosis and treatment decisions, according to a recent Medscape survey.

There are also privacy concerns. AI algorithms rely on vast troves of training data, but that data has to come from somewhere. In July, Google updated its privacy policy with new language, making it clear that the company uses “publicly available information” to help train its artificial intelligence models.

An article in Scientific American put it bluntly: “Companies are training their generative AI models on vast swathes of the Internet — and there’s no real way to stop them.”

Most Americans, 70%, don’t trust companies to use AI responsibly, according to an October Pew report.

AI could also change warfare, and some have already signaled the start of a “Tech Cold War” with China. Experts have warned that Chinese-owned apps like TikTok could be used to gather data from Americans that could be used to train AI models. The company has repeatedly denied those accusations.

The Ugly: Deepfakes a rising concern

While many are worried about how their data is used on the back end, others have seen how AI tools can be abused on the front end. Generative AI apps have become more powerful and easier to use, making it possible for nearly anyone to create harmful content.

Experts have sounded the alarm about the increased risks of “sharenting,” a term for when parents publicize their children’s private lives on social media. The fear, as explained by the top prosecutors in all 50 states, is that AI tools can create so-called “deepfake” child pornography using images posted online.

Also referred to as “synthetic media,” deepfakes are a form of digitally altered content that can make it look like someone said or did something that never happened. AI algorithms have made deepfakes more realistic and, at times, hard to distinguish from reality.

In November, a nude photo controversy at a New Jersey high school amplified those concerns, potentially foreshadowing the future of cyberbullying.

Students at Westfield High School told the Wall Street Journal that one or more classmates used an online tool to generate pornographic AI images of female classmates, then shared them in group chats.

A NewsNation investigation showed just how pervasive AI “nudifier” tools are. A search on TikTok revealed dozens of videos pushing websites that allow users to “remove clothes from any picture,” including those of their “crush.”

Over the past year, several major platforms have tried to keep pace by updating their AI policies. Last month, YouTube announced a new rule requiring creators to disclose when they use generative artificial intelligence to make realistic-looking videos.

In March, TikTok banned deepfakes of private figures and young people. Those updates built on the company’s existing guidelines, which already prohibited deepfakes that mislead viewers about real-world events and cause harm.