If 2023 was a year of wonder about artificial intelligence, 2024 was the year to try to get that wonder to do something useful without breaking the bank.
There was a "shift from putting out models to actually building products", said Arvind Narayanan, a Princeton University computer science professor and co-author of the new book AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference.
The first 100 million or so people who experimented with ChatGPT upon its release two years ago actively sought out the chatbot, finding it amazingly helpful at some tasks or laughably mediocre at others.
Now such generative AI technology is baked into a growing number of technology services whether we're looking for it or not: AI-generated answers in Google search results, for instance, or new AI techniques in photo-editing tools.
"The main thing that was wrong with generative AI last year is that companies were releasing these really powerful models without a concrete way for people to make use of them," said Narayanan.
"What we're seeing this year is gradually building out these products that can take advantage of those capabilities and do useful things for people."
At the same time, since OpenAI released GPT-4 in March 2023 and competitors introduced similarly performing AI large language models, these models have stopped getting significantly "bigger and qualitatively better", resetting overblown expectations that AI was racing every few months to some kind of better-than-human intelligence, Narayanan said.
That's also meant that the public discourse has shifted from "is AI going to kill us?" to treating it like a normal technology, he said.
Some workers wonder whether AI tools will be used to supplement their work or to replace them as the technology continues to grow.
Tech company Borderless AI has been using an AI chatbot from Cohere to write up employment contracts for workers in Turkiye or India without the help of outside lawyers or translators.
Video game performers with the Screen Actors Guild-American Federation of Television and Radio Artists, who went on strike in July, said they feared AI could reduce or eliminate job opportunities because it could be used to replicate one performance into a number of other movements without their consent.
Concerns about how movie studios will use AI helped fuel last year's film and television strikes by the union, which lasted four months.
Game companies have also signed side agreements with the union that codify certain AI protections in order to keep working with actors during the strike.
Musicians and authors have voiced similar concerns about AI scraping their voices and books.
But generative AI still can't create unique work or "completely new things", said Walid Saad, a professor of electrical and computer engineering and AI expert at Virginia Tech.
"We can train it with more data so it has more information. But having more information doesn't mean you're more creative.
"As humans, we understand the world around us, right? We understand the physics.
"You understand if you throw a ball on the ground, it's going to bounce. AI tools don't understand the world."
Saad pointed to a meme about AI as an example of that shortcoming.
When someone prompted an AI engine to create an image of salmon swimming in a river, he said, the AI instead produced a photo of a river filled with the cut salmon fillets sold in grocery stores.
"What AI lacks today is the common sense that humans have, and I think that is the next step," he said.
The authors are AP technology writers