
5 Biggest Issues AI Needs to Figure Out

If you’ve been keeping up with media coverage of generative AI, large language models (LLMs) and AI in general, you might have noticed an increasing number of articles and posts with a negative slant. Whether they are driven by workers’ fears of being replaced, investors’ fears of not reaping the returns they expected, or simply where AI currently sits on the normal development and adoption cycle for a new technology, the news about AI is less hopeful than it has been at any time since OpenAI released the first public version of ChatGPT in November 2022.

Some experts say we’re heading into (or are already mired in) Gartner’s “trough of disillusionment.” Maybe we’re there with generative AI and LLMs, maybe we’re not. That AI can be transformative, however, is obvious. That it could solve heretofore intractable problems seems likely. That it could end up as a net positive for society is possible. But clearly there are challenges the business world and policymakers will need to address before AI can achieve those things and reach the potential suggested by the early reports. Despite those challenges, though, the push to advance AI and its applications doesn’t seem likely to end.

Here are five of the biggest challenges AI needs to solve, or at least make progress on, to move beyond the trough of disillusionment and create the most value and the most positive outcomes it can.

Trust

AI has a trust problem. Hallucinations by LLMs, models biased by their training data, and privacy and security concerns all undermine the trust humans place in AI. Many of those problems stem from the complexity of the models and our inability to understand exactly how they work; the lack of transparency into how models produce their output is a main culprit.

Explainable AI is a set of methods and processes that let humans understand how a model arrived at the output it produced. Until humans can better understand how LLMs reach their answers, lack of trust will continue to hinder adoption, and explainable AI is one way to address that challenge.
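
To make that concrete, here is a minimal sketch of one model-agnostic explainability technique, permutation importance, applied to a small scikit-learn classifier. This is not how explainability is built into LLMs; the dataset and model are stand-ins chosen purely to show what “explaining” a model’s output can look like in practice.

```python
# A minimal sketch of one explainability technique: permutation importance.
# A simple classifier stands in here for a far more complex AI model.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# the bigger the drop, the more the model relied on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five features the model leaned on most.
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: p[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Attribution summaries like this give a human reviewer some insight into why a model produced a particular answer, which is exactly the gap explainable AI tries to close for larger, more opaque models.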

Energy

While trust is a problem of human perception, there are significant technical and manufacturing challenges hindering the expansion of AI. One of the biggest is power consumption. The ability to crunch ever-larger amounts of data depends on building more data centers, which in turn consume more electricity. A recent Goldman Sachs report estimated that data center power demand will increase by 160 percent over the next six years.

“At present, data centers worldwide consume 1-2% of overall power, but this percentage will likely rise to 3-4% by the end of the decade,” the authors wrote. “In the US and Europe, this increased demand will help drive the kind of electricity growth that hasn’t been seen in a generation.”
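
As a rough back-of-envelope check (our arithmetic, not Goldman Sachs’), those two figures line up: a 160 percent increase means multiplying today’s share by roughly 2.6.

```python
# Back-of-envelope check: does a 160% increase square with 1-2% rising to 3-4%?
current_share_low, current_share_high = 0.01, 0.02  # data centers' share of power today
growth_factor = 1 + 1.60                            # a 160% increase is a factor of 2.6

print(f"Low end:  {current_share_low * growth_factor:.1%}")   # 2.6%
print(f"High end: {current_share_high * growth_factor:.1%}")  # 5.2%
```

The resulting range of roughly 2.6 to 5.2 percent comfortably brackets the 3-4 percent the authors project for the end of the decade.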

That will require attention to the power grid as well as new sources of electricity generation, and government action will be necessary for both.

Chips

Another major manufacturing issue is producing enough chips to supply the computing power needed to run AI systems. Last summer, news reports began warning of a shortage of the chips used to manufacture the graphics processing units (GPUs) central to AI development. This constraint on available computing power will be a challenging hurdle because, at the moment, one company, Nvidia, is estimated to control between 70 and 95 percent of the market for advanced AI chips.

While the shortage remains a problem, it will probably be resolved faster than the power issue. Other companies are beginning to produce GPUs, and TSMC, the Taiwanese company that manufactures chips for Nvidia and others, hopes the shortage will ease in 2026.

Training Data

In April, the Wall Street Journal published a piece called “For Data-Guzzling AI Companies, the Internet Is Too Small.” According to the report, the high-quality text data needed to train LLMs (scientific research, news articles, novels) is running out and could be exhausted within two years.

According to a report from an MIT-led research group called the Data Provenance Initiative, it’s not that the data doesn’t exist. It’s that many of the sources commonly used to train and improve models are withholding consent for AI companies to harvest their data. A New York Times article on the study noted that publishers have begun charging companies like Anthropic, Google and OpenAI for access to data and have mounted legal challenges as well.

LLM developers must address this continuing challenge to keep improving their models and the output they generate.

Business Case

A hallmark of the Gartner Hype Cycle is the irrational exuberance of the early stage. Generative AI and LLMs were deemed transformative, the media hopped on and a boom was born. In theory, the value of AI makes sense: automating processes should save businesses money, let them get by with fewer staff, eliminate many of the errors humans make and make operations more efficient. All of that should ensure that businesses investing in AI realize more value in return. In general, that does not yet seem to be the case.

In a recent Goldman Sachs research note, MIT professor Daron Acemoglu was one of several experts who think the promise of AI will not be realized by businesses for a decade or more.

“Generative AI has the potential to fundamentally change the process of scientific discovery, research and development, innovation, new product and material testing, etc. as well as create new products and platforms,” Acemoglu said. “But given the focus and architecture of generative AI technology today, these truly transformative changes won’t happen quickly and few—if any—will likely occur within the next 10 years. Every human invention should be celebrated, and generative AI is a true human invention. But too much optimism and hype may lead to the premature use of technologies that are not yet ready for prime time.”

At Data Universe, despite acknowledging the challenges, we’re a bit more bullish on AI adoption and ROI prospects in the near term. We will continue to report on the industry and what is possible and happening today in this space. Stay tuned.

