Thursday, April 18, 2024

No hallucination: Overall AI investment falls, generative AI funding soars

Microsoft (NASDAQ: MSFT) may be doubling down on artificial intelligence (AI), but the sector as a whole appears to be facing rising skepticism from investors that the technology will deliver profits anytime soon.

Tech giant Microsoft recently announced a $1.5 billion strategic investment in G42, the Abu Dhabi-based AI technology holding company. The pair are already well acquainted, but this “expanded partnership” will see Microsoft president Brad Smith join G42’s board of directors.

G42 will run its AI applications and services on Microsoft’s Azure cloud computing platform “to deliver advanced AI solutions to global public sector clients and large enterprises.” The partners also plan to work together “to bring advanced AI and digital infrastructure to countries in the Middle East, Central Asia, and Africa.”

Microsoft hasn’t been shy about opening its wallet when it comes to AI projects, but a new report from Stanford’s Institute for Human-Centered Artificial Intelligence reveals that global investment in AI declined in 2023, marking the second straight year in which investor interest in AI appears to be waning.

Private investment in AI totaled $96 billion in 2023, down from $103.4 billion in 2022, a steep drop from 2021’s roughly $130 billion. However, the number of newly funded AI firms jumped 40.6% from 2022 to 2023, suggesting that while investors are still interested in betting on who’ll be the next big thing, they’re placing smaller bets on more firms.

One segment that bucked this downward trend was generative AI, in which investment surged eightfold from 2022 to $25.2 billion. Some familiar faces, including OpenAI, Anthropic, Hugging Face, and Inflection, were among the primary beneficiaries of this outlay.

As for the top five ways that businesses are using AI: contact-center automation led the way with 26%, followed by personalization at 23%, customer acquisition and AI-based enhancement of products tied at 22%, and the creation of new AI-based products at 19%.

Among companies that have implemented AI solutions, 59% reported increased revenue, while 42% reported cost reductions from these solutions.

‘Murica leads the way

America continues to dominate AI investment, spending $67.2 billion last year, dwarfing the $11 billion spent by the EU/United Kingdom and the $7.8 billion spent by China. America also dominates the number of notable machine learning models, with 61 last year, four times that of China.

However, China claimed a clear majority (61%) of AI patents in 2022 (2023 info apparently unavailable), leaving America (21%) in the dust. The number of overall AI patent grants in 2022 was up nearly two-thirds from 2021.

The EU led the race to regulate AI, passing 32 new rules last year, seven more than the U.S. But American politicians know a publicity generator when they see one, leading to the introduction of 181 new AI-related bills last year, up from 88 in 2022. Only one of those bills actually passed, which is more a reflection of the most dysfunctional Congress in history than a lack of consensus that AI needs guardrails.

America’s AI focus probably explains why the percentage of Yanks who claim that companies using AI make them nervous rose by 11 points last year. And while a majority of most demographics believe AI will reduce the amount of time it takes to get things done, far fewer (roughly one-third overall) believe AI will improve the job market.

Sadly, more than one-third (36%) of Americans believe AI will replace their current job within the next five years. But this may cut both ways, as AI-related positions accounted for 1.6% of all job postings in 2023, down from 2% in 2022.

Basic training

One element that’s likely cooling AI investor interest is the rising cost of training large language models (LLMs). Google (NASDAQ: GOOGL) spent $191 million training its Gemini Ultra (formerly Bard), while OpenAI spent $78 million training GPT-4. With that much money on the line, generative AI will need to start showing that it’s capable of doing more than simply inserting black people into Donald Trump’s orbit.

Despite all this investment, AI ‘hallucinations’ remain common, and we’re not just talking about lawyers citing nonexistent legal precedents or researchers citing studies that were never conducted. (On a personal level, GPT-4 recently murdered a former colleague.)

As a recent episode of 60 Minutes documented, an AI-aided chatbot designed to help prevent eating disorders ended up dispensing advice more appropriate for a pro-anorexia website. The theory is that a third-party company that took over the chatbot’s programming injected generative AI features that scraped websites for related information.

This type of uncritical hoovering of data from any site on the internet is blamed for the ‘garbage-in/garbage-out’ reputation that AI is beginning to develop in the minds of the public.

For example, CNET recently derided Google’s Gemini as “a bit of a mess” that was “prone to hallucinate and links to incorrect pieces of information.” Gemini’s connection to the open internet makes its responses more up to date, but at the cost of its capacity to filter out often wildly inaccurate data.

CoinGeek has frequently suggested that blockchain technology could provide a more reliable repository of approved, fact-checked data on which LLMs could be trained. The BSV blockchain, with its ability to scale to handle enterprise-worthy data needs, is particularly well-suited to this task. (It would likely need to be smaller, focused datasets to start, but given the network’s constant growth, the sky’s the limit.)

CoinGeek isn’t alone in suggesting blockchain as the chocolate to AI’s peanut butter. Grass is an AI training network that recently proposed using a Solana-based layer 2 as its training platform.

Sadly, this effort appears doomed from the start due to Solana’s all too frequent network outages (including a five-hour downtime this February). More recently, Solana was forced to roll out a network upgrade to resolve the congestion caused by the chain’s overabundance of function-free memecoins. So, good luck with all that.

It’s inevitable

Some researchers have concluded that hallucination is inevitable in LLMs, because most models are required to provide a response to a prompt even when their knowledge is inadequate to produce a proper answer.
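The forced-response problem described above can be illustrated with a toy sketch: a model that must always answer will guess when its knowledge is thin, while one permitted to abstain can simply decline. Everything here (the lookup table, the confidence threshold) is a hypothetical illustration, not how any real LLM works internally.

```python
# Toy illustration of why mandatory answers breed hallucinations.
# 'knowledge' maps prompts to (fact, confidence); names are hypothetical.

def answer(prompt, knowledge, must_respond=True, threshold=0.8):
    """Return (answer, confidence); fabricates a guess when forced and unsure."""
    fact, confidence = knowledge.get(prompt, ("", 0.0))
    if confidence >= threshold:
        return fact, confidence
    if must_respond:
        # Knowledge is inadequate, but a response is required: make one up.
        return "plausible-sounding guess", confidence
    return "I don't know", confidence

knowledge = {"capital of France": ("Paris", 0.99)}

print(answer("capital of France", knowledge))                       # ('Paris', 0.99)
print(answer("capital of Atlantis", knowledge))                     # forced guess
print(answer("capital of Atlantis", knowledge, must_respond=False)) # abstains
```

Letting the model abstain trades coverage for accuracy, which is exactly the knob most production chatbots leave turned all the way toward coverage.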

There’s even a continually updated hallucination leaderboard that ranks LLMs in terms of how frequently they make stuff up. The current best performer is GPT-4 Turbo, with a modest 2.5% hallucination rate, while the low ‘man’ on this totem pole (we won’t name/shame) is talking out his digital ass around 16% of the time.

This ‘inevitable’ dilemma is forcing AI proponents to think of other ways to mitigate the delivery of incorrect info. Nvidia (NASDAQ: NVDA) president Jensen Huang recently suggested that AI hallucinations can be handled by “retrieval-augmented generation,” aka ensuring that “the AI shouldn’t just answer; it should do research first to determine which one of the answers is the best.”
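Retrieval-augmented generation, as Huang describes it, can be sketched in a few lines: before answering, fetch relevant passages from a trusted corpus and ground the prompt in them. The word-overlap scoring and the corpus below are toy stand-ins (real systems use vector embeddings and an actual LLM), not a production pipeline.

```python
# Minimal RAG sketch: retrieve supporting passages, then build a
# grounded prompt instead of letting the model answer from memory alone.

def retrieve(query, corpus, k=2):
    """Rank passages by crude word overlap with the query (toy scoring)."""
    q_words = set(query.lower().split())
    scored = sorted(corpus,
                    key=lambda p: len(q_words & set(p.lower().split())),
                    reverse=True)
    return scored[:k]

def rag_prompt(query, corpus):
    context = "\n".join(retrieve(query, corpus))
    # A real system would send this augmented prompt to an LLM; here we
    # just return it to show the structure of the technique.
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "GPT-4 Turbo currently tops the hallucination leaderboard.",
    "Solana suffered a five-hour outage in February.",
]
print(rag_prompt("Which model tops the hallucination leaderboard?", corpus))
```

The point of the “ONLY this context” framing is to constrain the model to the retrieved evidence, which is what reduces the temptation to improvise.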

Others have developed a system known as Search-Augmented Factuality Evaluator (SAFE) that fact-checks chatbot responses. SAFE “utilizes an LLM to break down a long-form response into a set of individual facts and to evaluate the accuracy of each fact using a multi-step reasoning process comprising sending search queries to Google Search and determining whether a fact is supported by the search results.”
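The SAFE pipeline quoted above boils down to two steps: decompose a response into individual claims, then check each claim against search results. The sketch below mimics that shape with a sentence splitter and a toy in-memory “search engine”; SAFE itself uses an LLM for the decomposition and multi-step reasoning over real Google Search results, so treat every function here as an illustrative stand-in.

```python
# SAFE-style fact-checking sketch: split a response into claims, then
# mark each claim supported or unsupported based on (toy) search results.

def split_into_claims(response):
    """Naive splitter; SAFE uses an LLM to extract atomic facts."""
    return [s.strip() for s in response.split(".") if s.strip()]

def search(claim, index):
    """Toy 'search engine': return stored snippets sharing words with the claim."""
    return [doc for doc in index
            if any(w in doc.lower() for w in claim.lower().split())]

def check_response(response, index):
    verdicts = {}
    for claim in split_into_claims(response):
        results = search(claim, index)
        # Toy support rule: some result must contain the claim verbatim.
        verdicts[claim] = any(claim.lower() in doc.lower() for doc in results)
    return verdicts

index = ["water boils at 100 degrees celsius at sea level"]
response = ("Water boils at 100 degrees Celsius at sea level. "
            "The moon is made of cheese.")
print(check_response(response, index))
```

The output flags the first claim as supported and the second as not, which is the per-fact verdict structure SAFE produces at much larger scale.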

The prevailing wisdom is that AI’s ability to adapt and learn on the fly will continue to reduce the number of hallucinations, as will new versions of the software (and new entrants in this race pushing the development envelope). But with the flood of misinformation and outright disinformation showing no signs of abating, human monitoring of generative AI responses won’t be going away anytime soon.

In order for artificial intelligence (AI) to work within the law and thrive in the face of growing challenges, it needs to integrate an enterprise blockchain system that ensures data input quality and ownership, allowing it to keep data safe while also guaranteeing the immutability of data. Check out CoinGeek’s coverage of this emerging tech to learn more about why enterprise blockchain will be the backbone of AI.

Watch: Blockchain & AI—there should be confluence between these tech


James Mackreides
'Mac' is a short-tempered former helicopter pilot, now a writer based in Sofia, Bulgaria. Loves dogs, the outdoors and staying far away from the ocean.
