
Today’s main image shows the “power” of today’s AI tools: they generated the desired image of a brain while getting the specified words wrong — a total mess!
I’m not a fan of “good enough” when it comes to science and engineering. When I was working at IBM many years ago, my idol, a world-famous engineer, told me:
“The primary function of computer systems is to maintain data integrity. If it is lost, all bets are off.”
For example, if your bank account gets corrupted and your balance is lost, the bank’s reputation and business would quickly be damaged. Today’s AI is limited because it makes errors, yet it is promoted as a useful general-purpose tool anyway.
Let’s look at what the experts are saying about the technology, and what LLM-promoting CEOs (at tech giants and at the creators of LLMs) are saying. Some of them may have financial interests in the continued use of a technology that is being overvalued in the hope that “breakthroughs” will rescue it.
There are three sections to today’s newsletter:
The case against today’s AI
The support given to the technology regardless of its capabilities
The trouble with financial motivations to hype technology for profit
The case against today’s AI
There are strong arguments against the excitement for AI, led predominantly by investors and CEOs with vested interests, in which hundreds of thousands of jobs are proposed for elimination in software, call centers, and even the medical profession. By claiming such capabilities, these companies boost the valuation of their products, and their company valuations and share prices rise. But if the technology doesn’t work like that, shouldn’t the industry shut down the speculation?
Let’s consider a brief review of the claim that human-level AI will arrive in the next year or two.
Association for the Advancement of AI (AAAI) Survey
The AAAI is a leading body for AI. In this article (click) the title says it all: “Majority of AI Researchers Say Tech Industry Is Pouring Billions Into a Dead End.” The survey, which queried 475 AI researchers, found:
Asked whether "scaling up" current AI approaches could lead to achieving artificial general intelligence (AGI), or a general purpose AI that matches or surpasses human cognition, an overwhelming 76 percent of respondents said it was "unlikely" or "very unlikely" to succeed.
Prof Thomas G. Dietterich
As an engineer myself as well as a cognitive scientist, I appreciate the work of building machines that perform useful functions. I draw on Dietterich’s expertise as a machine-learning pioneer who is up to date with today’s designs. In one of his lectures he presented this (click for YouTube link):
There are many flaws with LLMs, and he proposes a solution, as shown, that I fully agree with, since that is the kind of system I have been developing and now demonstrate in a working system.
Meta’s Yann LeCun, Turing Award Winner
In an interview with Lex Fridman, Yann LeCun presented the following diagram (click for YouTube link). In his model he graphs the range of possible correct and incorrect answers.
The key words in the diagram are “It’s not fixable.”
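LeCun’s point can be illustrated with a small back-of-the-envelope calculation (a sketch of the idea with illustrative numbers, not his exact figures): if each generated token has some independent probability of being wrong, the chance that a long answer stays entirely correct shrinks exponentially with its length.

```python
# Sketch of the compounding-error argument (illustrative assumption:
# each token has an independent error probability e).
def p_fully_correct(e: float, n: int) -> float:
    """Probability that an n-token output contains no error,
    assuming each token independently errs with probability e."""
    return (1.0 - e) ** n

# Even a modest 1% per-token error rate makes long outputs unreliable.
for n in (10, 100, 1000):
    print(f"{n:>5} tokens: P(correct) = {p_fully_correct(0.01, n):.4f}")
```

Real token errors are not independent, so this is only a caricature of LeCun’s diagram, but it shows why autoregressive generation tends to drift off the narrow set of correct answers as output length grows.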
Gary Marcus - Leading AI Critic
Gary Marcus is a cognitive scientist, and from my perspective he is one of the sober voices about the pros and cons of today’s AI. For the few who don’t know of him, here’s a good introduction (IEEE Spectrum article - click) and his excellent Substack (click).
In the Spectrum article for example, Gary explains the problem clearly (in 2024):
“There’s basically been $75 billion spent on generative AI, another $100 billion on driverless cars. And neither of them has really yielded stable AI that we can trust. We don’t know for sure what we need to do, but we have very good reason to think that merely scaling things up will not work. The current approach keeps coming up against the same problems over and over again.”
The support given to AI (without merit)
How many CEOs are claiming that AGI will arrive this year, next year or maybe in 2027? Here’s the Brave browser’s LLM summary: pro (Altman) and con (Hassabis):
Sam Altman: CEO of OpenAI, claims AGI has been solved and the company is working on ASI. He predicts AI agents will join the workforce in 2025.
Demis Hassabis: CEO of Google DeepMind, warns that the definition of AGI is being "watered down" and believes true AGI is still far off. He emphasizes the need for international standards and cooperation in AGI development.
What about those who say that AI will take over hundreds of thousands of jobs?
CNN reported on the future in April 2024 (click).
Goldman Sachs economists said in March last year that as many as 300 million full-time jobs could be lost or diminished globally by the rise of generative AI, with white-collar workers likely to be the most at risk.
Doomsday for workers
Bill Gates was interviewed in Sept 2024 and claimed that even doctors and teachers may not be needed within ten years (click).
“Bill Gates: Within 10 years, AI will replace many doctors and teachers—humans won’t be needed ‘for most things’”
Former Google CEO Eric Schmidt warns about the dangers of intelligent machines, but today’s AI is human-designed and not intelligent!
“Artificial intelligence keeps getting smarter — and soon, warns former Google CEO Eric Schmidt, it won't take orders from us anymore.”
Intermission
Where are we now? Among the experts who develop generative AI, there seems to be a solid view that today’s AI is a dead end; at the same time, well-known investors in tech giants and CEOs are very confident that this ‘dead-end’ technology is about to change the world!
I’m with the technical experts who can recognize the potential but also see that the Next Generation is needed to get there, since for language, today’s AI is fundamentally flawed.
The Financial Benefits of Hype
Most companies are focused on profit. Even companies like OpenAI, which started with a central non-profit purpose, can quickly convert to for-profit. In the current wave of AI interest, there are too many potential winners for an objective view to prevail.
The market for human emulation in AI is immense: probably the entire value of all the world’s economies combined. If a robot can do anything 24 hours a day, 7 days a week, as well as or better than a human, why not replace everyone with better robots? (Hint: we can’t build them today.)
The Total Addressable Market
I was reading a story from a robotics company that pointed out that the Total Addressable Market (TAM) for robotics is the biggest in history. In the market for handling written text messaging, a robot that can perform the tasks of a human agent could replace a large percentage of the workforce. A robot that can do manual-labor tasks like a human could replace another large piece of the labor market.
In short, there are many trillions of dollars available to be picked up by a company that solves the problems of AI. But of course, there is one trick to picking up all that money: solving the problems that have halted progress in AI since the 1950s, when the name “artificial intelligence” was coined. All that is needed is:
a solution to artificial general intelligence, strong AI, or full human emulation. That is sufficient to cash in on the promise of AI and start a revolution.
Without a working solution, the market remains only a target, but there are startups hoping to beat Google at generative AI despite the difference in available capital. Doing the same thing as many others won’t win in time, because in a direct competition the most money tends to win, especially when everyone shares the same faulty approach.
Distortion from financial incentives
There is an ecosystem around the current AI that is distorting the markets. The AI requires huge datacenter expense for training and operation, consuming vast amounts of power, GPU hardware, and data. These workloads can be outsourced to Azure, AWS, Google Cloud, and others, at a profit to them.
The Magnificent Seven
The companies known as the “Magnificent Seven” (Alphabet, which owns Google; Amazon; Apple; Meta, which owns Facebook and Instagram; Microsoft; Nvidia; and Tesla) are technology giants whose stocks have been rising as the idea of human-level AI (aka Artificial General Intelligence, AGI, or strong AI) is associated with the latest statistical AI systems (aka generative AI or gen-AI, and the language-specific Large Language Models, or LLMs).
These public companies are owned by some powerful investors. And at some level, anything that creates overall financial value is important. As they say, follow the money.
Why Hype AI?
Whether the AI works or not, the model is simple. Look at the market for AI: it is worth trillions, and therefore startup companies can be valued in the billions. VCs who manage investments can safely assign high valuations to gen-AI startups because the market supports them.
When a new gen-AI startup is created, it receives a big premium on its financing to pay for datacenter costs. This benefits the Magnificent Seven, whose cloud-hosting arms receive steady revenue from the startup through its funding rounds. So investors in startups can indirectly benefit through their investments in the Magnificent Seven!
The key is to maintain focus on the goals of AI. That keeps valuations of these companies high, even though their technology’s limitations leave no chance of realizing the promised upside.
And the pump-and-dump model may be a win-win for those who can exit while the stock price is still going up.
That’s why the gen-AI bubble will continue to be supported: the financial benefits flow throughout the ecosystem.
Conclusion
The stakeholders promoting the current generation of AI have good financial reasons for pushing it. The tech giants win as investors (temporarily) think they are onto the Next Big Thing, even if, privately, the giants know otherwise. The cloud-hosting tech giants win as small entrepreneurs are funded to process millions of dollars of training data, transferring those funds into the cloud providers’ profits. Advertisers win by being paid to promote new products that hope to become human-like.
In the end, of course, science will win with its implementation. Humans will have the best of both worlds: trustworthy language-based applications with lossless representation, combined with a computer’s accurate and vast storage.
Do you want to get more involved?
If you want to get involved with our upcoming project to enable a gamified language-learning system, the site to track progress is here (click). You can keep informed by adding your email to the contact list on that site.
Do you want to read more?
If you want to read about the application of brain science to the problems of AI, you can read my latest book, “How to Solve AI with Our Brain: The Final Frontier in Science,” which explains the facets of brain science we can apply and why the best analogy today is the brain as a pattern-matcher. The book link is here on Amazon in the US.
In the cover design below, you can see the human brain incorporating its senses, such as the eyes. Brain science is being applied to a human-like robot, improving it towards full human emulation in both looks and capability.