There are three primary groups of people in the AI community today, typically categorized by their roles:
1. CEOs and other leaders in technology companies, who are excited about the ‘big changes’ coming from AI, especially the generative AI popularized in 2022 when ChatGPT was offered free and attracted many users.
2. Experts in that technology, typically computer scientists. An AAAI (Association for the Advancement of Artificial Intelligence) survey of experts found that 75% of AI experts believe that today’s AI is not on a path towards strong AI, or AGI (the general form of AI that includes consciousness, human-level sensory-motor control and human language).
3. Experts in the cognitive sciences (whose backgrounds may combine philosophy, linguistics, neuroscience, artificial intelligence/computer science, anthropology, and psychology), who are looking to emulate human capabilities. While sometimes called skeptics, these people are looking for the solutions to AI and highlighting key factors that are missing from today’s systems.
1. Leaders in technology companies
Unfortunately, leaders in technology companies can have conflicts of interest regarding AI. Hype about the potential of AI can convince investors that the roadblocks that have held the industry back for 70 years have been solved, and the market reacts by increasing the value of the relevant stocks.
As the value of fully working AI to industry is probably close to the sum of the world’s GDPs, the promise is large.
The “Magnificent Seven” label has been applied to seven stocks:
Apple (AAPL),
Alphabet (GOOGL, GOOG),
Microsoft (MSFT),
Amazon (AMZN),
Meta (META),
Tesla (TSLA), and
Nvidia (NVDA).
Recently, there was a large increase in their valuations. The Magnificent Seven were credited with “62% of the S&P 500's advance in May” (a warning: this quote may be false, as it was generated by a large language model).
Not only is there value to be gained from AI stocks (amazingly, at this year’s Super Bowl, Coke advertised that it is an AI company!); prominent AI startups without profits are also beneficiaries of the AI hype, as their investors will back their claims even when those claims are not supported by results or capabilities.
There is a big difference between recognizing the untold wealth that will come when AI is fully solved, and having the capability to solve it.
2. Experts in AI
In contrast to the unbridled optimism coming from the Magnificent Seven and their partners in AI startups, most AI experts don’t believe the hype. Once you understand what strong AI requires, namely consciousness, sensory perception, human-level motor control and full human language use, including conversational capability and human-level knowledge bases, the gap between today’s capabilities and the goal is immense.
As I say, the original goals of AI from 1956, such as human emulation of speech, vision and motor control, are just as far away now as they were then. Driverless cars require all of these capabilities to meet most people’s expectations of safety and usability.
A driverless taxi should allow a passenger to request a destination in their local language; the vehicle should navigate safely to that destination without hitting vehicles, people or animals; and it should drop off its passengers safely, handling financial transactions as needed.
Even if strong AI were partly working today (as in my systems based on Patom brain theory), bridging the gap between current capabilities and the goals of AI would still require a ten-year or so effort to harden and further develop it, as roboticist and AI pioneer Rodney Brooks has pointed out, because the strong AI claim incorporates robotics as well as language-based AI.
3. Cognitive Scientists looking to solve AI
My work fits into this category. I want to leave this world with progress in AI. We should have a good model of what our brain does, even if perhaps not how it does it. That’s what Patom theory provides.
Further, the linguistic model that converts text to its meaning is well documented in the Role and Reference Grammar (RRG) linguistic model.
We have implemented a working version of it: first converting words to their meaning, and second using that meaning to validate conversation by checking for inconsistencies in context and by answering questions based on the context (the immediate common ground).
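To illustrate that second step, here is a toy Python sketch of checking new statements against the immediate common ground. It is not the actual Patom/RRG implementation; the class, method names and the simple predicate-style meaning representation are all hypothetical simplifications for illustration only.

```python
# Toy illustration: validating statements against conversational context
# (the "immediate common ground"). All names here are hypothetical;
# this is NOT the actual Patom/RRG implementation.

class CommonGround:
    """Stores simple predicate facts from a conversation and checks consistency."""

    def __init__(self):
        self.facts = {}  # maps (predicate, subject) -> value

    def assert_fact(self, predicate, subject, value):
        """Add a fact; report a contradiction if it conflicts with context."""
        key = (predicate, subject)
        if key in self.facts and self.facts[key] != value:
            return f"Inconsistent: {subject} {predicate} is {self.facts[key]}, not {value}"
        self.facts[key] = value
        return "OK"

    def question(self, predicate, subject):
        """Answer a question from the accumulated context, if known."""
        return self.facts.get((predicate, subject), "unknown")


ground = CommonGround()
ground.assert_fact("color", "ball", "red")          # "The ball is red."
print(ground.question("color", "ball"))             # answered from context: red
print(ground.assert_fact("color", "ball", "blue"))  # contradiction is flagged
```

The point of the sketch is the separation of concerns: the meaning representation (here, bare predicates) is held apart from the store of contextual knowledge, which is the split discussed below.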
This splits language (words, phrases and their meaning in context) from knowledge, which should additionally incorporate the 85% of our communication that is non-verbal (per a 1971 study by Albert Mehrabian, according to an LLM: but note that “AI responses may include mistakes”).
The adoption of Patom theory and the consequent split of language from knowledge promises to remove the need for more datacenters and the power plants driving them, since this kind of AI can typically run on laptops and other small devices.
Many of my friends and colleagues, who work in a myriad of jobs relating to cognitive science and AI, are also appalled by the AI hype. The reasons today’s AI is incorrect, and even unscientific, come back to their strong industry experience, and yet the hype continues, ignoring their voices (refer to point 1).
They want to see the use of AI for good, not evil.
Issues range from the promotion of technology that cannot be trusted, because it contains errors at rates that statistics show are rarely acceptable for real use-cases, to concern about the AI winter imposed on novel ideas for AI that may work.
It is commonly stated that the air, and funding, has been sucked out of the room by generative AI and its supporters’ endless promises of a future AGI that experts refute.
Conclusion
We need to deal with the challenges that are impeding progress towards the goals of AI. The current focus on generative AI as the solution to everything relating to AI continues to be pushed by groups that seem to have a conflict of interest, including CEOs and other leaders in technology companies.
Amazingly, 75% of AI researchers, typically computer scientists, do not believe that today’s AI is on a path towards AGI. How can corporations, the media and others promote AI as a solution when the experts don’t agree? Shouldn’t there be more caution?
To improve AI for the next generation, there are many possibilities, such as the use of Patom brain theory to eliminate the need for new power stations and data centers. A refocus on, and funding of, cognitive science could see the future quickly diverge from today’s expensive mainstream alternative.
Do you want to get more involved?
If you want to get involved with our upcoming project to enable a gamified language-learning system, the site to track progress is here. You can keep informed by adding your email to the contact list on that site.
Do you want to read more?
If you want to read about the application of brain science to the problems of AI, my latest book, “How to Solve AI with Our Brain: The Final Frontier in Science”, explains the facets of brain science we can apply and why the best analogy today is the brain as a pattern-matcher. The book is available here on Amazon in the US.
In the cover design below, you can see the human brain incorporating its senses, such as the eyes. Brain science is being applied to a human-like robot, improving it towards full human emulation in looks and capability.