American-AI versus International-AI
Leveling the playing field in science and commerce for everyone
In 2014, Deep Learning was exploding around the world as corporations were told to “embrace conversational AI.” Nobody wanted to miss out, and the idea that a chatbot was now ready for prime time was pushed everywhere. But it wasn’t. Language is more than just mapping a sequence of words to a predefined label (called an intent).
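To make “intent” concrete, here is a minimal, hypothetical sketch of the kind of intent matching a 2014-era chatbot relied on. The keyword table, labels, and function are invented for illustration and are not taken from any particular product.

```python
# A minimal, hypothetical sketch of 2014-era intent recognition.
# The keyword table and intent labels are invented for illustration.
INTENT_KEYWORDS = {
    "transfer_money": {"transfer", "send", "pay"},
    "check_balance": {"balance", "statement"},
    "report_lost_card": {"lost", "stolen", "card"},
}

def classify_intent(utterance: str) -> str:
    """Map an utterance to a single intent label by keyword overlap."""
    words = set(utterance.lower().split())
    best_intent, best_score = "unknown", 0
    for intent, keywords in INTENT_KEYWORDS.items():
        score = len(words & keywords)
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent

print(classify_intent("Please transfer $500 to my savings account"))
# Prints "transfer_money": the label survives, but the amount, the account,
# and what the customer actually meant are all discarded.
```

The label is all such a system keeps; the meaning of the sentence is lost, which is exactly why those chatbots felt like search engines.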
We have a parallel today, in which companies are told not to miss out on the AI revolution built on generative AI, Large Language Models (LLMs), or maybe just ‘ChatGPT.’ The same enthusiasm to “do AI” is once again pushing something that really can’t do the job properly.
So rewinding to 2014, what happened to the idea that language was now solved with AI/Deep Learning? There were hundreds of startups, all funded to find a domain they could excel in and to rent a big-tech platform to drive it as part of their business model.
Now the idea is that startups can all pay a tech giant to rent its LLM, but again the technology isn’t ready for prime time. Last time, capturing a conversational flow was out of reach for developers with the Deep Learning tools available. This time, LLMs don’t represent human-like meaning, so they cannot help a developer control a conversation or access legacy databases without additional coded subsystems.
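As a rough illustration of those “additional coded subsystems,” consider the following hypothetical sketch. The `call_llm` function, the `bank_db` object, and its methods are all invented placeholders, not a real API; the point is how much hand-written code still surrounds the rented model.

```python
import re

def call_llm(prompt: str) -> str:
    """Placeholder for a rented LLM endpoint; not a real API."""
    raise NotImplementedError("stand-in for a paid model call")

def handle_transfer(user_text: str, bank_db) -> str:
    # The model only returns free text, so we ask it to restate the request
    # in a format we hope it follows.
    reply = call_llm(f"Rewrite as 'amount=<n>; payee=<name>': {user_text}")

    # Hand-coded subsystem 1: parse the reply, because nothing guarantees
    # the model returned the fields, the format, or the truth.
    match = re.search(r"amount=(\d+(?:\.\d+)?);\s*payee=(\w+)", reply)
    if not match:
        return "Sorry, could you rephrase that?"  # conversation control by hand

    amount, payee = float(match.group(1)), match.group(2)

    # Hand-coded subsystem 2: validate against the legacy database before acting.
    if not bank_db.payee_exists(payee) or amount > bank_db.daily_limit():
        return "That transfer can't be made as described."

    bank_db.queue_transfer(amount, payee)
    return f"Transfer of ${amount:.2f} to {payee} queued for approval."
```

Everything other than the single model call is conventional software that the developer must still design, write, and test.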
AI remains out of reach because the models aren’t aligned with human cognition. Instead, they are aligned with a statistical model of human language that is incompatible with the way humans actually use it. 2025 is like 2014 all over again!
Let’s look at the evolution of a chatbot at an Australian bank that I’m familiar with.
As I recall, my Australian bank, the Commonwealth Bank, rolled out a pure Deep Learning chatbot around 2014, with a message asking customers to enter their request! It didn’t work very well: it wasn’t so much conversational as a search engine. Occasionally you got something beyond search, but then the question was whether it was actually doing what was asked. The approach quickly evolved into a click-bot in which, rather than understanding language, the system simply prompted the user to click on what they wanted, as shown in Figure 1 below (accessed online via Netbank at https://www.my.commbank.com.au/netbank).

The chatbot model seemed to be a distraction, since it wasn’t as easy to navigate as the existing website. But business leaders had been persuaded to avoid “missing out.”
It’s happening again.
American-AI or Corporate-AI
What would you call ChatGPT or the equivalent products from the “Magnificent Seven” (Apple, Amazon, Google/Alphabet, Microsoft, Meta, Tesla and Nvidia)?
The White House wrote:
“This order revokes certain existing AI policies and directives that act as barriers to American AI innovation, clearing a path for the United States to act decisively to retain global leadership in artificial intelligence…”
“It is the policy of the United States to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.” (Click here to view source)
Today, that AI technology centers on ChatGPT, the 2022 demo from OpenAI that ignited excitement in the general population for AI that understands human language and can do everything a human can. Unfortunately, ChatGPT has failed to deliver profitable products because of its architecture and its ongoing hallucination problem, in which errors are generated in most responses.
What should the current AI be called? American-AI, following the White House’s own wording? Or Corporate-AI, since the ChatGPT technology is controlled by the few companies with the money to develop the prototypes?
Whatever we call it, American-AI suffers from enormous training costs, a dependence on GPU-filled datacenters to run, a failure to handle context the way a human does, and hallucinations. It seemingly has no path to strong AI or AGI because it isn’t a robotics technology, just a text-generation one.
Media Support for American-AI
I wonder why the media doesn’t critique American-AI more often. Issues such as hallucinations are a genuine show-stopper for most corporate initiatives: a bank that transfers the wrong amount to the wrong account isn’t going to operate for long! And yet proposed workarounds such as agentic AI start from a base that is error-prone. Isn’t the best model one that works correctly from start to finish, as the digital computer already does for banking?
Will American-AI become AGI?
The expectation has been set that the ChatGPT model will soon produce artificial general intelligence, or AGI. AGI implies consciousness, human-like sensory processing, human-like motor control and, well, full emulation of a human.
But rather than asking the CEOs of the Magnificent Seven or hopeful competitors like OpenAI what they think, the question was put to the experts in the field in an AAAI survey.
From a Futurism article (click for link):
“Asked whether "scaling up" current AI approaches could lead to achieving artificial general intelligence (AGI), or a general purpose AI that matches or surpasses human cognition, an overwhelming 76 percent of respondents said it was "unlikely" or "very unlikely" to succeed.
Published in a new report, the findings of the survey, which queried 475 AI researchers and was conducted by scientists at the Association for the Advancement of Artificial Intelligence, offer a resounding rebuff to the tech industry's long-preferred method of achieving AI gains — by furnishing generative models, and the data centers that are used to train and run them, with more hardware. Given that AGI is what AI developers all claim to be their end game, it's safe to say that scaling is widely seen as a dead end.”
I wonder whether the remaining respondents, the roughly 24% who didn’t call scaling a dead end, have a conflict of interest through their corporations.
Non-American-AI or International-AI
What should we call AI that has global support? American-AI is seen as being pushed by the Magnificent Seven and the US government to support their dominance, but there are other ways to solve AI that align with the cognitive sciences. They may also align better with non-American national interests.
My work on Patom (brain) theory fits well with Role and Reference Grammar (RRG), a linguistic theory, because both are based on scientific observations of human capabilities. I call my work Deep Symbolics because it retains fully explainable, symbolic representations for human emulation, but the label non-American-AI correctly marks it as very different from, and superior to, American-AI (even though I’m an American, as is the primary developer of RRG, Prof. Robert D. Van Valin)!
Unlike American-AI, non-American-AI has the advantage of aligning with a number of existing theories without constraint.
Contrast with American-AI
I was recently told that there is no point in funding alternatives to American-AI because American-AI is mature and dominant. Who would invest in new, innovative technology when the war has already been won? But winning is about products, not prototypes and the hope of future profits. For example:
Google was the 25th search engine and is now one of the largest companies on earth.
Google won in the late 1990s because it was better at search. Systems that interact like people will similarly win against systems that don’t, and that make errors.
Innovation can win against anything; a ten-times improvement in a single metric can be enough. Deep Symbolics can run on a laptop, while American-AI needs hundreds of billions of dollars just to build datacenter capacity!
Equally, American-AI tightly couples language with knowledge. That means that when it generates text, something still needs to interpret the meaning of the response before it can be applied to existing technology. The obvious alternative is to do this in two steps: (a) convert the text to meaning in context, and then (b) validate that meaning against the extended context in the knowledge repository.
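Here is a minimal, hypothetical sketch of that two-step shape. The `Meaning` structure, the toy interpreter, and the knowledge dictionary are invented placeholders for illustration only; they are not the actual Patom theory or Deep Symbolics implementation.

```python
from dataclasses import dataclass

# Hypothetical meaning representation: a predicate with labeled roles.
@dataclass
class Meaning:
    predicate: str
    roles: dict

def text_to_meaning(text: str, context: dict) -> Meaning:
    """Step (a): convert text to meaning in context (toy placeholder)."""
    # A real interpreter resolves pronouns, ellipsis, and word senses here;
    # this toy just recognizes one fixed pattern for illustration.
    if "balance" in text.lower():
        return Meaning("query_balance", {"account": context.get("current_account")})
    return Meaning("unknown", {})

def validate(meaning: Meaning, knowledge: dict) -> bool:
    """Step (b): check the interpretation against the knowledge repository."""
    return meaning.predicate == "query_balance" and meaning.roles["account"] in knowledge

def respond(text: str, context: dict, knowledge: dict) -> str:
    meaning = text_to_meaning(text, context)   # (a) text -> meaning in context
    if not validate(meaning, knowledge):       # (b) validate before acting
        return "I can't confirm that request. Could you clarify?"
    return f"Balance: ${knowledge[meaning.roles['account']]:.2f}"

print(respond("What's my balance?", {"current_account": "ACC-1"}, {"ACC-1": 1250.00}))
```

The system only acts on a validated meaning, never on raw generated text, which is the point of separating the two steps.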

American-AI generates text based on the next probable token in its statistical base. Non-American-AI converts text to meaning in context and back again. In the Figure 2 sequence shown above, all of the interaction uses language alone, without any background knowledge. Language is very powerful on its own; notice that the questions generalize from the meaning of the first sentence.
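For readers unfamiliar with “next probable token,” here is a toy sketch of the generation loop. The probability table is invented and absurdly small, but the loop captures the essence of the approach: fluent continuations are produced without consulting any meaning.

```python
import random

# Toy next-token model: a hand-made probability table standing in for
# billions of learned parameters. Purely illustrative.
NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "bank": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.4, "ran": 0.6},
    "bank": {"closed": 1.0},
    "sat": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
    "closed": {"<end>": 1.0},
}

def generate(prompt: str) -> str:
    """Sample one likely continuation token at a time until <end>."""
    tokens = prompt.split()
    while True:
        options = NEXT_TOKEN_PROBS.get(tokens[-1], {"<end>": 1.0})
        nxt = random.choices(list(options), weights=list(options.values()))[0]
        if nxt == "<end>":
            return " ".join(tokens)
        tokens.append(nxt)

print(generate("the"))  # e.g. "the cat sat": fluent, but no meaning was consulted
```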
Do you want an AI that costs nothing to train?
Conclusion
American-AI is proposed as a label for the systems promoted by the Magnificent Seven to dominate the world’s AI market, now backed by the government of the USA.
The logical alternative is non-American-AI, or International-AI. Let’s agree on the right nomenclature!
Do you want to get more involved?
If you want to get involved with our upcoming project to enable a gamified language-learning system, the site to track progress is here (click). You can keep informed by adding your email to the contact list on that site.
Do you want to read more?
If you want to read about the application of brain science to the problems of AI, my latest book, “How to Solve AI with Our Brain: The Final Frontier in Science,” explains the facets of brain science we can apply and why the best analogy today is the brain as a pattern-matcher. The book link is here on Amazon in the US.
In the cover design below, you can see the human brain incorporating its senses, such as the eyes. The brain model is applied to a human-like robot that is being improved with brain science toward full human emulation in looks and capability.