What's blocking AI/AGI progress?
with Robert D. Van Valin, Jr., featuring his ongoing work to solve linguistics
I was lucky to catch up with Robert D. Van Valin, Jr. in Santa Fe this week at the New Mexican restaurant Tomasita’s, as an extra stop on my US and European book tour. We used the opportunity to solve AI over lunch, much as others solve the problems of the world! It’s always an amazing experience to talk to the most consequential linguist of this century and the last about his experiences in linguistics and its impact on Artificial Intelligence (AI) in general.
For those who don’t know Professor Van Valin (I call him ‘Van’), he is the primary developer of Role and Reference Grammar (RRG), a linguistic framework that models how, across the world’s diverse languages, words and phrases (morphosyntax) link to meaning in context (a key part of knowledge) and back again. The analysis of human language with RRG is extensive: the latest book runs to more than 1,000 pages to cover the breadth of focus areas in language.
By the way, the next RRG international conference is in Kobe, Japan, and I’ll be talking about my industry experience in using RRG for machines to interact using language. (Kobe RRG Conference Details linked here.)
AI is theoretically a multidisciplinary field in which many areas of study should come together, yet today a single expensive statistical approach is promoted, with rare debate about its fundamental limitations. This makes the perfect introduction for Van, since he has been a figure at the center of language and its meaning since the 1970s.
Why UC Berkeley is important
Van chose where to study for his PhD in the 1970s because, as he put it: “… there seemed to me to be only one place to go: Berkeley, the citadel of generative semantics. Chafe and Fillmore were there, along with George Lakoff.”
Beyond that, he discussed other significant professors who were there, such as Hubert Dreyfus (Van called him ‘Bert’), John Searle (yes, the man behind the famous Chinese Room) and even (Herbert) Paul Grice (known today for the critically important Gricean Maxims, which AI should embrace but currently doesn’t).
I wondered what happened to the skepticism ‘Bert’ was famous for when I was young. He used scientific arguments to challenge the AI community of the time, somewhat as Gary Marcus does today, using cognitive science arguments to make eerily accurate predictions that contrast with many public claims of AI’s imminent potential.
Where linguistics went, and why AI should not follow
I often mention that from the 1960s to 1990 there was unbridled enthusiasm for the Chomsky revolution. In that era, the world couldn’t get enough of Syntactic Structures; since then, the world cannot get enough of statistical models.
Van told me that there were two articles written by John Searle on the topic whose titles are historically pertinent (the revolution, then its end!):
A Special Supplement: Chomsky’s Revolution in Linguistics (sorry, paywalled, but the introduction sets the scene) and
You can read the exchange about Chomsky’s revolution, with Searle’s response, from April of 2002 here (link), and then the follow-up from July of 2002 between Chomsky and Searle here: Chomsky’s Revolution: An Exchange (link).
My point is that these were strong, open debates, as seen when one of the debaters claimed the other had made fundamental mistakes (but did they?).
Do science and engineering still have such open and public debates today?
Coming up soon: what Dreyfus would say today
In a future article, I will discuss the arguments of ‘Bert’ Dreyfus, who identified four key assumptions underlying AI research that, he argued, hold AI back. They are just as important today as they were in the 1970s! As Wikipedia summarizes them today, the assumptions are:
the biological assumption: the brain is analogous to computer hardware and the mind is analogous to computer software
the psychological assumption: the mind works by performing discrete computations (in the form of algorithmic rules) on discrete representations or symbols
the epistemological assumption: all activity (whether by animate or inanimate objects) can be formalized (mathematically) in the form of predictive rules or laws
the ontological assumption: reality consists entirely of a set of mutually independent, atomic (indivisible) facts
Patom theory rejects all four assumptions. In particular, facts are always in context, never standalone.
Can these four assumptions, as stated, serve as the platform for emulating a brain? Not according to Dreyfus, because a context-free psychology is a contradiction in terms. We always make decisions based on context.
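To make the context point concrete, here is a minimal toy sketch in Python (my own hypothetical illustration, not Patom theory or Dreyfus’s argument; the names CONTEXT_SENSES and interpret are made up for this example): a word treated as a standalone, atomic fact cannot be resolved, because its meaning only emerges in combination with the context it appears in.

```python
# Toy illustration: the same word resolves to different meanings
# depending on context, so a context-free table of standalone
# "atomic facts" cannot interpret it. (Hypothetical example only.)

# Hypothetical mini-lexicon: word -> {context topic: sense}
CONTEXT_SENSES = {
    "bank": {"finance": "financial institution", "river": "edge of a river"},
    "crane": {"construction": "lifting machine", "wildlife": "long-necked bird"},
}

def interpret(word: str, context_topic: str) -> str:
    """Resolve a word's sense only in combination with its context."""
    senses = CONTEXT_SENSES.get(word, {})
    return senses.get(context_topic, f"unresolved sense of '{word}'")

if __name__ == "__main__":
    print(interpret("bank", "river"))      # edge of a river
    print(interpret("bank", "finance"))    # financial institution
    print(interpret("crane", "wildlife"))  # long-necked bird
    print(interpret("bank", ""))           # unresolved sense of 'bank'
```

Strip the context out of the lookup and nothing useful comes back, which is the point: what a word means is not a fact that exists independently of its context.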
Understanding these assumptions, and whether they apply, will help position today’s many claims of AGI and strong AI. The AI revolution hasn’t yet started.
Do you want to get more involved?
If you want to get involved with our upcoming project to enable a gamified language-learning system, the site to track progress is here (click). You can keep informed by adding your email to the contact list on that site.
Do you want to read more?
If you want to read about the application of brain science to the problems of AI, you can read my latest book, “How to Solve AI with Our Brain: The Final Frontier in Science,” which explains the facets of brain science we can apply and why the best analogy today is the brain as a pattern-matcher. The book link is here on Amazon in the US.
In the cover design below, you can see the human brain incorporating its senses, such as the eyes. The brain’s capabilities are being applied to a human-like robot that is being improved with brain science toward full human emulation in both looks and capability.




