Last time we looked at the model of the brain as an organ of computation, as explained by the famous cognitive scientist Steven Pinker. It doesn’t stand up to scrutiny because how a computer works is intrinsic to the idea of computation. Concepts like encoded binary storage, instructions, programs, and data don’t fit a brain model.
Today I want to step through how a brain works based on (a) what brain damage can do and (b) what brain scans show us. Both contain enigmas that are explained by a pattern-matching brain, not a processing one.
Brain damage
We can get a glimpse of how a brain works by looking at what happens when it is damaged. All human brains are different. The principle of plasticity explains how a brain can use different areas for the same function. Equally, there are typical regions that perform the same function across brains.
Let’s look at some of the unexpected deficits in human brains that a valid brain theory needs to explain.
Language regions
The first support for the brain having localized function came from Paul Broca <www.simplypsychology.org/broca-area.html>. In 1861, his patient Tan was unable to produce speech as a result of significant damage to the cortex of his left frontal lobe.
“Typically, Broca’s area is located within the dominant hemisphere of the frontal lobes, which is the left hemisphere in around 97% of people. In general, the brain’s left hemisphere is responsible for language and speech.”
Over a decade later, in 1874, Carl Wernicke published his findings on patients who lacked language comprehension. How would a computational machine be designed that splits language comprehension from production?
When Patom theory was being developed, an early model of mine named brain regions ‘intelligent memory’, or machine intelligence memory (MIM). I wrote about that in an early Kindle book focused specifically on how pattern matching can explain all brain function. This is the opposite of most processing-based brain theory, in which a brain ‘generates’ recognition from sensory input.
Illusions explained by brain theory, not processing
Back in the 1950s, Oliver Selfridge used the image below to show that more than vision is needed to read. Context, a consistent meaning in some situation, allows an English reader to recognize “the cat” from “THE CHT” in the image below (Minsky, M., Society of Mind, A Touchstone Book, 1986, p. 209):

More brain regions
Visual Cortex V4
Damage to brain region V4 can result in color blindness, leaving its victim seeing the world only in shades of gray. Why would damage to a brain region stop not only the recognition of color but also the ability to imagine color?
“Even his power to imagine colors, which before had been vivid, was gone.” (Hoffman, D., Visual Intelligence: How We Create What We See, W. W. Norton & Co., 1998, p. 108.)
Visual Cortex “Letterbox”
Imagine that your written English is fine and you speak without problems, but you can’t read anymore: the words don’t make sense. In 1892 the French neurologist Jules Déjerine discovered this effect in a stroke victim with damage to his left visual cortex - in the so-called ‘letterbox’ region. How would a computer-based brain create such a deficit?
“Spontaneously, the patient writes as well as he speaks… Writing under dictation can still be done easily… [he said]: ‘I can still write those letters, so why can’t I read them?’” (Dehaene, S., Reading in the Brain: The New Science of How We Read, Penguin Books, 2009, p. 56.)
Left frontal lobe - Phineas Gage
Our left frontal cortex is particularly well protected by the skull, but Phineas Gage’s was destroyed in a railway construction accident in 1848. Others, like Damasio’s patient Elliot quoted below, have suffered similar damage from tumors. Let’s skip the details and jump to the conclusion of what can happen when a human’s left frontal cortex is destroyed.
“…something was missing … Nowhere was there a sense of his own suffering … he was not pained by his tragedy… He could sense how topics that once had evoked a strong emotion no longer caused any reaction…
“We might summarize Elliot’s predicament as to know but not to feel.” (Damasio, A., Descartes’ Error: Emotion, Reason, and the Human Brain, Picador, 1994, p. 44.)
Patient H.M. - Memory Formation
To treat severe epilepsy, doctors performed brain surgery on H.M. in 1953. The shocking side effect was the inability to “form new memories.” <en.wikipedia.org/wiki/Henry_Molaison> The surgeons removed a number of brain regions in the limbic system from both brain hemispheres.
Several effects were seen after the surgery:
short-term and procedural memory were not impacted, but episodic memory was
motor skills could still be learned, despite his not remembering having learned them
spatial memory could still be formed, as seen in his sketch of his residence (the one he had moved into **after** his surgery)
language deficits - words and phonemes were recognized normally, but at the sentence level the same deficits as with memory were present: an inability to persist “context” for the long term.
Brain Scans and Function
Brain scans provide feedback on brain function, via fMRI, PET and so on. fMRI, for example, measures brain activity by “detecting changes associated with blood flow” <en.wikipedia.org/wiki/Functional_magnetic_resonance_imaging>.
I’ve always found it fascinating to compare a brain’s activity when reading words, hearing words, speaking, and thinking about words. There’s a great image of (a) reading words showing the visual cortex activated, (b) hearing words showing temporal lobe activation, (c) speech showing frontal lobe activation (Broca’s area and the motor cortex), and (d) thinking about words showing many active areas. (Greenfield, S. (ed.), The Human Mind Explained: An Owner’s Guide to the Mysteries of the Mind, Reader’s Digest, 1996, pp. 136-137.)
Why some regions are more active than others
The reason those particular areas are active relates to the limited granularity of the scanner, but the information is still valuable. When reading, recognition of the sensory input is primarily a visual task, since letters and words are recognized visually. The recognized words still need validation, but that isn’t intensive compared with the sensory work. When hearing words, the auditory input identifies the phonemes and words - an auditory task. When speaking, it isn’t surprising that the motor cortex for the mouth, lips and jaw, along with Broca’s area, is active. Speech generation also involves selecting what to say, but again that is a minor task compared with the motor effort of sequencing the muscles for speech.
The illustrations I describe next are from (McCrone, J., Going Inside: A Tour Round a Single Moment in Consciousness, Faber and Faber, 1999, Illustration 11). They show which brain regions are active when nonsense objects and noise are compared with real objects.
“Further proof that the brain has fluid organization comes from scanning experiments… As expected, the temporal lobe became busy, but so did a host of other brain areas, including high-level regions of the prefrontal lobes and low-level organs like the cerebellum.”
Why does recognition of an object, known to be stored in the temporal lobe, also activate the prefrontal lobe? In the pattern-matching paradigm, while the object itself is stored in one place, the uses of that object are stored elsewhere, and they are linked. Similarly, the object’s manipulation would be tracked and controlled elsewhere. The cerebellum’s activation is unexpected under the processing view, since the cerebellum supports fine motor control, but it is reasonable in Patom theory, since linked patterns are activated whenever patterns are matched.
The final illustration caption reads (my bold emphasis):
“The brain seemed to develop its processing logic on the fly, tailoring its response to the task. The problem for science was to marry such dynamism with the traditional computer-like view of the brain.”
Again, there has been a maniacal focus on fitting the computer processing model to visible brain function. It just doesn’t fit.
Brains don’t have ‘data’
Brains don’t encode data. If it weren’t for the digital computer model, nobody would ever have stumbled on that proposal. Encoding numbers in binary is great for computer-based mathematics, but humans don’t need it in a brain.
The problem for a computer-like brain is the transmission of encoded data and its unpacking at the destination. How could such a brain represent a dinner at a pizza parlor with your friends on Nassau Street at Christmas in 1995?
An improved version of storage is to retain all the original patterns where they are matched, and connect them forward to a region that retains the full context of the event, such as who was there. This way, individual activations can be traced back to their source: the infinitely more detailed matches.
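To make the idea concrete, here is a minimal sketch in Python. Everything in it is my illustration, not a published Patom implementation: the Pattern class, its field names, and the trace-back function are assumptions chosen only to show bidirectional linking.

```python
# A hypothetical sketch of bidirectionally linked patterns (illustrative only).

class Pattern:
    def __init__(self, name):
        self.name = name
        self.parts = []      # reverse links to the detailed source patterns
        self.contexts = []   # forward links to contexts that use this pattern

    def link_forward(self, context):
        """Connect this pattern forward into a containing context."""
        self.contexts.append(context)
        context.parts.append(self)   # the reverse link enables trace-back

def trace_back(pattern, depth=0):
    """Follow reverse links from an event down to its source patterns."""
    print("  " * depth + pattern.name)
    for part in pattern.parts:
        trace_back(part, depth + 1)

# The pizza-parlor dinner from the text, stored as linked patterns:
dinner = Pattern("dinner with friends, Nassau Street, Christmas 1995")
for detail in ("pizza parlor", "friends present", "Nassau Street", "Christmas 1995"):
    Pattern(detail).link_forward(dinner)

trace_back(dinner)   # activating the event reaches every stored detail
```

Nothing is encoded or transmitted here: the event is just a node whose links run back to the detailed matches.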
Store patterns and then use them
Based on neurons and their organization into regions, this alternative model, Patom theory, is plausible. Assume for a moment that the purpose of a brain region is to store the patterns it receives. Auditory regions start by storing auditory patterns; visual regions start by storing visual patterns. When a pattern is matched, a signal is sent to subsequent regions (dictated by the brain region’s anatomy). The receiving region also matches patterns, and signals backward and forward when a match occurs. Because brain formation lays out the receivers in a way that is not fully prescriptive, region locations vary, which is what we observe as plasticity.
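Here is a rough sketch of that store-and-signal cycle, again under my own simplifying assumptions: the class, its storage format, and the wiring are illustrative, and backward (validation) signaling is omitted for brevity.

```python
# A hypothetical region that stores the patterns it receives and signals
# forward when a stored pattern is matched (simplified; no backward links).

class Region:
    def __init__(self, name):
        self.name = name
        self.stored = []     # patterns this region has stored so far
        self.targets = []    # regions it projects to, as dictated by anatomy

    def receive(self, pattern):
        """Store a novel pattern; signal forward when a stored one recurs."""
        pattern = frozenset(pattern)
        if pattern in self.stored:
            self.signal(pattern)          # matched: notify downstream regions
        else:
            self.stored.append(pattern)   # novel: store it for next time

    def signal(self, pattern):
        for target in self.targets:
            target.receive({(self.name, pattern)})   # pass the match onward

# Anatomy wires an auditory region forward to a combining region:
auditory = Region("auditory")
combiner = Region("combiner")
auditory.targets.append(combiner)

auditory.receive({"phoneme:k", "phoneme:ae", "phoneme:t"})  # first exposure: stored
auditory.receive({"phoneme:k", "phoneme:ae", "phoneme:t"})  # repeat: matched, signaled
```

Note that the combining region runs exactly the same code as the sensory one; only the anatomy (the wiring between them) differs.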
A primary sensory brain region doesn’t contain the sense organs themselves, just the output from them. So, by matching a pattern and signaling when matched, a region receiving input from one sense can be identical to one receiving projections from any other sense. This is a useful approach for “general AI”, or what is now called AGI, as the appearance of intelligence comes from a single algorithm: in this case, the Patom theory algorithm to store, match and use specific kinds of patterns.
Interestingly, all the senses project forward into the entorhinal cortex, a region removed in H.M.’s surgery. That region could match patterns between episodic events, connecting objects together with their uses, such as the memory of who came to dinner, how the table was laid out and so on. But that region is still just matching patterns. It sees active neurons as input and seeks patterns. The neurons aren’t codes, but physical elements that tie back to previous regions.
H.M. could still learn other types of memory skills, since the patterns used for those skills were untouched by the surgery.
By using regions in this way, patterns are stored at the edge, at the input points that come from the senses, where they trace back to experience. They remain available as memory through the reverse links from the regions they project to. No encoding or transmission is needed.
Every region looks the same: it receives inputs, returns output to the source region, and projects to target regions. The patterns stored as combinations are either sets, lists or both, and it is brain anatomy that connects the regions together in a manner that generally works, as seen in human expertise.
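A minimal sketch of those two pattern types, with illustrative functions of my own rather than anything from a published algorithm:

```python
# "Set" patterns match regardless of order; "list" patterns require a sequence.

def matches_set(stored, received):
    """A set pattern matches when all its elements are present, in any order."""
    return set(stored) <= set(received)

def matches_list(stored, received):
    """A list pattern matches only as an ordered subsequence."""
    it = iter(received)
    return all(item in it for item in stored)   # each check consumes the iterator

# Order is irrelevant for a set pattern (e.g., the features of a face):
print(matches_set({"eyes", "nose"}, ["nose", "mouth", "eyes"]))   # True

# Order is essential for a list pattern (e.g., the phonemes in a word):
print(matches_list(["k", "ae", "t"], ["k", "ae", "t", "s"]))      # True
print(matches_list(["k", "ae", "t"], ["t", "ae", "k"]))           # False
```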
Discussion
The majority of the work in an active brain will be in the regions that need to match patterns against their input.
When reading, for example, a large number of neurons in a visual region recognize the letter sequences (pattern matching), but those patterns are already wired into the language sequences (they are linked) and so require no search at all.
To reinforce the point, a matched pattern can represent a large set of neurons, such as in the image of someone’s face (seen as active on a scan), but the signal of that match to other regions could be performed with as little as one neuron’s activation (plus validation via bidirectional links, forward and backward). That signaling would not be seen by the scanner.
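As a toy illustration, with invented numbers rather than anything from the scanning literature:

```python
# Invented numbers, for scale only: a locally matched pattern may span
# thousands of neurons (enough to register on a scan), while the match
# itself is reported onward over a single link (too little to register).

LOCAL_MATCH_NEURONS = 10_000   # hypothetical neurons active in a face region
SIGNAL_NEURONS = 1             # hypothetical neurons reporting the match onward

ratio = LOCAL_MATCH_NEURONS / SIGNAL_NEURONS
print(f"Local matching is {ratio:,.0f}x more scanner-visible than signaling.")
```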
That’s why some regions show little activation on a scan even though their deficits are obvious when they are damaged.
When speaking, the main area of activation is converting the meaning of what is to be said into the motor patterns needed to say it. This typically shows in a scan as left frontal lobe activity, as there are many neurons to activate in sequence to move the muscles correctly.
The function of a region as described above can be thought of as an MIM: a kind of intelligent memory for machines. These MIMs are distributed to cover the senses and, subsequently, combinations of senses. They also store motor control sequences for sets of muscles. These days, MIMs are just called Patoms: regions that store pattern atoms. Their function is radically different from that of a computer program. They remove the need for elements in the network to process at all. Processing was already problematic because it would require a program to be inserted into neurons inside the network in order to perform a parallel computation!
Conclusion
Patom theory is theoretical neuroscience that emerged from the study of brain damage and brain function. The brain model that comes from this study is consistent with observations of brain activity and the results of brain damage.
Today’s newsletter explains the principles that have already been applied to the problem of human language understanding in context, where they have shown their effectiveness. No doubt further work in neuroscience and cognitive science generally can use this model to solve problems that otherwise have no way forward.
And scientific debate is always useful in areas that are not progressing, such as the major quest to emulate human capabilities in general. The AI goals of today, like those of the 1950s, need motor control, sensory integration and human language capability: the new approach enabled by Patom theory.
Do you want to read more?
If you want to read about the application of brain science to the problems of AI, you can read my latest book, “How to Solve AI with Our Brain: The Final Frontier in Science”, which explains the facets of brain science we can apply and why the best analogy today is the brain as a pattern-matcher. The book link is here on Amazon in the US.
In the cover design below, you can see the human brain incorporating its senses, such as the eyes. Its capabilities are being applied to a human-like robot that is being improved with brain science toward full human emulation in looks and capability.