Actualizing non-computational means of outputting 2+2=4 without calculation is revolutionary. We have long considered declarative stores of a priori knowledge the most direct path to fluid human-machine communication. Semiotic stores, independent of language, provide enormous functional advantages, including representations of situation, circumstance, and heuristics. I look forward to seeing this product come to market!
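To make the "without calculation" point concrete, here is a minimal sketch of what a declarative store of a priori facts could look like: the answer to 2+2 is retrieved as a stored proposition, never computed. The names and structure are my own hypothetical illustration, not a description of the actual product.

```python
# Hypothetical declarative store: structured propositions mapped to
# their known results. Nothing here performs arithmetic.
a_priori_facts = {
    ("sum", 2, 2): 4,
    ("sum", 3, 5): 8,
}

def recall(relation, *args):
    """Look the proposition up in the store; no calculation occurs."""
    return a_priori_facts.get((relation, *args))

print(recall("sum", 2, 2))  # -> 4, by retrieval rather than computation
```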
It's nice that you pointed out that manipulation happens at the conceptual, or atomic, level. That's something neural networks can never do, because everything is a meaningless vector or token. I'm sure some would argue that a token represents a pattern, but there is nothing in the architecture that allows one pattern to "do work" on another (or, in the case of a comparison, on a pair). Which leads back to my issue: xNNs are just math, and I believe your point is that our brains are not operating like a bunch of loss functions with curve fitting, etc. It's not THAT kind of computation.
Any implementation is going to require symbol manipulation, and at any level (from vision to cognition) there is going to be a pattern "reader" and a pattern "executor." So, for me, when you throw away "computation," you are removing that needed aspect.
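As a rough sketch of the reader/executor split I mean, here is a toy rewrite rule in Python. The division into read() and execute() is purely my illustration of the idea, not a claim about any particular architecture:

```python
def read(tokens):
    """Reader: detect the pattern (a, '+', b) in a token sequence."""
    if len(tokens) == 3 and tokens[1] == "+":
        return {"op": "add", "args": (tokens[0], tokens[2])}
    return None

def execute(match):
    """Executor: one pattern 'does work' on the matched pair."""
    if match and match["op"] == "add":
        return match["args"][0] + match["args"][1]
    return None

print(execute(read([2, "+", 2])))  # -> 4
```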
Hi Dennis, it's nice to hear from you! Thanks for your words. We live in exciting times.
Thanks Steven, as usual. My work involves understanding human language using meaning in context so it can be used in conversation. Agreed, the vector model sits at the core of what LLMs lose, because (a) words have multiple meanings and (b) transformers miss the point that a string of words usually maps to DIFFERENT phrases, each with its own constraints.
I think we can control the pattern 'reader' versus 'executor' with a hierarchical pattern that sits above the level in question. To recognize a phrase, we use two steps: recognize the sequence and THEN resolve it with meaningful constraints. The resolution merges the additional pattern into the current one, so I think it's covered. 2+2=4, for example, has two distinct levels (+ and =). A toy sketch of what I mean follows.
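Here is that two-step idea in miniature: the lower level recognizes the '+' sequence as a phrase, and the '=' pattern one level above resolves it against the stated constraint. The function names are hypothetical, chosen only to illustrate the hierarchy:

```python
def recognize_sum(tokens):
    """Step 1, lower level: recognize the '+' sequence as a sum phrase."""
    a, plus, b = tokens
    assert plus == "+"
    return {"kind": "sum", "args": (a, b)}

def resolve_equation(phrase, rhs):
    """Step 2, level above: the '=' pattern merges the lower-level
    phrase into the current one and resolves it against a constraint."""
    value = sum(phrase["args"])
    return {"kind": "equation", "holds": value == rhs}

phrase = recognize_sum([2, "+", 2])
print(resolve_equation(phrase, 4))  # -> {'kind': 'equation', 'holds': True}
```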
The old semiotics models from before the 1950s computer revolution seem to answer a number of questions in today's technology. The split between words and meanings, with both retained in use, is fascinating, and it is hard to see how a vector could exploit that difference.
Please remind me if the form of 'computation' still seems to be missing after a few more scene-setting articles are published.