
Reaction: LLMs fail to use context

... and context is a central feature of human language

This is my unedited reaction video of interacting with two LLMs: Leo from Brave and Copilot from Microsoft. You can see how human context is ignored and, instead, non-human associations are assumed.

It is ***funny***!! I tried to stay serious in my interview session, but it was hard.

You can see how assigning a name skews results based on the statistical uses of that name, even though a name in human language is arbitrary until it is resolved to a specific referent.

Early on, the word 'promise' is misrepresented as 'convince.' That even fooled me for a while until I spotted that subtle but catastrophic error.

I will show you the correct meaning from my language tool in a future video, but here’s what failed in the LLM:

  • John promised Beth to eat the tuna (means John is obligated to eat the tuna)

  • John persuaded Beth to eat the tuna (means Beth wants to eat the tuna)

Notice that who eats the tuna differs between promise and persuade. That difference is explained by Role and Reference Grammar (RRG) linguistic theory, and is illustrated in the sketch below.
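
To make the distinction concrete, here is a minimal sketch in Python (not the author's language tool, and not RRG's formal machinery) of how a control-verb lexicon could resolve who performs the embedded action. The verb table and function name are illustrative assumptions:

```python
# Subject-control vs object-control verbs: the controller of the embedded
# clause differs by verb class, which is exactly what the LLM got wrong.
CONTROL = {
    "promise": "subject",   # John promised Beth to eat -> John eats
    "persuade": "object",   # John persuaded Beth to eat -> Beth eats
}

def embedded_agent(subject: str, verb: str, obj: str) -> str:
    """Return who carries out the action of the embedded clause."""
    if CONTROL[verb] == "subject":
        return subject
    return obj

print(embedded_agent("John", "promise", "Beth"))   # John
print(embedded_agent("John", "persuade", "Beth"))  # Beth
```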

When new context enters a conversation, it is necessary to identify who is being referred to. Context tracks each participant in a conversation, and in this example humans wouldn't jump to conclusions. If we introduce Sue and Samantha, even if we have heard the names before, we confirm who they are. If we don't, we can get by in the conversation, but we are forced to confirm who they are if asked a question, to avoid the embarrassment of answering incorrectly.
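
As a rough illustration of that confirmation step, here is a hedged sketch of a conversation tracker that refuses to resolve a newly introduced name until it has been confirmed. All class and method names here are hypothetical:

```python
class ConversationContext:
    def __init__(self):
        self.participants = {}  # name -> referent, or None if unconfirmed

    def introduce(self, name: str):
        # A new name enters the conversation with no resolved referent yet.
        self.participants.setdefault(name, None)

    def confirm(self, name: str, referent: str):
        self.participants[name] = referent

    def resolve(self, name: str) -> str:
        referent = self.participants.get(name)
        if referent is None:
            # A human asks rather than guessing from prior associations.
            return f"Who do you mean by {name!r}?"
        return referent

ctx = ConversationContext()
ctx.introduce("Sue")
print(ctx.resolve("Sue"))   # asks for clarification
ctx.confirm("Sue", "Sue Park, my colleague")
print(ctx.resolve("Sue"))   # Sue Park, my colleague
```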

There is very strong linguistic science behind how context works:

  • Immediate common ground (ICG) describes the current context of a conversation and expands as we add new participants.

  • General common ground (GCG) represents all our background knowledge, such as what is statistically modelled in an LLM.

  • Context of utterance (CoU) is the consolidation of context at the point something is said, somewhat like ICG.

These concepts describe different types of context in human conversation, and linguists use them to draw distinctions. In today's video, the lack of ICG is the problem, especially when the LLM treats every element of GCG as if it were already known in the conversation.
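
Here is a similarly hedged sketch of the three grounds, modeling the failure mode just described: a resolver that treats GCG as if it were ICG answers from statistical background knowledge instead of asking. All names and data are illustrative:

```python
gcg = {"Sue": "a common English given name"}   # GCG: background knowledge
icg = {}                                       # ICG: established in this conversation

def context_of_utterance():
    # CoU: the consolidated context at the moment something is said.
    return dict(icg)

def resolve(name: str, human_like: bool) -> str:
    cou = context_of_utterance()
    if name in cou:
        return cou[name]
    if human_like:
        return f"Unconfirmed referent: who is {name!r}?"
    # The failure in the video: GCG used as if it were part of the conversation.
    return gcg.get(name, "unknown")

print(resolve("Sue", human_like=False))  # jumps to a statistical association
print(resolve("Sue", human_like=True))   # asks for confirmation first
icg["Sue"] = "Sue, the person introduced earlier in this conversation"
print(resolve("Sue", human_like=True))   # now resolves from ICG via CoU
```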

Summary: LLMs don't emulate human context, which makes them very unlike humans in how they use language.
