

That does not follow. I can’t speak for you, but I can tell if I’m involved in a conversation or not.
It allows us to conclude that an LLM doesn’t “think” about what it is saying. Based on the mechanics, the LLM doesn’t even know it’s a participant in the conversation.
Well, the neural network is given a prefix (a series of tokens) and spits out, for every known token, how likely that token is to follow the prefix. Text is generated by picking one token at random, weighted by those probabilities, appending it to the prefix, and repeating.
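For what it's worth, here's a minimal sketch of that loop in Python, with a toy next_token_probs stand-in (hypothetical vocabulary and scores) in place of the real network:

    import random

    # Toy stand-in for the neural network: given the prefix so far,
    # return a probability for every token in a tiny vocabulary.
    # (Arbitrary numbers; a real model computes these from its weights.)
    def next_token_probs(prefix):
        vocab = ["the", "cat", "sat", "on", "mat", "."]
        scores = [len(tok) + 0.1 * len(prefix) for tok in vocab]
        total = sum(scores)
        return vocab, [s / total for s in scores]

    def generate(prefix_tokens, n_tokens=5):
        tokens = list(prefix_tokens)
        for _ in range(n_tokens):
            vocab, probs = next_token_probs(tokens)
            # Pick one token at random, weighted by the calculated probabilities.
            choice = random.choices(vocab, weights=probs, k=1)[0]
            tokens.append(choice)
        return " ".join(tokens)

    print(generate(["the", "cat"]))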
The burden of proof is on those who say that LLMs do think.
It has no memory, for one. What makes you think it knows it's in a conversation?
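To illustrate the no-memory point: in the usual chat setup, the model is called fresh on every turn with the entire transcript pasted in as the prefix; nothing persists between calls except that text. A rough sketch, assuming any generate-style function (the model_generate parameter here is a placeholder):

    # Each "turn" is a brand-new call: the only state is the text we pass back in.
    def chat_turn(model_generate, history, user_message):
        history = history + ["User: " + user_message]
        prefix = "\n".join(history)      # the whole transcript, re-sent every time
        reply = model_generate(prefix)   # the model sees only this prefix
        return history + ["Assistant: " + reply]

    # Usage with a stub in place of a real model:
    stub = lambda p: "(reply to: ..." + p[-20:] + ")"
    history = []
    history = chat_turn(stub, history, "Hello?")
    history = chat_turn(stub, history, "Remember me?")
    print("\n".join(history))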