Do Large Language Models Display Emergent Theory of Mind? AI Critic Says No.

“It’s consistent with earlier results, that large language models can keep track of variables and attributes in simple stories. Calling this ‘theory of mind’ is vast over-interpretation.”
– Complexity researcher and AI critic Melanie Mitchell, questioning the conclusions of Stanford computational psychologist Michal Kosinski's recent viral paper, which claimed that GPT-3.5 models display an emergent theory of mind comparable to that of 9-year-olds. Instead, Mitchell points to research demonstrating neural language models' capacity for "dynamic representations of meaning and implicit simulation of entity state."