{"pk":49656,"title":"Humans learn proactively in ways that language models don't","subtitle":null,"abstract":"Do large language models (LLMs) learn like people do? We investigate this question with a simple task that compares human learning and LLM finetuning on the same set of novel inputs. We find that while humans learn and generalize robustly, finetuned LLMs largely fail to generalize from what they learned and are more influenced by prior expectations than humans are. We then analyze human solutions of our task and find that stronger performance is characterized by the proactive formation of efficient representations that aid learning and generalization. Although LLMs can use in-context learning to match the performance of humans who do not form these representations, and can use similar representations provided in-context to match the performance of those who do, they do not form these representations on their own. Given these findings, we then consider how future theories of human learning might be built in the age of LLMs.","language":"eng","license":{"name":"","short_name":"","text":null,"url":""},"keywords":[{"word":"Artificial Intelligence; Learning; Reasoning; Representation; Neural Networks"}],"section":"Papers with Poster Presentation","is_remote":true,"remote_url":"https://escholarship.org/uc/item/47c985zr","frozenauthors":[{"first_name":"Simon","middle_name":"Jerome","last_name":"Han","name_suffix":"","institution":"Stanford University","department":""},{"first_name":"Jay","middle_name":"","last_name":"McClelland","name_suffix":"","institution":"Stanford University","department":""}],"date_submitted":null,"date_accepted":null,"date_published":"2025-01-01T18:00:00Z","render_galley":null,"galleys":[{"label":"PDF","type":"pdf","path":"https://journalpub.escholarship.org/cognitivesciencesociety/article/49656/galley/37618/download/"}]}