{"pk":32716,"title":"An Entropy Model of Artificial Grammar Learning","subtitle":null,"abstract":"We propose a model to characterize the type of knowledge acquired in Artificial Grammar Learning (AGL). In particular, we suggest a way to compute the complexity of different test items in an AGL task, relative to the training items, based on the notion of Shannon entropy: The more predictable a test item is from the training items, the higher the likelihood that it will be selected as compatible with the training items. Our model is an attempt to formalize some aspects of inductive inference by providing a quantitative measure of the knowledge abstracted from experience. We motivate our particular approach from research in reasoning and categorization, where reduction of entropy has also been seen as a plausible cognitive objective. This suggests that reducing (Shannon) uncertainty may provide a single explanatory framework for modeling aspects of cognition as diverse as learning, reasoning, and categorization.","language":"eng","license":{"name":"","short_name":"","text":null,"url":""},"keywords":[],"section":"Long Papers","is_remote":true,"remote_url":"https://escholarship.org/uc/item/19v426x9","frozenauthors":[{"first_name":"Emmanuel","middle_name":"M.","last_name":"Pothos","name_suffix":"","institution":"School of Psychology, Bangor","department":""},{"first_name":"Todd","middle_name":"M.","last_name":"Bailey","name_suffix":"","institution":"Department of Experimental Psychology, Oxford","department":""}],"date_submitted":null,"date_accepted":null,"date_published":"1999-01-01T18:00:00Z","render_galley":null,"galleys":[{"label":"PDF","type":"pdf","path":"https://journalpub.escholarship.org/cognitivesciencesociety/article/32716/galley/23779/download/"}]}