{"pk":29806,"title":"Generalizing Outside the Training Set: When Can Neural Networks Learn Identity Effects?","subtitle":null,"abstract":"Often in language and other areas of cognition, whether two components of an object are identical or not determines whether it is well formed. We call such constraints identity effects. When developing a system to learn well-formedness from examples, it is easy enough to build in an identity effect. But can identity effects be learned from the data without explicit guidance? We provide a simple framework in which we can rigorously prove that algorithms satisfying simple criteria cannot make the correct inference. We then show that a broad class of algorithms, including deep neural networks with standard architecture and training with backpropagation, satisfy our criteria, dependent on the encoding of inputs. Finally, we demonstrate our theory with computational experiments in which we explore the effect of different input encodings on the ability of algorithms to generalize to novel inputs.","language":"eng","license":{"name":"","short_name":"","text":null,"url":""},"keywords":[{"word":"identity effects"},{"word":"machine learning"},{"word":"neural networks"},{"word":"generalization"}],"section":"Poster Session 2","is_remote":true,"remote_url":"https://escholarship.org/uc/item/7g08r13k","frozenauthors":[{"first_name":"Simone","middle_name":"","last_name":"Brugiapaglia","name_suffix":"","institution":"Concordia University","department":""},{"first_name":"Matthew","middle_name":"","last_name":"Liu","name_suffix":"","institution":"Concordia University","department":""},{"first_name":"Paul","middle_name":"","last_name":"Tupper","name_suffix":"","institution":"Simon Fraser University","department":""}],"date_submitted":null,"date_accepted":null,"date_published":"2020-01-01T18:00:00Z","render_galley":null,"galleys":[{"label":"PDF","type":"pdf","path":"https://journalpub.escholarship.org/cognitivesciencesociety/article/29806/galley/19660/download/"}]}