{"pk":50410,"title":"Additive Analogies Reveal Compositional Structure in Neural Network Weights","subtitle":null,"abstract":"A central question in cognitive science is how to reconcile connectionist and symbolic models of the mind (e.g., Fodor &amp; Pylyshyn 1988, Smolensky &amp; Legendre 2006). Attempts have been made to bridge these competing schools of thought by showing how compositional structure can emerge in continuous vector representations (e.g., Manning et al. 2020). A key example is Mikolov et al. (2013), who demonstrated that word embeddings learned by a neural network encode semantic structure: subtracting the vector \"man\" from \"king\" and adding \"woman\" approximates \"queen\" (i.e., king - man + woman â‰ˆ queen). Our work moves up one level of abstraction, from representations to functions. We analyze whether entire networks display emergent compositional structure by treating a trained network as a single vector (obtained by concatenating the network's parameters) encoding its function. We show that these parameter vectors can be recomposed through simple additive analogies to create networks with new functions.","language":"eng","license":{"name":"","short_name":"","text":null,"url":""},"keywords":[{"word":"Artificial Intelligence; Neural Networks"}],"section":"Member Abstracts with Poster Presentation","is_remote":true,"remote_url":"https://escholarship.org/uc/item/60w3r1p6","frozenauthors":[{"first_name":"Abi","middle_name":"","last_name":"Tenenbaum","name_suffix":"","institution":"Yale University","department":""},{"first_name":"R. Thomas","middle_name":"","last_name":"McCoy","name_suffix":"","institution":"Yale University","department":""}],"date_submitted":null,"date_accepted":null,"date_published":"2025-01-01T18:00:00Z","render_galley":null,"galleys":[{"label":"PDF","type":"pdf","path":"https://journalpub.escholarship.org/cognitivesciencesociety/article/50410/galley/38372/download/"}]}