Emergent Spatio-Semantic Structure in Large Language Model Embedding Spaces

This is a Preprint and has not been peer reviewed. This is version 1 of this Preprint.

Authors

Joseph Shingleton, Yunus Serhat Bicakci, Yu Wang, Ana Basiri

Abstract

Large Language Models (LLMs) are increasingly used in geospatial applications, typically as generators of geographic text or as natural language interfaces to spatial data. Here, we explore whether LLM embedding spaces can instead function as geospatial representations that can be exploited directly. Using embeddings extracted from Airbnb property descriptions in London, we show that off-the-shelf LLM embeddings exhibit emergent spatial structure. We further demonstrate that a lightweight residual geo-adapter substantially sharpens this spatial signal, enabling approximate localisation even when explicit geographic references are removed, while preserving the semantic relationships learned during LLM pre-training. These results suggest a path toward spatially explicit foundation models that operate over the spatio-semantic embedding space, rather than over generated text.
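The "residual geo-adapter" named in the abstract can be read as a small learned transformation added to a frozen LLM embedding, so the output stays close to the original vector (preserving pre-trained semantics) while a learned offset injects spatial signal. The sketch below illustrates that residual pattern only; the dimensions, the two-layer MLP, and all weights are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d_embed, d_hidden = 8, 4  # toy sizes for illustration

# Randomly initialised adapter weights (in practice these would be trained
# so the offset encodes location-consistent structure).
W1 = rng.normal(scale=0.01, size=(d_embed, d_hidden))
W2 = rng.normal(scale=0.01, size=(d_hidden, d_embed))

def geo_adapt(e: np.ndarray) -> np.ndarray:
    """Residual adapter: output = input + small learned correction."""
    h = np.maximum(W1.T @ e, 0.0)  # ReLU hidden layer
    return e + W2.T @ h            # residual connection keeps e dominant

embedding = rng.normal(size=d_embed)  # stand-in for an LLM text embedding
adapted = geo_adapt(embedding)

# With small adapter weights, the adapted vector stays near the original,
# which is the property that lets semantics survive the adaptation.
shift = np.linalg.norm(adapted - embedding) / np.linalg.norm(embedding)
print(f"relative shift: {shift:.4f}")
```

The key design choice is the additive form: because the adapter contributes only a correction term, an untrained (or lightly trained) adapter is near the identity, so semantic neighbourhood structure from pre-training is retained by construction.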

DOI

https://doi.org/10.31223/X5FQ93

Subjects

Artificial Intelligence and Robotics, Geographic Information Sciences

Keywords

Large Language Models, Semantic Embeddings, Natural Language Processing, Geospatial Artificial Intelligence

Dates

Published: 2026-02-24 08:46

Last Updated: 2026-02-24 08:46

License

CC BY Attribution 4.0 International

Additional Metadata

Data Availability (Reason not available):
Will be made available after peer review.
