keywords:
geospatial-ai
ai-safety
llm
The rapid progress of generative AI (Gen-AI) and large language models (LLMs) offers significant potential for geospatial applications, but it simultaneously introduces critical privacy, security, and ethical risks. Existing general-purpose AI safety frameworks inadequately cover GeoAI-specific risks such as geolocation privacy violations and re-identification, with False Safe Rates exceeding 40\% in some models. To address this gap, we present $\texttt{GeoSAFE}$ (Geospatial Safety Assurance Framework and Evaluation), introducing the first GeoAI-specific safety taxonomy, comprising six hazard categories, together with a multimodal $\texttt{GeoSAFE-Dataset}$. The dataset includes 11,694 textual prompts with explanations, augmented by real-world queries and images to reduce synthetic bias and reflect operational use. We benchmark how well existing models detect $\texttt{unsafe}$ geospatial queries. Additionally, we present $\texttt{GeoSAFEGuard}$, an instruction-tuned LLM that achieves a 4.6\% False Safe Rate, a 0.4\% False Unsafe Rate, and a 97\% F1-score on text-to-text evaluation of the $\texttt{GeoSAFE-Dataset}$. An anonymous user survey confirms human alignment with $\texttt{GeoSAFE}$, underscoring the urgent need for domain-specific safety evaluations, as general-purpose LLMs fail to detect unsafe location-powered queries.
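The abstract reports False Safe Rate, False Unsafe Rate, and F1-score without defining them. A minimal sketch of how such metrics are typically computed for a binary safe/unsafe classifier is shown below; the exact definitions used by the paper are an assumption here (we take "unsafe" as the positive class, so a "false safe" is a missed unsafe query and a "false unsafe" is a safe query wrongly flagged).

```python
def safety_metrics(tp: int, fp: int, tn: int, fn: int):
    """Assumed metric definitions for a safe/unsafe classifier.

    tp: unsafe queries correctly flagged unsafe
    fn: unsafe queries judged safe (the dangerous "false safe" case)
    fp: safe queries wrongly flagged unsafe
    tn: safe queries correctly passed
    """
    false_safe_rate = fn / (fn + tp)      # share of unsafe queries missed
    false_unsafe_rate = fp / (fp + tn)    # share of safe queries over-blocked
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return false_safe_rate, false_unsafe_rate, f1
```

Under these assumed definitions, the reported 4.6\% False Safe Rate means fewer than one in twenty unsafe geospatial queries slip past $\texttt{GeoSAFEGuard}$, while the 0.4\% False Unsafe Rate indicates benign queries are rarely over-blocked.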
