Alerts generated by Security Operations Centers (SOCs) are often numerous and scattered, requiring significant effort from security analysts to manage and severely slowing response times. While recent alert correlation graph methods can effectively reduce alert volume, the resulting graphs are often too complex for analysts to interpret. As a result, analysts increasingly seek ways to automatically correlate alerts and generate concise, human-readable attack path summaries. Recently, Large Language Models (LLMs) have demonstrated superior performance thanks to their broad knowledge and strong reasoning capabilities. In this work, we propose GARNET, a framework that uses LLMs to reason over alert correlation graphs. GARNET addresses three key technical challenges: 1) modality alignment between alert graphs and logs; 2) semantic alignment between alert graphs and logs; 3) enabling LLM reasoning along graph paths. Specifically, we first project graph and log embeddings into a shared vector space using contrastive learning. We then design self-supervised graph-log instructions to bridge the semantic gap between graphs and logs by training a novel LLM. Finally, GARNET uses a Graph-of-Thought (GoT)-based interactive reasoning approach to guide the LLM along graph paths, ultimately generating structured, concise, and human-readable attack path summaries. Experimental results across six attack scenarios show that GARNET reduces false positives by an average of 80%, lowering the false positive rate to below 0.0037. It outperforms the latest approaches and provides more explainable attribution.
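The abstract's first step, projecting graph and log embeddings into a shared vector space with contrastive learning, can be illustrated with a minimal sketch. The paper does not specify its loss; the symmetric InfoNCE objective below (matched graph/log pairs as positives, all other in-batch pairings as negatives) is a common choice and is purely a hypothetical stand-in for GARNET's actual training objective, with made-up toy embeddings.

```python
import numpy as np

def info_nce_loss(graph_emb, log_emb, temperature=0.07):
    """Symmetric InfoNCE: the i-th graph embedding and the i-th log
    embedding are a positive pair; every other in-batch pairing is a
    negative. Hypothetical sketch, not GARNET's actual objective."""
    # L2-normalize so dot products become cosine similarities
    g = graph_emb / np.linalg.norm(graph_emb, axis=1, keepdims=True)
    l = log_emb / np.linalg.norm(log_emb, axis=1, keepdims=True)
    logits = g @ l.T / temperature  # (batch, batch) similarity matrix
    idx = np.arange(len(g))

    def xent(z):
        # numerically stable cross-entropy with the diagonal as target
        z = z - z.max(axis=1, keepdims=True)
        logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    # average the graph->log and log->graph directions
    return (xent(logits) + xent(logits.T)) / 2

# Toy batch: 3 "graph" embeddings paired with 3 "log" embeddings
rng = np.random.default_rng(0)
g = rng.normal(size=(3, 8))
loss_random = info_nce_loss(g, rng.normal(size=(3, 8)))   # unrelated logs
loss_aligned = info_nce_loss(g, g + 0.01 * rng.normal(size=(3, 8)))
print(loss_aligned < loss_random)
```

Minimizing such a loss pulls each alert-graph embedding toward the embedding of its corresponding logs while pushing it away from unrelated logs, which is the modality-alignment effect the abstract describes; in the toy run above, the near-duplicate "aligned" pairs yield a lower loss than random pairings.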
