Recent advancements in Large Language Models (LLMs) have transformed natural language understanding and generation, leading to extensive benchmarking across diverse tasks. However, cryptanalysis—a critical area for data security and encryption—has not yet been thoroughly explored in LLM evaluations. To address this gap, we present the first study evaluating the cryptanalytic potential of state-of-the-art LLMs on encrypted texts generated using a range of cryptographic algorithms. We introduce a novel benchmark dataset comprising diverse plaintexts—spanning various domains, lengths, writing styles, and topics—paired with their encrypted versions. Using a zero-shot approach, we assess multiple LLMs for both decryption accuracy and semantic comprehension across different encryption schemes. Our findings reveal key insights into the strengths and limitations of LLMs in side-channel communication, while raising concerns about their susceptibility to jailbreaking attacks. This research highlights the dual-use nature of LLMs in security contexts and contributes to the ongoing discussion on AI safety and security.
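The abstract does not specify the encryption schemes, prompt wording, or scoring metric, so the following is only a minimal sketch of the kind of pipeline described: pairing plaintexts with ciphertexts, issuing a zero-shot decryption prompt, and scoring decryption accuracy. The Caesar cipher, the prompt text, the character-level accuracy metric, and the `evaluate` / `echo_model` names are all illustrative assumptions, not the paper's actual setup.

```python
from typing import Callable, List, Tuple


def caesar_encrypt(plaintext: str, shift: int = 3) -> str:
    """Fixed-shift Caesar cipher (an assumed stand-in for the paper's cipher suite)."""
    out = []
    for ch in plaintext:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)  # non-letters pass through unchanged
    return "".join(out)


def build_pairs(plaintexts: List[str], shift: int = 3) -> List[Tuple[str, str]]:
    """Pair each plaintext with its ciphertext, mirroring the benchmark construction."""
    return [(pt, caesar_encrypt(pt, shift)) for pt in plaintexts]


def zero_shot_prompt(ciphertext: str) -> str:
    """Generic zero-shot instruction; the paper's exact prompt is not given."""
    return (
        "The following text was produced by a classical substitution cipher. "
        f"Recover the original plaintext.\n\nCiphertext: {ciphertext}\n\nPlaintext:"
    )


def char_accuracy(reference: str, hypothesis: str) -> float:
    """Character-level accuracy as a simple proxy for decryption accuracy."""
    if not reference:
        return 0.0
    matches = sum(r == h for r, h in zip(reference, hypothesis))
    return matches / max(len(reference), len(hypothesis))


def evaluate(llm: Callable[[str], str], plaintexts: List[str], shift: int = 3) -> float:
    """Mean decryption accuracy of an LLM callable over the benchmark pairs."""
    pairs = build_pairs(plaintexts, shift)
    scores = [char_accuracy(pt, llm(zero_shot_prompt(ct))) for pt, ct in pairs]
    return sum(scores) / len(scores)


if __name__ == "__main__":
    samples = ["Meet at the old bridge at dawn.", "The invoice was paid on Friday."]
    # Placeholder "model" that echoes the ciphertext back (a trivial lower bound);
    # a real LLM API call would be substituted here to reproduce the zero-shot setting.
    echo_model = lambda prompt: prompt.rsplit("Ciphertext: ", 1)[1].split("\n")[0]
    print(f"Echo baseline accuracy: {evaluate(echo_model, samples):.2f}")
```

In this sketch the LLM is passed in as a plain callable so the scoring logic stays independent of any particular provider; the echo baseline only illustrates the harness and is not a result reported by the paper.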