VIDEO DOI: https://doi.org/10.48448/te7k-6m61

poster

ACL 2024

August 13, 2024

Bangkok, Thailand

Transferable Embedding Inversion Attack: Uncovering Privacy Risks in Text Embeddings without Model Queries

Keywords: inversion attack, privacy risk, language models

This study investigates the privacy risks associated with text embeddings, focusing on the scenario where attackers cannot access the original embedding model. Unlike previous research, which requires direct model access, we explore a more realistic threat model by developing a transfer attack method. This approach uses a surrogate model to mimic the victim model's behavior, allowing the attacker to infer sensitive information from text embeddings without direct access. Our experiments across various embedding models and a clinical dataset demonstrate that our transfer attack significantly outperforms traditional methods, revealing potential privacy vulnerabilities in embedding technologies and underscoring the need for stronger security measures.
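
As a rough illustration of the transfer idea, the sketch below aligns a victim model's embedding space to a locally controlled surrogate with a simple least-squares mapping, so that an inversion decoder trained only on the surrogate could be applied to leaked victim embeddings. Everything here is a hypothetical stand-in, not the paper's procedure: the random "embeddings" simulate two encoders, the dimensions are arbitrary, the linear alignment is an illustrative substitute for whatever alignment the attack actually learns, and the setup assumes the attacker holds a small leaked sample of paired (text, victim-embedding) data.

```python
import numpy as np

rng = np.random.default_rng(0)
d_srg, d_vic = 384, 768
n_pairs, n_test = 1024, 8  # leaked paired sample / unseen victim embeddings

# Surrogate-space embeddings (placeholder random features; in practice,
# outputs of a locally hosted sentence encoder the attacker controls).
Z_srg = rng.normal(size=(n_pairs + n_test, d_srg))

# Simulated victim embeddings of the same texts: an unknown linear map
# plus noise stands in for two encoders trained on similar corpora.
A_true = rng.normal(size=(d_srg, d_vic)) / np.sqrt(d_srg)
Z_vic = Z_srg @ A_true + 0.01 * rng.normal(size=(n_pairs + n_test, d_vic))

# Step 1: from the leaked paired sample, learn a victim-to-surrogate
# alignment by ordinary least squares (no victim-model queries needed).
W, *_ = np.linalg.lstsq(Z_vic[:n_pairs], Z_srg[:n_pairs], rcond=None)

# Step 2: map previously unseen victim embeddings into the surrogate
# space; an inversion decoder trained purely on the surrogate (omitted
# here) would then reconstruct text from the mapped embeddings.
z_mapped = Z_vic[n_pairs:] @ W
rel_err = np.linalg.norm(z_mapped - Z_srg[n_pairs:]) / np.linalg.norm(Z_srg[n_pairs:])
print(f"relative alignment error on held-out embeddings: {rel_err:.3f}")
```

The point of the sketch is the division of labor: the expensive inversion model is trained entirely against the attacker's own surrogate, and only a lightweight alignment bridges the two embedding spaces, which is what lets the attack transfer without querying the victim.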

Downloads

Slides
Transcript English (automatic)
