
Rishav Hada
Research Fellow @ Microsoft Research India
Topics: evaluation, multilinguality, LLMs, NLP, tokenization, gender bias, offensive and abusive language detection, meta-evaluation, GPT, contamination, datasets
Short Bio
Rishav Hada is a Research Fellow at Microsoft Research India. He works at the intersection of Natural Language Processing, Computational Social Science, and Fairness and Transparency in AI. Specifically, he is interested in understanding how language use can reveal information about individuals and communities, and how to integrate this knowledge into neural models to develop socially inclusive AI applications. He has conducted research on topics including offensive language detection, measuring viewpoint diversity, and identifying gender biases in datasets and models. Rishav is motivated to address the limitations of existing methods and to develop new strategies for dataset evaluation that promote careful curation and help mitigate social biases.
Presentations

METAL: Towards Multilingual Meta-Evaluation
Rishav Hada and 4 other authors

MEGAVERSE: Benchmarking Large Language Models Across Languages, Modalities, Models and Tasks
Sanchit Ahuja and 10 other authors

Are Large Language Model-based Evaluators the Solution to Scaling Up Multilingual Evaluation?
Rishav Hada and 7 other authors

MEGA: Multilingual Evaluation of Generative AI
Kabir Ahuja and 11 other authors

“Fifty Shades of Bias”: Normative Ratings of Gender Bias in GPT Generated English Text
Rishav Hada and 3 other authors

Ruddit: Norms of Offensiveness for English Reddit Comments
Rishav Hada and 1 other author