
Prasanna Parthasarathi
TOPICS
information retrieval, large language models, hallucination, data augmentation, reasoning, evaluation, story, benchmark, NLP, few-shot learning, NLI, QA, IR, text classification
SHORT BIO
Prasanna Parthasarathi is a Senior Researcher at Huawei Noah's Ark Lab, Montreal. Prasanna received his Ph.D. in Computer Science from McGill University, where he was also affiliated with Mila, the Quebec AI Institute. His research interests include probing tasks for neural language models, dialogue systems, optimization, and instruction-following reinforcement learning. Prasanna has worked as a Research Intern at Facebook AI Research (Montreal) and Google Brain (Mountain View and Montreal). He co-authored a paper that received an outstanding paper award at ACL 2021. He co-organized the Novel Ideas in Learning to Learn through Interaction workshop at EMNLP (2021-2023), and also serves on the program committees of the NeurIPS, AAAI, ACL, EACL, EMNLP, NAACL, and COLING conferences.
Presentations

Context-Aware Assistant Selection for Improved Inference Acceleration with Large Language Models
Jerry Huang and 3 other authors

Do Large Language Models Know How Much They Know?
Gabriele Prato and 4 other authors

EWEK-QA: Enhanced Web and Efficient Knowledge Graph Retrieval for Citation-based Question Answering Systems
Mohammad Dehghan and 14 other authors

EpiK-Eval: Evaluation for Language Models as Epistemic Models
Gabriele Prato and 4 other authors

Sometimes We Want Ungrammatical Translations
Prasanna Parthasarathi

UnNatural Language Inference
Koustuv Sinha and 3 other authors