William W. Cohen

Topics: question answering, retrieval, knowledge bases, cross-lingual, summarization, factuality, interpretability, language model, knowledge representation, representation learning, explainable ai (xai), temporal modeling, post-editing, dictionary

13 presentations · 22 views

SHORT BIO

William Cohen is a Principal Scientist at Google, based in Google's Pittsburgh office. He received his bachelor's degree in Computer Science from Duke University in 1984 and a PhD in Computer Science from Rutgers University in 1990. From 1990 to 2000, Dr. Cohen worked at AT&T Bell Labs and later AT&T Labs-Research, and from April 2000 to May 2002 he worked at Whizbang Labs, a company specializing in extracting information from the web. From 2002 to 2018, Dr. Cohen worked at Carnegie Mellon University in the Machine Learning Department, with a joint appointment in the Language Technology Institute, as an Associate Research Professor, a Research Professor, and a Professor. He was also the Director of the Undergraduate Minor in Machine Learning at CMU and co-Director of the Master of Science in ML Program.

Presentations

MEMORY-VQ: Compression for Tractable Internet-Scale Memory

Yury Zemlyanskiy and 6 other authors

Beyond Contrastive Learning: A Variational Generative Model for Multilingual Retrieval

John Wieting and 4 other authors

WinoDict: Probing language models for in-context word acquisition

Julian Eisenschlos and 3 other authors

Augmenting Pre-trained Language Models with QA-Memory for Open-Domain Question Answering

Wenhu Chen and 4 other authors

QA is the New KR: Question-Answer Pairs as Knowledge Bases

William W. Cohen and 6 other authors

Correcting Diverse Factual Errors in Abstractive Summarization via Post-Editing and Language Model Infilling

Vidhisha Balachandran and 3 other authors

Time-Aware Language Models as Temporal Knowledge Bases

Bhuwan Dhingra and 5 other authors

Evaluating Explanations: How Much do Explanations from the Teacher aid Students?

Danish Pruthi and 7 other authors

Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations

Siddhant Arora and 5 other authors

MATE: Multi-view Attention for Table Transformer Efficiency

Julian Eisenschlos and 3 other authors

Adaptable and Interpretable Neural Memory Over Symbolic Knowledge

Pat Verga and 3 other authors

Differentiable Open-Ended Commonsense Reasoning

Bill Yuchen Lin and 5 other authors
