
Hinrich Schütze

Professor @ University of Munich

language models

cross-lingual transfer

bias

few-shot learning

natural language processing

pretrained language models

prompts

tokenization

prompting

self-attention

reddit

low-resource languages

crosslingual

convolution

social network analysis

18 presentations · 21 views

SHORT BIO

Hinrich Schütze is professor of computational linguistics and director of the Center for Information and Language Processing at LMU Munich in Germany. Before moving to Munich in 2013, he taught at the University of Stuttgart. He received his PhD in Computational Linguistics from Stanford University in 1995 and worked on natural language processing and information retrieval technology at Xerox PARC, at several Silicon Valley startups, and at Google from 1995 to 2004 and in 2008-09. He is a coauthor of Foundations of Statistical Natural Language Processing (with Chris Manning) and Introduction to Information Retrieval (with Chris Manning and Prabhakar Raghavan).

Presentations

A Crosslingual Investigation of Conceptualization in 1335 Languages

Yihong Liu and 6 other authors

Does Manipulating Tokenization Aid Cross-Lingual Transfer? A Study on POS Tagging for Non-Standardized Languages

Verena Blaschke and 2 other authors

Hengam: An Adversarially Trained Transformer for Persian Temporal Tagging

Amir Hossein Kargaran and 3 other authors

Modeling Ideological Salience and Framing in Polarized Online Groups with Graph Neural Networks and Structured Sparsity

Valentin Hofmann and 2 other authors

The Reddit Politosphere: A Large-Scale Text and Network Resource of Online Political Discourse

Valentin Hofmann and 2 other authors

Measuring and Improving Consistency in Pretrained Language Models

Yanai Elazar and 6 other authors

Continuous Entailment Patterns for Lexical Inference in Context

Martin Schmitt and 1 other author

Few-Shot Text Generation with Natural Language Instructions

Timo Schick and 1 other author

Generating Datasets with Pretrained Language Models

Timo Schick and 1 other author

Increasing Learning Efficiency of Self-Attention Networks through Direct Position Interactions, Learnable Temperature, and Convoluted Attention

Philipp Dufter and 2 other authors

