
Fei Mi

Topics: large language models, dataset, pre-training, continual learning, pre-trained language model, information retrieval, dialog state tracking, dialog systems, natural language generation, climate change, multimodality, dialogue generation, hallucinations, prompting, benchmark

23 presentations · 3 views

SHORT BIO

Fei Mi is a Senior Researcher at Huawei Noah's Ark Lab. Prior to that, he obtained his PhD from EPFL and his Master's from HKUST. His research interests focus on NLP, including dialog systems, few-shot learning, and pre-trained models.

Presentations

CoSafe: Evaluating Large Language Model Safety in Multi-Turn Dialogue Coreference

Erxin Yu and 6 other authors

FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models

Yuxin Jiang and 9 other authors

Dynamic Stochastic Decoding Strategy for Open-Domain Dialogue Generation

Yiwei Li and 6 other authors

Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization

Zhexin Zhang and 5 other authors

Enhancing Large Language Models Against Inductive Instructions with Dual-critique Prompting

Rui Wang and 6 other authors

Cue-CoT: Chain-of-thought Prompting for Responding to In-depth Dialogue Questions with LLMs

Hongru Wang and 7 other authors

Large Language Models as Source Planner for Personalized Knowledge-grounded Dialogues

Hongru Wang and 9 other authors

Improving Factual Consistency for Knowledge-Grounded Dialogue Systems via Knowledge Enhancement and Alignment

Boyang Xue and 9 other authors

ReSee: Responding through Seeing Fine-grained Visual Knowledge in Open-domain Dialogue

Haoqin Tu and 3 other authors

MoralDial: A Framework to Train and Evaluate Moral Dialogue Systems via Moral Discussions

Hao Sun and 8 other authors

DecompEval: Evaluating Generated Texts as Unsupervised Decomposed Question Answering

Pei Ke and 6 other authors

A Synthetic Data Generation Framework for Grounded Dialogues

Jianzhu Bao and 6 other authors

Retrieval-free Knowledge Injection through Multi-Document Traversal for Dialogue Models

Rui Wang and 9 other authors

One Cannot Stand for Everyone! Leveraging Multiple User Simulators to Train Task-oriented Dialogue Systems

Yajiao Liu and 7 other authors

Towards Fewer Hallucinations in Knowledge-Grounded Dialogue Generation via Augmentative and Contrastive Knowledge-Dialogue

Bin Sun and 5 other authors

KPT: Keyword-guided Pre-training for Grounded Dialog Generation

Qi Zhu and 8 other authors
