Steven Y. Feng

PhD Student in Computer Science, Stanford University, Stanford, United States of America

Topics: language models, data augmentation, text generation, methods, generation, NLP, survey, challenges, tasks, techniques, applications, future directions, seq2seq, healthcare, inference efficiency

8 presentations · 2 citations

SHORT BIO

I'm a Stanford Computer Science PhD student and NSERC PGS-D scholar, working with the Stanford AI Lab and Stanford NLP Group. I am co-advised by Michael C. Frank and Noah Goodman as part of the Language & Cognition (LangCog) and Computation & Cognition (CoCo) Labs. I am grateful to receive support from Google DeepMind, Amazon Science, Microsoft AFMR, and StabilityAI.

My ultimate goal is to blend knowledge from multiple disciplines to advance AI research. My current research centers on aligning foundation models with human learning and capabilities, particularly in reasoning, generalization, and efficiency. I have explored ways to improve the controllability of language and visual generation models and to integrate structured and multimodal information that enhances their reasoning capabilities.

I'm investigating psychologically and cognitively inspired methods for continual learning, self-improvement, and advanced reasoning in foundation models. I'm also exploring methods to bridge the data efficiency gap between human and model learning while shedding further light on human cognitive models and our efficient language acquisition capabilities.

Previously, I was a master's student at Carnegie Mellon University (CMU), where I worked with Eduard Hovy and Malihe Alikhani on language generation, data augmentation, and commonsense reasoning. Before that, I was an undergraduate student at the University of Waterloo, where I worked with Jesse Hoey on dialogue agents and text generation.

My research contributions have been recognized with several publications at major conferences and a best paper award at INLG 2021. I also received Honorable Mentions for the Jessie W.H. Zou Memorial Award and the CRA Outstanding Undergraduate Researcher Award.

I am a co-instructor for the Stanford CS25 Transformers course, and I mentor and advise several students. I also led the organization of CtrlGen, a controllable generation workshop at NeurIPS 2021, and was involved in the GEM benchmark and workshop for NLG evaluation.

In my free time, I enjoy gaming, playing the piano and guitar, singing, dancing, martial arts, and table tennis. I am also the founder and president of the Stanford Piano Society.

Presentations

Is Child-Directed Speech Effective Training Data for Language Models?

Steven Y. Feng and 2 other authors

CHARD: Clinical Health-Aware Reasoning Across Dimensions for Text Generation Models

Steven Y. Feng and 4 other authors

PANCETTA: Phoneme Aware Neural Completion to Elicit Tongue Twisters Automatically

Sedrick Keh and 4 other authors

PINEAPPLE: Personifying INanimate Entities by Acquiring Parallel Personification data for Learning Enhanced generation

Sedrick Keh and 6 other authors

Retrieve, Caption, Generate: Visual Grounding for Enhancing Commonsense in Text Generation Models

Steven Y. Feng and 6 other authors

NAREOR: The Narrative Reordering Problem

Varun Gangal and 4 other authors

A Survey of Data Augmentation Approaches for NLP

Steven Y. Feng
