
Carl Yang
self-training
large language model
bi-encoder
clinical nlp
text classification
information retrieval
semi-supervised learning
graph classification
sentence similarity
retrieval augmented generation
app: misinformation & fake news
dmkm: graph mining
social network analysis & community
ml: neuro-symbolic learning
SHORT BIO
Prof. Yang is an Assistant Professor of Computer Science at Emory University. Before that, he received his Ph.D. in Computer Science from the University of Illinois Urbana-Champaign, where he worked in the Data Mining Group led by Prof. Jiawei Han. Prior to that, he received his B.Eng. in Computer Science in 2014 from the Chu Kochen Honors College of Zhejiang University, where he worked in the State Key Lab of CAD&CG under Prof. Xiaofei He.
His research interests lie in graph data mining, applied machine learning, knowledge graphs, and federated learning, as well as their applications in recommender systems, social networks, neuroscience, and healthcare.
Presentations

EHRAgent: Code Empowers Large Language Models for Few-shot Complex Tabular Reasoning on Electronic Health Records
Wenqi Shi and 9 other authors

BMRetriever: Tuning Large Language Models as Better Biomedical Text Retrievers
Ran Xu and 8 other authors

MedAdapter: Efficient Test-Time Adaptation of Large Language Models Towards Medical Reasoning
Wenqi Shi and 7 other authors

Unveiling Implicit Deceptive Patterns in Multi-Modal Fake News via Neuro-Symbolic Reasoning
Yiqi Dong and 6 other authors

Neighborhood-Regularized Self-Training for Learning with Few Labels
Ran Xu and 7 other authors