EMNLP 2025

November 06, 2025

Suzhou, China


In this paper, we propose TrInk, a Transformer-based model for ink generation that enables parallel training and better captures global dependencies. To facilitate alignment between the input text and the generated stroke points, we introduce scaled positional embeddings and a Gaussian memory mask in the cross-attention module. Additionally, we design both subjective and objective evaluation pipelines to comprehensively assess the legibility and style consistency of the generated handwriting. Experiments demonstrate that our Transformer-based model achieves a 35.56% reduction in character error rate (CER) and a 29.66% reduction in word error rate (WER) on the IAM-OnDB dataset compared to previous methods. We provide an online demo page with handwriting samples from TrInk and baseline models at: https://akahello-a11y.github.io/trink-demo/
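To make the two alignment mechanisms concrete, below is a minimal PyTorch sketch of what scaled positional embeddings and a Gaussian memory mask over cross-attention logits could look like. This is an illustrative reconstruction, not the paper's implementation: the `scale` factor, the linear stroke-to-character alignment for the mask centers, and the bandwidth `sigma` are all assumptions; TrInk's exact parameterization may differ.

```python
import math
import torch

def scaled_positional_embedding(seq_len: int, d_model: int, scale: float = 1.0) -> torch.Tensor:
    """Standard sinusoidal positional embeddings multiplied by a scale factor.

    `scale` is a hypothetical knob: scaling positional embeddings can help
    match the magnitude of short text sequences against much longer stroke
    sequences; the paper's exact formulation is not reproduced here.
    """
    position = torch.arange(seq_len).unsqueeze(1)  # (seq_len, 1)
    div_term = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(position * div_term)
    pe[:, 1::2] = torch.cos(position * div_term)
    return scale * pe

def gaussian_memory_mask(tgt_len: int, src_len: int, sigma: float = 2.0) -> torch.Tensor:
    """Additive bias for cross-attention logits that softly encourages each
    decoder (stroke) step to attend near a monotonically advancing text position.

    Assumes a linear alignment: stroke step i is centered on character
    position i * src_len / tgt_len. Returned shape: (tgt_len, src_len).
    """
    centers = torch.arange(tgt_len).float() * (src_len / tgt_len)   # (tgt_len,)
    positions = torch.arange(src_len).float()                       # (src_len,)
    dist2 = (positions.unsqueeze(0) - centers.unsqueeze(1)) ** 2    # (tgt_len, src_len)
    return -dist2 / (2.0 * sigma ** 2)  # added to logits before softmax

# Usage sketch: bias stroke-point queries toward nearby character keys.
q_len, k_len = 400, 20                          # e.g. 400 stroke points, 20 characters
logits = torch.randn(q_len, k_len)              # stand-in for Q @ K^T / sqrt(d)
attn = torch.softmax(logits + gaussian_memory_mask(q_len, k_len), dim=-1)
```

Because the bias is added to the logits rather than hard-masking them, attention can still reach distant characters when the evidence is strong; the Gaussian only makes roughly monotonic text-to-stroke alignments the path of least resistance.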


