Hongli Zhan | 詹弘立
honglizhan@utexas.edu

Office: RLP 4.108
Arrogance is a sign of ignorance.
I am a Ph.D. candidate in Computational Linguistics at The University of Texas at Austin, where I’m blessed to be advised by Professor Junyi Jessy Li.
The ambition of my Ph.D. research is to build emotionally intelligent AI systems in a broader social context (see my first-authored publications at EMNLP 2022, ACL 2023, EMNLP 2023 Findings, COLM 2024). During my industry internships, I have also worked on aligning language models (see my recent first-authored work at ICML 2025).
My research contributes to IBM’s Granite Guardian models.
Casually, I go by Henry.
I am on the job market this year and am actively seeking industry Research Scientist roles!
Education
Ph.D. in Computational Linguistics, 2021 – Present
The University of Texas at Austin
⁃ Advisor: Dr. Junyi Jessy Li
B.A. in English Linguistics, 2017 – 2021
Shanghai Jiao Tong University
⁃ Awards: Outstanding Undergraduate; Outstanding Undergraduate Thesis Award
Industry Experience
Research Scientist Intern, IBM Research, Summer 2025
IBM Thomas J. Watson Research Center, Yorktown Heights, NY
⁃ Manager: Dr. Raya Horesh; Mentors: Dr. Muneeza Azmat & Dr. Pin-Yu Chen
Research Scientist Intern, IBM Research, Summer 2024
IBM Thomas J. Watson Research Center, Yorktown Heights, NY
⁃ Manager: Dr. Raya Horesh; Mentors: Dr. Muneeza Azmat & Dr. Mikhail Yurochkin
⁃ Work resulted in a first-authored paper at ICML 2025 and a first-authored U.S. patent, and was incorporated into IBM's Granite Guardian models
Selected First-Authored Publications
* denotes equal contributions
-
ICML 2025. SPRI: Aligning Large Language Models with Context-Situated Principles. In Proceedings of the 42nd International Conference on Machine Learning. 2025. [26.9% acceptance rate (3,260 out of 12,107 submissions); work started and partially done during my internship at IBM Research]
-
COLM 2024. Large Language Models are Capable of Offering Cognitive Reappraisal, if Guided. In Proceedings of the 1st Conference on Language Modeling. 2024. [28.8% acceptance rate (299 out of 1,036 submissions)]
-
EMNLP 2023 Findings. Evaluating Subjective Cognitive Appraisals of Emotions from Large Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2023. Dec 2023. [45.4% acceptance rate (1,758 out of 3,868 submissions)]
-
EMNLP 2022