Hongli Zhan | 詹弘立


Office: RLP 4.108

Arrogance is a sign of ignorance.

I am a third-year Ph.D. student in Computational Linguistics at The University of Texas at Austin, where I'm blessed to be advised by Professor Junyi Jessy Li. The ambition of my research is to build emotionally intelligent AI systems in a broader social context. I am part of the UT Austin NLP community, and I currently lead the UT Austin Natural Language Learning Reading Group. If there's a paper you're interested in reading, feel free to share it with me, and we can discuss it together at our biweekly meeting ;)

My CV is linked here. Media coverage of my research can be found here. The profile photo was taken at the presentation of my first-ever paper, which was accepted to EMNLP 2022.

Casually, I go by Henry.


Education
Ph.D. in Computational Linguistics (Minor in Computer Science), 2021 – Present
The University of Texas at Austin
Advisor: Professor Junyi Jessy Li
B.A. in English Linguistics (Second Major in Law), 2017 – 2021
Shanghai Jiao Tong University
Awards: Outstanding Undergraduate; Outstanding Undergraduate Thesis Award

Industry Experience

Research Intern, IBM Research, Yorktown Heights, NY, Summer 2024
Hosts: Dr. Raya Horesh, Dr. Muneeza Azmat, Dr. Mikhail Yurochkin

Research Highlights

    Understanding emotions is crucial to assessing one's well-being. How do people feel about, and make sense of, what took place in their lives during crises? In our work investigating the emotional toll of COVID-19 (Zhan et al., EMNLP 2022), we developed models that jointly predict fine-grained emotions in social media text and generate a description of what triggered each emotion. However, the same event can lead to different emotional experiences, depending on an individual's subjective evaluations, or appraisals.
    In follow-up work (Zhan et al., EMNLP 2023 Findings), we found that state-of-the-art LLMs were on par with (and in some cases better than) laypeople at uncovering the implicit cognitive information underlying emotional understanding. Having established these cognitive capabilities, we can zoom in on the specific negative appraisals that lead to negative emotions, and try to change them by offering targeted reappraisals. Grounded in the cognitive appraisal theories of emotions, this provides a precise, principled way to help regulate someone's emotions. In our most recent work (Zhan et al., 2024), we delved into instilling such cognitive reappraisal abilities into LLMs. Our extensive expert evaluations (with practicing psychologists holding advanced degrees) revealed that even LLMs at smaller scales (e.g., 7 billion parameters) can generate cognitive reappraisals that significantly outperform human-written ones when guided with psychologically informed instructions. These findings underscore the potential of AI systems for emotional support and mental well-being.


* denotes equal contributions

  1. Pre-Print
    Large Language Models are Capable of Offering Cognitive Reappraisal, if Guided
  2. Pre-Print
    Large Language Models Produce Responses Perceived to be Empathic
    arXiv 2024
  3. EMNLP 2023 Findings
    Evaluating Subjective Cognitive Appraisals of Emotions from Large Language Models
    Hongli Zhan, Desmond C. Ong, and Junyi Jessy Li
    In Findings of the Association for Computational Linguistics: EMNLP 2023, Dec 2023
  4. ACL 2023 Main
    Unsupervised Extractive Summarization of Emotion Triggers
    In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Jul 2023
  5. EMNLP 2022 Main
    Why Do You Feel This Way? Summarizing Triggers of Emotions in Social Media Posts
    In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, Dec 2022