Discourse Diversity in Multi-Turn Empathic Dialogue
LLMs repeat the same empathy tactics. MINT trains models to vary their support strategies across turns while preserving response quality.
LLMs can sound empathic in one turn, but they repeat the same discourse moves across a conversation.
After a tactic appears in one turn, LLMs reuse it in the next turn at a rate of 0.50 to 0.56, compared with 0.27 for humans.
MINT combines an empathy quality reward with a cross-turn tactic novelty term during reinforcement learning; a minimal sketch of this combination appears below.
The best variant improves aggregate empathy by 25.3% and reduces cross-turn repetition by 26.3% on the 4B model.
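As a rough illustration, here is a minimal sketch of how an empathy reward and a cross-turn novelty term might be combined during training. The function names, the tactic labels, and the weight `lam` are assumptions for illustration, not the paper's exact formulation.

```python
# Minimal sketch of a MINT-style combined reward (illustrative only;
# the real method uses a learned empathy reward model and the paper's
# own novelty formulation).

def novelty_reward(current_tactic: str, previous_tactics: list[str]) -> float:
    """Return 1.0 if the tactic was not used in recent turns, else 0.0."""
    return 0.0 if current_tactic in previous_tactics else 1.0

def mint_reward(empathy_score: float, current_tactic: str,
                previous_tactics: list[str], lam: float = 0.5) -> float:
    """Combine empathy quality with a cross-turn tactic novelty term."""
    return empathy_score + lam * novelty_reward(current_tactic, previous_tactics)

# An equally empathic response scores higher when it switches tactics.
print(mint_reward(0.8, "validation", ["validation"]))  # 0.8 (no bonus)
print(mint_reward(0.8, "suggestion", ["validation"]))  # 1.3 (novelty bonus)
```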
The problem in one conversation
A vanilla model can sound warm while falling into the same discourse moves turn after turn. MINT keeps the response empathic but changes the support strategy as the conversation evolves.
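To make stickiness concrete, here is a minimal sketch of the next-turn reuse rate quoted above, computed over a sequence of per-turn tactic tags. The tag names are hypothetical and the paper's exact metric definition may differ.

```python
def next_turn_stickiness(tactics: list[str]) -> float:
    """Fraction of turns whose tactic repeats in the immediately
    following turn (humans sit near 0.27; vanilla LLMs near 0.5)."""
    if len(tactics) < 2:
        return 0.0
    repeats = sum(prev == cur for prev, cur in zip(tactics, tactics[1:]))
    return repeats / (len(tactics) - 1)

# Hypothetical tactic tags for five supporter turns in one conversation.
turns = ["validation", "validation", "validation", "suggestion", "suggestion"]
print(next_turn_stickiness(turns))  # 0.75: a very sticky conversation
```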
Figure: Empathy vs. Tactic Stickiness, comparing Human, Vanilla, PsychoCounsel, R1-Zero-Div, and MINT at the 1.7B and 4B scales.
MINT moves models toward the upper right: higher empathy with lower tactic stickiness (the stickiness axis is reversed, following the paper). A green region marks the high-empathy, low-stickiness zone containing both MINT models.
Try it yourself
Use the released artifacts directly. The website stays light; the tools live in Colab, Hugging Face, and the repository.
Conversation examples
The main page keeps the story short. Open the examples if you want to inspect matched human, vanilla LLM, and MINT transcripts with tactic tags.
View conversation examples
- "I'm feeling so discouraged.. I just got passed over for the promotion I was work…"
- "I've been feeling so detached from everything lately, like I'm just going throug…"
- "So, I just lost my job today. I had a sense this was coming, but it's still a sh…"

Want to tag your own conversations? Open in Colab.
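If you want a feel for the tagged transcripts before opening them, here is a hypothetical sketch of how one tagged exchange might be represented. The field names and tactic labels are assumptions, not the released data schema.

```python
# Hypothetical shape of a tactic-tagged transcript (illustrative only;
# not the released data schema).
transcript = [
    {"speaker": "seeker", "text": "I'm feeling so discouraged..", "tactic": None},
    {"speaker": "supporter", "text": "That sounds really painful.", "tactic": "validation"},
    {"speaker": "seeker", "text": "I worked so hard for it.", "tactic": None},
    {"speaker": "supporter", "text": "What would feel like a good next step?", "tactic": "open_question"},
]

# Extract the supporter's tactic sequence, e.g. to feed the stickiness
# metric sketched earlier on this page.
tactic_sequence = [t["tactic"] for t in transcript if t["speaker"] == "supporter"]
print(tactic_sequence)  # ['validation', 'open_question']
```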
BibTeX
@article{zhan2026discourse,
  title={Discourse Diversity in Multi-Turn Empathic Dialogue},
  author={Zhan, Hongli and Gueorguieva, Emma S. and Hernandez, Javier and Suh, Jina and Ong, Desmond C. and Li, Junyi Jessy},
  journal={arXiv preprint arXiv:2604.11742},
  year={2026}
}