Abstract
Research on developing dialogue systems with large language models (LLMs) has been extensive, relying heavily on LLMs’ capabilities to generate contextually nuanced responses. Yet, these approaches are not easily transferable to smaller language models (sLMs), particularly in task-oriented dialogue (ToD) scenarios, where dialogue systems are required to engage in personalized interactions with humans. In this paper, we investigate LLM distillation approaches for sLM-based ToD systems and present an Aspect-Augmented Dialogue Distillation (A2D2) framework, which aims to compress the human aspect-aware capabilities of an LLM into an sLM while ensuring the fulfillment of task-specific requirements. The framework incorporates a set of human aspects individually into LLM-based ToD data generation to improve the effectiveness and efficiency of the LLM-to-sLM distillation process, thereby establishing robust sLM-based ToD systems that adapt to diverse users and achieve higher task success rates. We demonstrate that the sLM-based ToD systems derived through A2D2 yield competitive performance in various ToD scenarios, including unseen task settings, adapting to a wide range of synthetic users characterized by multiple aspects.
| Original language | English |
|---|---|
| Article number | 130494 |
| Journal | Expert Systems with Applications |
| Volume | 302 |
| DOIs | |
| State | Published - 15 Mar 2026 |
Keywords
- Human aspect modeling
- Knowledge distillation
- Small language models
- Task-oriented dialogue
Fingerprint
Dive into the research topics of 'Aspect-augmented distillation of task-oriented dialogues to small language models'.