Abstract
The growing size of speech models emphasizes the importance of parameter efficiency in practical automatic speech recognition (ASR) systems. Parameter sharing, which reuses the same parameters multiple times, has emerged as a promising way to reduce storage requirements. However, previous studies have often struggled to balance the number of parameters against performance. In this paper, we propose a novel architecture that effectively reduces the number of parameters while minimizing performance degradation. The key idea is to insert a lightweight adapter module that adjusts the features produced by the shared parameters, thereby enhancing the diversity of representations. We introduce a dedicated adapter module and parameter-sharing configuration tailored to Conformer-based ASR encoders. Experimental results demonstrate that the proposed architecture reduces the parameter count by approximately 50% and computation by approximately 20% without compromising speech recognition performance.
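The core idea, one shared block reused across layers with a small per-layer adapter restoring representational diversity, can be sketched as follows. This is a minimal illustrative toy in NumPy, not the paper's actual Conformer architecture; the layer sizes, the ReLU block, and the residual bottleneck adapter shape are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_bottleneck, n_layers = 8, 2, 4  # toy sizes (assumed, not from the paper)

# One weight matrix shared by every "layer": the parameter-sharing idea.
W_shared = rng.standard_normal((d_model, d_model)) * 0.1

# Each layer gets its own tiny adapter: down-projection and up-projection.
adapters = [
    (rng.standard_normal((d_model, d_bottleneck)) * 0.1,   # W_down
     rng.standard_normal((d_bottleneck, d_model)) * 0.1)   # W_up
    for _ in range(n_layers)
]

def adapter(x, W_down, W_up):
    # Residual bottleneck adapter: x + up(relu(down(x)))
    return x + np.maximum(x @ W_down, 0.0) @ W_up

def encoder(x):
    for W_down, W_up in adapters:
        x = np.maximum(x @ W_shared, 0.0)  # shared block (stand-in for a Conformer block)
        x = adapter(x, W_down, W_up)       # per-layer adapter diversifies the features
    return x

x = rng.standard_normal((3, d_model))  # (time steps, features)
y = encoder(x)

# The adapters are tiny, so sharing still saves parameters overall.
shared_total = W_shared.size + sum(d.size + u.size for d, u in adapters)
unshared_total = n_layers * d_model * d_model
```

Even in this toy, the shared configuration stores fewer weights (here 192 vs. 256) because each adapter adds only `2 * d_model * d_bottleneck` parameters per layer, far less than a full unshared block.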
| Original language | English |
|---|---|
| Pages (from-to) | 2380-2384 |
| Number of pages | 5 |
| Journal | Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH |
| DOIs | |
| State | Published - 2024 |
| Externally published | Yes |
| Event | 25th Interspeech Conference 2024 - Kos Island, Greece. Duration: 1 Sep 2024 → 5 Sep 2024 |
Keywords
- adapter
- conformer
- parameter efficiency
- parameter sharing
- speech recognition
- transformer