SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks

Jiwon Song, Kyungseok Oh, Taesu Kim, Hyungjun Kim, Yulhwa Kim, Jae Joon Kim

Research output: Contribution to journal › Conference article › peer-review


Abstract

Large language models (LLMs) have proven to be highly effective across various natural language processing tasks. However, their large number of parameters poses significant challenges for practical deployment. Pruning, a technique aimed at reducing the size and complexity of LLMs, offers a potential solution by removing redundant components from the network. Despite the promise of pruning, existing methods often struggle to achieve substantial end-to-end LLM inference speedup. In this paper, we introduce SLEB, a novel approach designed to streamline LLMs by eliminating redundant transformer blocks. We choose the transformer block as the fundamental unit for pruning because LLMs exhibit block-level redundancy, with high similarity between the outputs of neighboring blocks. This choice allows us to effectively enhance the processing speed of LLMs. Our experimental results demonstrate that SLEB outperforms previous LLM pruning methods in accelerating LLM inference while also maintaining superior perplexity and accuracy, making SLEB a promising technique for enhancing the efficiency of LLMs. The code is available at: https://github.com/jiwonsong-dev/SLEB.
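The abstract's key premise is that the outputs of neighboring transformer blocks are highly similar, which is what justifies pruning at block granularity. The following is a minimal sketch of how that redundancy can be probed empirically; the model choice (facebook/opt-125m) and the per-token cosine-similarity metric are illustrative assumptions, not the paper's exact redundancy-verification procedure (see the linked repository for the official implementation).

```python
# Minimal sketch: probe block-level redundancy by comparing the hidden
# states entering and leaving each transformer block. High similarity
# suggests a block changes the representation little and may be redundant.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-125m"  # small model chosen for illustration only
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

inputs = tokenizer(
    "Large language models exhibit block-level redundancy.",
    return_tensors="pt",
)

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# hidden_states is a tuple: the embedding output followed by the output
# of each transformer block, each of shape (batch, seq_len, hidden_dim).
hidden = outputs.hidden_states
for i in range(len(hidden) - 1):
    # Mean per-token cosine similarity between consecutive block outputs;
    # values near 1.0 indicate the later block barely alters the tokens.
    sim = torch.nn.functional.cosine_similarity(
        hidden[i], hidden[i + 1], dim=-1
    ).mean()
    print(f"block {i:2d} -> {i + 1:2d}: mean cosine similarity = {sim.item():.4f}")
```

In practice such a similarity profile would only flag candidate blocks; SLEB's actual selection and removal procedure is described in the paper and implemented in the repository above.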

Original language: English
Pages (from-to): 46136-46155
Number of pages: 20
Journal: Proceedings of Machine Learning Research
Volume: 235
State: Published - 2024
Event: 41st International Conference on Machine Learning, ICML 2024 - Vienna, Austria
Duration: 21 Jul 2024 - 27 Jul 2024
