Iterative Pruning-based Model Compression for Pose Estimation on Resource-constrained Devices

Sung Hyun Choi, Wonje Choi, Youngseok Lee, Honguk Woo

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

2 Scopus citations

Abstract

In this work, we propose a pruning-based model compression scheme aimed at achieving an efficient model that performs well in both accuracy and inference time in an embedded device environment with limited resources. The proposed scheme consists of (1) pruning profiling and (2) iterative pruning via knowledge distillation. Using the scheme, we develop a resource-efficient 2D pose estimation model based on HRNet and evaluate it on the NVIDIA Jetson Nano with the Microsoft COCO keypoint dataset. Specifically, our compressed model achieves fast pose estimation at 20.3 FPS on the NVIDIA Jetson Nano while maintaining a high accuracy of 74.1 AP. Compared to the conventional HRNet model without compression, the proposed compression technique achieves a 33% improvement in FPS with only a 0.4% degradation in AP.
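
The abstract outlines the two-stage scheme but gives no implementation detail. As a rough, non-authoritative sketch of the general idea of iterative pruning interleaved with knowledge-distillation fine-tuning, the PyTorch snippet below prunes a toy heatmap regressor in rounds and fine-tunes it against a teacher's heatmaps after each round. Every name and hyperparameter here (PoseNetLite, kd_weight, the pruning ratio, the loss blend) is a hypothetical stand-in, not the paper's HRNet pipeline, and the pruning-profiling step is not shown.

```python
# Minimal sketch of iterative pruning + knowledge-distillation fine-tuning.
# Illustration only; not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune


class PoseNetLite(nn.Module):
    """Tiny stand-in for an HRNet-style heatmap regressor (hypothetical)."""

    def __init__(self, num_joints: int = 17):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(32, num_joints, 1)

    def forward(self, x):
        return self.head(self.backbone(x))


def distillation_loss(student_hm, teacher_hm, target_hm, kd_weight=0.5):
    """Blend the ground-truth heatmap loss with a teacher-imitation term."""
    task = nn.functional.mse_loss(student_hm, target_hm)
    kd = nn.functional.mse_loss(student_hm, teacher_hm)
    return (1 - kd_weight) * task + kd_weight * kd


def prune_round(model, amount):
    """Apply L1 unstructured pruning to the weights of every conv layer."""
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            prune.l1_unstructured(module, name="weight", amount=amount)


def iterative_prune_with_kd(student, teacher, loader, rounds=3, amount=0.2, epochs=1):
    """Alternate pruning rounds with KD fine-tuning to recover accuracy."""
    opt = torch.optim.Adam(student.parameters(), lr=1e-4)
    teacher.eval()
    for _ in range(rounds):
        prune_round(student, amount)          # remove a fraction of remaining weights
        for _ in range(epochs):               # fine-tune with distillation
            for images, target_hm in loader:
                with torch.no_grad():
                    teacher_hm = teacher(images)
                loss = distillation_loss(student(images), teacher_hm, target_hm)
                opt.zero_grad()
                loss.backward()
                opt.step()
    # Make the pruning permanent so the zeroed weights persist in the saved model.
    for module in student.modules():
        if isinstance(module, nn.Conv2d):
            prune.remove(module, "weight")
    return student
```

In this kind of setup the teacher would be the uncompressed pose model and the student the model being pruned; the paper's scheme additionally profiles per-layer sensitivity before choosing what to prune, which this sketch does not attempt.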

Original language: English
Title of host publication: ICMVA 2022 - 5th International Conference on Machine Vision and Applications
Publisher: Association for Computing Machinery
Pages: 110-115
Number of pages: 6
ISBN (Electronic): 9781450395670
DOIs
State: Published - 18 Feb 2022
Event: 5th International Conference on Machine Vision and Applications, ICMVA 2022 - Singapore, Singapore
Duration: 18 Feb 2022 - 20 Feb 2022

Publication series

Name: ACM International Conference Proceeding Series

Conference

Conference: 5th International Conference on Machine Vision and Applications, ICMVA 2022
Country/Territory: Singapore
City: Singapore
Period: 18/02/22 - 20/02/22

Keywords

  • Embedded system inference
  • Knowledge Distillation
  • Model compression
  • Pose estimation
  • Pruning
