TY - JOUR
T1 - SEDNet: Synergistic Learning Network with Embedded Encoder and Dense Atrous Convolution for Vehicle Re-identification
AU - Xiong, Mingfu
AU - Gui, Tanghao
AU - Sun, Zhihong
AU - Anwar, Saeed
AU - Alotaibi, Aziz
AU - Muhammad, Khan
N1 - Publisher Copyright:
© 2025 The Authors
PY - 2025/9
Y1 - 2025/9
N2 - To address the information redundancy (such as color and vehicle model) caused by excessive emphasis on local features in vehicle re-identification, this paper proposes a Synergistic Learning Network with Embedded Encoder and Dense Atrous Convolution (SEDNet). The proposed SEDNet framework consists of three distinct branches: a global embedded multi-head encoder (GEME), a local dual-dense atrous convolution (LDAC), and an auxiliary attribute embedding module (AAM). The GEME branch integrates the vehicle's global appearance features to enhance the consistency of descriptions across different viewpoints. To suppress redundant information, such as color and vehicle model, and to refine local features, the LDAC branch employs an attention mechanism that captures multiscale features using convolutional kernels with varying dilation rates. In addition, the AAM branch uses vehicle metadata, such as orientation and camera viewpoint, to enhance feature robustness. The proposed SEDNet has been rigorously evaluated on mainstream vehicle re-identification benchmarks, including VeRi-776, VehicleID, and VeRi-Wild. The results show that our method improves mAP by 2.2%, 2.2%, and 0.2%, respectively, over the latest methods, all evaluated on the regular scale. Additional experiments on the Market-1501 and DukeMTMC-reID datasets further verify the method's generalization capability.
AB - To address the information redundancy (such as color and vehicle model) caused by excessive emphasis on local features in vehicle re-identification, this paper proposes a Synergistic Learning Network with Embedded Encoder and Dense Atrous Convolution (SEDNet). The proposed SEDNet framework consists of three distinct branches: a global embedded multi-head encoder (GEME), a local dual-dense atrous convolution (LDAC), and an auxiliary attribute embedding module (AAM). The GEME branch integrates the vehicle's global appearance features to enhance the consistency of descriptions across different viewpoints. To suppress redundant information, such as color and vehicle model, and to refine local features, the LDAC branch employs an attention mechanism that captures multiscale features using convolutional kernels with varying dilation rates. In addition, the AAM branch uses vehicle metadata, such as orientation and camera viewpoint, to enhance feature robustness. The proposed SEDNet has been rigorously evaluated on mainstream vehicle re-identification benchmarks, including VeRi-776, VehicleID, and VeRi-Wild. The results show that our method improves mAP by 2.2%, 2.2%, and 0.2%, respectively, over the latest methods, all evaluated on the regular scale. Additional experiments on the Market-1501 and DukeMTMC-reID datasets further verify the method's generalization capability.
KW - Dense atrous convolution
KW - Embedded encoder
KW - Intelligent networks
KW - Synergistic learning
KW - Vehicle re-identification
UR - https://www.scopus.com/pages/publications/105006800346
U2 - 10.1016/j.aej.2025.04.101
DO - 10.1016/j.aej.2025.04.101
M3 - Article
AN - SCOPUS:105006800346
SN - 1110-0168
VL - 128
SP - 297
EP - 305
JO - Alexandria Engineering Journal
JF - Alexandria Engineering Journal
ER -