TY - GEN
T1 - MCPNet-CLL
T2 - 2025 International Technical Conference on Circuits/Systems, Computers, and Communications, ITC-CSCC 2025
AU - Moon, Kyu Hoon
AU - Shin, Jitae
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Multi-Level Concept Prototype Networks (MCP-Net) learn human-understandable prototypes at every feature layer, yet they leave semantic links between layers largely unconstrained, leading to fragmented or redundant hierarchies. We introduce Cross-Layer Linking Loss (CLL) to weld those layers into a coherent concept ladder. CLL works in three coordinated steps: (i) for every prototype it pulls the most semantically related neighbour in the next layer closer through a temperature-annealed InfoNCE objective; (ii) it selects hard negatives - high-similarity but semantically unrelated prototypes - and pushes them away, preserving layer diversity; and (iii) it adds a margin-based repulsion term that discourages multiple prototypes within the same layer from collapsing onto the same concept. This contrastive formulation replaces earlier heuristic "selective matching" and delivers sharper alignment without extra parameters or labels. On synthetic shapes, AWA2, Caltech-101 and CUB-200, MCPNet-CLL matches or exceeds the baseline's top-1 accuracy while raising completeness, purity and distinctiveness scores of the discovered concepts. Visualisations reveal smooth, non-overlapping concept evolution across layers, confirming that CLL yields a more faithful and interpretable representation of the model's reasoning process.
AB - Multi-Level Concept Prototype Networks (MCP-Net) learn human-understandable prototypes at every feature layer, yet they leave semantic links between layers largely unconstrained, leading to fragmented or redundant hierarchies. We introduce Cross-Layer Linking Loss (CLL) to weld those layers into a coherent concept ladder. CLL works in three coordinated steps: (i) for every prototype it pulls the most semantically related neighbour in the next layer closer through a temperature-annealed InfoNCE objective; (ii) it selects hard negatives - high-similarity but semantically unrelated prototypes - and pushes them away, preserving layer diversity; and (iii) it adds a margin-based repulsion term that discourages multiple prototypes within the same layer from collapsing onto the same concept. This contrastive formulation replaces earlier heuristic "selective matching" and delivers sharper alignment without extra parameters or labels. On synthetic shapes, AWA2, Caltech-101 and CUB-200, MCPNet-CLL matches or exceeds the baseline's top-1 accuracy while raising completeness, purity and distinctiveness scores of the discovered concepts. Visualisations reveal smooth, non-overlapping concept evolution across layers, confirming that CLL yields a more faithful and interpretable representation of the model's reasoning process.
KW - CLL loss
KW - Cross Layer Linking
KW - Explainable AI
KW - Multi-level concept learning
KW - Prototype-based interpretability
UR - https://www.scopus.com/pages/publications/105016388030
U2 - 10.1109/ITC-CSCC66376.2025.11137649
DO - 10.1109/ITC-CSCC66376.2025.11137649
M3 - Conference contribution
AN - SCOPUS:105016388030
T3 - 2025 International Technical Conference on Circuits/Systems, Computers, and Communications, ITC-CSCC 2025
BT - 2025 International Technical Conference on Circuits/Systems, Computers, and Communications, ITC-CSCC 2025
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 7 July 2025 through 10 July 2025
ER -