Belloni, Julia Eva (2025) Does Implicit Clustering Matter? Comparing Quality of Embedding-Space Explanations from Cross-Entropy and Triplet Loss Training. Bachelor's Thesis, Artificial Intelligence.
Abstract
Interpretable embedding spaces enhance model transparency, allowing practitioners to detect failure modes and spurious correlations early in development. In high-stakes applications, it is crucial to understand the model's reasoning to prevent incorrect decision-making. In the context of image classification, we compare embedding spaces learned with metric-based triplet loss to those produced by conventional cross-entropy training. We conduct experiments on Imagenette using ResNet-18 backbones, employing Grad-CAM, Eigen-CAM, and Guided Grad-CAM to evaluate explanation faithfulness and robustness. We find that triplet-loss models consistently produce more faithful explanations, while both training strategies yield comparable predictive performance, computational cost, and explanation stability. These results highlight triplet loss as a strong alternative to cross-entropy, combining equivalent accuracy with superior interpretability.
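The triplet margin loss referenced in the abstract can be illustrated with a minimal sketch. This is not the thesis's implementation; it is a standard NumPy formulation of the loss, L = max(d(a, p) − d(a, n) + margin, 0), where the anchor is pulled toward a positive (same-class) embedding and pushed away from a negative (different-class) embedding by at least `margin`:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet margin loss on embedding vectors.

    The loss is zero once the anchor is closer to the positive than to
    the negative by at least `margin`; otherwise it penalizes the gap.
    """
    d_ap = np.linalg.norm(anchor - positive)  # anchor-positive distance
    d_an = np.linalg.norm(anchor - negative)  # anchor-negative distance
    return max(d_ap - d_an + margin, 0.0)

# Toy 2-D embeddings: anchor near the positive, far from the negative.
a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])
n = np.array([3.0, 0.0])
print(triplet_loss(a, p, n))  # 0.0: the margin is already satisfied
```

During training, this loss is minimized over mined triplets of embeddings, which encourages same-class samples to cluster together, the property the thesis links to more faithful explanations.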
| Item Type: | Thesis (Bachelor's Thesis) |
|---|---|
| Supervisor name: | Zullich, M. and Valdenegro Toro, M.A. |
| Degree programme: | Artificial Intelligence |
| Thesis type: | Bachelor's Thesis |
| Language: | English |
| Date Deposited: | 15 Jul 2025 10:40 |
| Last Modified: | 15 Jul 2025 10:40 |
| URI: | https://fse.studenttheses.ub.rug.nl/id/eprint/36277 |