Discrete neural representations for explainable anomaly detection

Stanislaw Szymanowicz, James Charles and Roberto Cipolla


Robustly detecting and explaining anomalies

Our objective is to robustly detect anomalies in video while also automatically explaining the reason behind the detector's response, e.g. "these frames are anomalous because people are fighting".

We see both explainability and robustness as crucial in this task: systems deployed in practical settings need to be transparent to prevent bias, and anomaly detection must be robust to variations in object appearance, since anomalous objects can come from unknown classes, be occluded, or suffer from motion blur. We show how to decouple anomaly detection from anomaly explanation to maintain both robustness and explainability, develop a method for explaining detected anomalies based on per-pixel error maps, and introduce a new architecture with better performance guarantees.
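To illustrate the idea of detecting anomalies from per-pixel error maps, here is a minimal sketch (not the paper's implementation): given a frame predicted by a model and the observed frame, the squared per-pixel difference gives an error map, and a simple aggregate over a locally pooled map yields a frame-level anomaly score. The function names, the squared-error measure, and the mean-pooling choice are all illustrative assumptions.

```python
import numpy as np

def per_pixel_error_map(predicted: np.ndarray, observed: np.ndarray) -> np.ndarray:
    """Squared per-pixel difference between a predicted and an observed frame.

    Illustrative sketch only; the paper's exact error measure may differ.
    Frames are arrays of shape (H, W) or (H, W, C).
    """
    diff = (predicted.astype(np.float64) - observed.astype(np.float64)) ** 2
    if diff.ndim == 3:  # average over colour channels
        diff = diff.mean(axis=-1)
    return diff

def frame_anomaly_score(error_map: np.ndarray, pool: int = 4) -> float:
    """Aggregate an error map into one score via local mean pooling + max.

    Pooling suppresses isolated noisy pixels before taking the maximum,
    so the score responds to spatially coherent errors.
    """
    h, w = error_map.shape
    h, w = h - h % pool, w - w % pool  # crop to a multiple of the pool size
    pooled = (error_map[:h, :w]
              .reshape(h // pool, pool, w // pool, pool)
              .mean(axis=(1, 3)))
    return float(pooled.max())
```

A frame whose prediction matches the observation scores zero, while a spatially coherent prediction error (e.g. an unexpected object) raises the score; thresholding this score flags anomalous frames, and the error map itself localises the responsible region.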

See the video below for a presentation of our work at WACV 2022.


Peer-reviewed publication

S. Szymanowicz, J. Charles and R. Cipolla. Discrete neural representations for explainable anomaly detection. In WACV, 2022.

Last updated 3rd Jan 2022