This repository is mostly based on code from totti0223's Grad-CAM++ implementation.
The technique is explained in this blog article and, in more depth, in the paper Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization.
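For context, the heart of Grad-CAM fits in a few lines: pool the gradients of the class score over a convolutional layer's feature maps into per-channel weights, then take a ReLU of the weighted sum of those maps. The sketch below is illustrative only, not this repository's code; it assumes TensorFlow 2.x with its bundled tf.keras, and the function name is mine.

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, layer_name):
    """Minimal Grad-CAM sketch: heatmap for the model's top predicted class."""
    # Map the input to both the chosen conv layer's activations and the predictions.
    grad_model = tf.keras.models.Model(
        model.inputs, [model.get_layer(layer_name).output, model.output]
    )
    with tf.GradientTape() as tape:
        conv_output, predictions = grad_model(image[np.newaxis, ...])
        class_score = predictions[:, tf.argmax(predictions[0])]
    # Gradient of the class score w.r.t. the conv feature maps,
    # global-average-pooled into one weight per channel.
    grads = tape.gradient(class_score, conv_output)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))
    # ReLU of the weighted sum of the feature maps, normalised to [0, 1].
    cam = tf.nn.relu(tf.reduce_sum(conv_output[0] * weights, axis=-1))
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()
```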
To install, run
git clone https://github.com/sam1902/GradCAM-Keras
pip3 install -r GradCAM-Keras/requirements.txt
cd GradCAM-Keras
To use this program, run
python3 main.py
from within the root of this repository.
usage: main.py [-h] --input /path/to/input/image.png
[--output /path/to/output/graph.png] --layer
{block1_conv1,block1_conv2,block2_conv1,block2_conv2,block3_conv1,block3_conv2,block3_conv3,block3_conv4,block4_conv1,block4_conv2,block4_conv3,block4_conv4,block5_conv1,block5_conv2,block5_conv3,block5_conv4}
[-v] [-q]
Explains what VGG19 looks at to classify the given image.
optional arguments:
-h, --help show this help message and exit
--input /path/to/input/image.png, -i /path/to/input/image.png
Image to run on; can be any common image format.
--output /path/to/output/graph.png, -o /path/to/output/graph.png
Where to generate the output visualisation.
--layer {block1_conv1,block1_conv2,block2_conv1,block2_conv2,block3_conv1,block3_conv2,block3_conv3,block3_conv4,block4_conv1,block4_conv2,block4_conv3,block4_conv4,block5_conv1,block5_conv2,block5_conv3,block5_conv4}, -l {block1_conv1,block1_conv2,block2_conv1,block2_conv2,block3_conv1,block3_conv2,block3_conv3,block3_conv4,block4_conv1,block4_conv2,block4_conv3,block4_conv4,block5_conv1,block5_conv2,block5_conv3,block5_conv4}
Layer at which to "look".
-v, --verbose
-q, --quiet If set, doesn't show the plot and only saves the
output (if an output path is provided).
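The --layer choices are simply the convolutional layers of the VGG19 model that ships with Keras. As an illustration (this snippet is not part of main.py, and assumes the TensorFlow build of Keras), you can list them yourself:

```python
from tensorflow.keras.applications import VGG19

# Weights aren't needed just to inspect the architecture's layer names.
model = VGG19(weights=None)
print([layer.name for layer in model.layers if "conv" in layer.name])
# ['block1_conv1', 'block1_conv2', ..., 'block5_conv4']
```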
To only show the output without saving it:
python3 main.py \
--layer block5_conv4 \
--input llama.png \
--verbose
To both show and save the output:
python3 main.py \
--layer block5_conv4 \
--input llama.png \
--output graph.png \
--verbose
To only save the output without showing it:
python3 main.py \
--layer block5_conv4 \
--input llama.png \
--output graph.png \
--quiet
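For a rough idea of how an overlay like graph.png could be produced (this is not main.py's actual code; grad_cam is the hypothetical sketch from earlier, and llama.png stands in for any input image):

```python
import matplotlib.pyplot as plt
from tensorflow.keras.applications import VGG19
from tensorflow.keras.applications.vgg19 import preprocess_input
from tensorflow.keras.preprocessing.image import load_img, img_to_array

model = VGG19(weights="imagenet")
raw = img_to_array(load_img("llama.png", target_size=(224, 224)))

# Feed the VGG19-preprocessed image to the network, but display the raw one.
heatmap = grad_cam(model, preprocess_input(raw.copy()), "block5_conv4")

plt.imshow(raw.astype("uint8"))
# The low-resolution heatmap is stretched over the full image via `extent`.
plt.imshow(heatmap, cmap="jet", alpha=0.5, extent=(0, 224, 224, 0))
plt.axis("off")
plt.savefig("graph.png", bbox_inches="tight")
plt.show()
```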