
Feature Map Inversion with MXNet

This code reproduces, with MXNet, the experiments in the paper: Zhiqiang Xia, Ce Zhu, Zhengtao Wang, Qi Guo, Yipeng Liu. "Every Filter Extracts a Specific Texture in Convolutional Neural Networks".

Installation

This code is written in Python and requires MXNet. If you're on Ubuntu, install MXNet in your home directory as described in the MXNet installation guide.

Usage

Input content images:

Input style images:

To invert a modified code (the feature maps at a given layer), run

python vis_invert.py [content-image] [style-image] [layer-name] [mod_type]
  • layer-name must be a string such as "[relu1_1, relu2_1, relu3_1]"
  • mod_type must be one of original, feature_map, random, or purposeful
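Roughly, each `mod_type` transforms the code (the feature maps at the chosen layer) before it is inverted back to an image. The following NumPy sketch illustrates one plausible reading of the four options; the function name `modify_code` and the exact transformations are illustrative assumptions, not the repository's actual implementation:

```python
import numpy as np

def modify_code(code, mod_type, channel=0, rng=None):
    """Sketch: apply one of the four modification types to a code
    of shape (channels, height, width) before inversion."""
    rng = rng or np.random.default_rng(0)
    out = code.copy()
    if mod_type == "original":
        pass                                # invert the unmodified code
    elif mod_type == "feature_map":
        mask = np.zeros_like(out)
        mask[channel] = 1.0                 # keep one feature map, zero the rest
        out = out * mask
    elif mod_type == "random":
        out = out * rng.random(out.shape)   # randomly rescale activations
    elif mod_type == "purposeful":
        out[channel] *= 10.0                # purposefully amplify chosen feature maps
    else:
        raise ValueError(mod_type)
    return out

code = np.ones((3, 4, 4), dtype=np.float32)
single = modify_code(code, "feature_map", channel=1)
```

Inverting the `feature_map`-modified code is what visualizes the texture extracted by a single filter.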

Feature Map Inversion:

Randomly Modified Code Inversion:

To perform style transfer, run

python vis_style.py [content-image] [style-image] [layer-name] [mod_type]
  • layer-name must be a string such as "[relu1_1, relu2_1, relu3_1]"
  • mod_type must be original or purposeful_optimization
  • To trade off content against style, set the [content-weight] and [style-weight] parameters
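The content/style tradeoff follows the standard neural-style objective: the total loss is a weighted sum of a content term (feature-map distance) and a style term (Gram-matrix distance). A minimal NumPy sketch under that assumption (the helper names `gram` and `total_loss` are illustrative, not the script's internals):

```python
import numpy as np

def gram(feat):
    """Gram matrix of a (channels, height, width) feature map:
    channel-by-channel correlations, normalized by spatial size."""
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.T / (h * w)

def total_loss(gen, content, style, content_weight=10.0, style_weight=1.0):
    """Weighted sum of content loss and Gram-based style loss."""
    content_loss = np.mean((gen - content) ** 2)
    style_loss = np.mean((gram(gen) - gram(style)) ** 2)
    return content_weight * content_loss + style_weight * style_loss

gen = np.ones((2, 3, 3))
loss_zero = total_loss(gen, gen, gen)  # identical inputs -> zero loss
```

Raising [style-weight] relative to [content-weight] pushes the optimization toward matching the style image's textures at the expense of content fidelity.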

Purposefully Modified Code Inversion:

Reference

This code is based on https://github.com/dmlc/mxnet/tree/master/example/neural-style.

Future work

Add "Activation Maximization" visualizations, such as DeepDream.
