VisRepNet

VisRepNet is a symbolic language for describing and visualizing neural network architectures and their related training environments. The language specializes in visualizing complex networks at a high level of detail, suitable for presentation on paper. We are continually refining this language and welcome suggestions for change.

How to read VisRepNet Diagrams

The Symbols within VisRepNet

VisRepNet aims to describe any neural network and to accurately visualize the differences between networks. In striving toward this goal, we have created visualization systems that cover as many niche cases as possible. The result is a set of representational systems that are general, though perhaps not immediately obvious.

Convolutions

A good example of this is how VisRepNet represents a convolutional layer:

This may seem overly complicated. However, this representational system also allows us to represent various niche types of convolution. For example, below we show:

1. A dilated convolution.
2. A custom convolution found in Axial Attention.
3. A convolution that allows for parallel processing along the group dimension.
4. A temporal convolution.
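To make the dilation and grouping variants above concrete, here is a plain-Python sketch of the underlying operations. This is our own illustration, not VisRepNet notation; the function names `conv1d` and `grouped_conv1d` are hypothetical.

```python
def conv1d(x, w, dilation=1):
    """Minimal 1-D convolution (no padding). Each output sample is a dot
    product of kernel w with input samples spaced `dilation` apart, so a
    dilated convolution widens the receptive field without adding weights."""
    k = len(w)
    span = dilation * (k - 1) + 1            # receptive field of the kernel
    return [sum(wi * x[i + j * dilation] for j, wi in enumerate(w))
            for i in range(len(x) - span + 1)]

def grouped_conv1d(channel_groups, kernels, dilation=1):
    """Grouped convolution: each group of channels gets its own kernel, so
    the groups are independent and can be processed in parallel."""
    return [conv1d(c, w, dilation) for c, w in zip(channel_groups, kernels)]

x = [float(i) for i in range(10)]
w = [1.0, 0.0, -1.0]
print(conv1d(x, w))              # standard: receptive field 3
print(conv1d(x, w, dilation=2))  # dilated: same kernel, receptive field 5
```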

Tensors

We employ another system of representation to visually describe a wide range of tensors. Simple tensors are represented simply and intuitively. However, we also provide a method of representing more complex tensors. This is done as shown below.

We can mix and match channel, grouping, spatial, and temporal dimensions to produce a wide range of possible tensors. Below are some further examples:
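The mix-and-match idea above can be sketched in code: a tensor's shape is a composition of named dimension types. The labels and example shapes below are our own illustration, not VisRepNet's notation.

```python
# Dimension-type labels (illustrative only).
CH, GR, SP, TM = "channel", "group", "spatial", "temporal"

# Each tensor is described as an ordered list of (dimension type, size).
examples = {
    "image":         [(CH, 3), (SP, 64), (SP, 64)],            # channel + spatial
    "video":         [(CH, 3), (TM, 16), (SP, 64), (SP, 64)],  # adds a temporal dim
    "grouped_feats": [(GR, 4), (CH, 8), (SP, 64), (SP, 64)],   # adds a group dim
}

for name, dims in examples.items():
    shape = tuple(size for _, size in dims)
    print(name, shape)
```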

Other Neural Layers

Having a good system to represent and visualize tensors allows us to use those tensors to provide context to neural layers.

Other Symbols

Our other symbols tend to be intuitive.

When we need to represent a function for which we have no symbol, we either use the closest existing symbol with a label (e.g., the downsample symbol, labeled, to represent max pooling) or use our standard stand-in symbol with a label.

The Organization of VisRepNet

We organize VisRepNet diagrams into levels of procedural abstraction. There are three main levels:

  1. Update Environment Level
  2. Network Architecture Level
  3. Lower Function Level

Update Environment Level

The Update Environment level contains a system diagram that describes how the network is trained. This becomes particularly handy when a network is trained in a new or obscure way. For example, below is DC-GAN's training environment. We can see clearly that DC-GAN's training involves an interaction between two networks. This interaction is both visualized and described.

The system diagram visualization is organized within the INOMU framework. INOMU stands for Input, Network, Output, Measure, and Update. Each function is placed within its appropriate section to create a predictable and intuitive system diagram.
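The INOMU stages can be sketched as a single training step. This is a minimal illustration using a hypothetical one-parameter linear model; none of the names below are VisRepNet API.

```python
def train_step(batch, params, lr=0.1):
    # Input: preprocess the raw batch.
    x = [v / 10.0 for v in batch["x"]]
    # Network: a one-parameter linear model.
    y = [params["w"] * v for v in x]
    # Output: here the predictions are used as-is.
    out = y
    # Measure: mean squared error against the targets.
    n = len(out)
    loss = sum((o - t) ** 2 for o, t in zip(out, batch["y"])) / n
    # Update: one hand-derived gradient-descent step on w.
    grad = sum(2 * (o - t) * v for o, t, v in zip(out, batch["y"], x)) / n
    params["w"] -= lr * grad
    return loss

batch = {"x": [10.0, 20.0], "y": [2.0, 4.0]}
params = {"w": 0.0}
for step in range(20):
    loss = train_step(batch, params)
print(params["w"], loss)   # w converges toward 2.0, loss toward 0
```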

Based on our user studies, deep learning practitioners want a lot of information about the update environment, so VisRepNet provides a written area to accommodate this. VisRepNet expects a description of the preprocessing, input(s), output(s), loss function, and update hyperparameters. On top of this, we expect a general description of the idea behind the network (or the area of improvement, if the diagram accompanies an academic paper).

Network Architecture Level

The network architecture level describes a network's architecture with abstracted functions; for example, the sigmoid and phi functions shown here are described in detail at the lower function level.

At this level we can see the general change in the shape of tensors as they travel through the network.
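A quick sketch of what this level conveys: the change in tensor shape through a stack of layers, with no reference to the layers' internal weights. The layer specification below is our own illustrative example.

```python
def conv_shape(shape, out_channels, stride=1):
    """Output shape of a 'same'-padded strided convolution (shape math only)."""
    c, h, w = shape
    return (out_channels, h // stride, w // stride)

# Trace shapes through three strided convolutions, as an architecture
# diagram would show alongside the function symbols.
shape = (3, 224, 224)                      # input image: channels, H, W
for out_channels, stride in [(64, 2), (128, 2), (256, 2)]:
    shape = conv_shape(shape, out_channels, stride)
    print(shape)
```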

Lower Function Level

At the lower function level, we get detailed descriptions of the functions within the neural network architecture. We aim to provide enough detail that one could use these diagrams to code up the network.
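As an illustration of the intended level of detail, a function abstracted at the architecture level, such as the sigmoid mentioned above, can be written out completely from its lower-level description:

```python
import math

def sigmoid(x):
    # sigmoid(x) = 1 / (1 + e^(-x)): squashes any real input into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(0.0))   # 0.5
```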

How to create VisRepNet Diagrams

Resources

VisRepNet diagrams are created in draw.io, a free, open-source diagramming tool. You can use it in the browser, or you can download a desktop version.

To get you started, we provide:

  1. A template draw.io document (see VisRepNet_templates.drawio in the GitHub repo for the latest version)
  2. An example draw.io document (see VisRepNet_examples.drawio in the GitHub repo for the latest version)
  3. A library of signs for creating VisRepNet diagrams (see signset.xml in the GitHub repo for the latest version)

Help us design and refine the next VisRepNet update.

To Do



Some examples of VisRepNet v2.0 Diagrams:

ResNet-34

DC-GAN

BERT

SimCLR

