explorable, self-explanatory research outputs

Overview

Explorable explanations are interactive web essays that explain challenging technical ideas. This elegant distill.pub article explains matrix convolution and related ideas such as receptive fields, notions that are important in convolutional neural networks (CNNs) and that also have applications in image processing. Educational efforts like these are valuable but labour-intensive, especially when they involve the kind of interactive graphics needed to show how an algorithm like convolution works.

How could a language like Fluid help? For an interactive explanation of an algorithm, one possibility is to use Fluid's built-in provenance-tracking infrastructure to allow a user to explore the relationships between the stages of the convolution pipeline, using interactions like the ones shown below. This moves a real implementation closer to being a self-explanatory artifact, reducing the need for separate, custom-crafted explanations. Enriched with integrated documentation, “open implementations” like these could form the basis of a kind of literate execution and a way of authoring explorable explanations with less effort.

An infrastructure for explorable explanations

As a simple illustration, consider the following Fluid implementation of convolution. The program takes an input matrix and transforms it using a small matrix called a filter (or kernel), as might be used in image processing to apply an effect like blurring or embossing. Toggle the data pane on the left to reveal the inputImage and filter; then mouse over the output to see how the inputs are being used. This implementation can automatically reveal how different cells in the output demand different cells in the input image and filter.

[interactive figure]
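The computation the figure visualises can be sketched in ordinary Python. This is an illustrative rendering, not the Fluid source: the function name convolve mirrors the article, but the zero-padding boundary choice and the example matrices are assumptions made for the sketch.

```python
def convolve(image, kernel):
    """Convolve `image` with `kernel`, treating out-of-bounds cells as 0."""
    m, n = len(image), len(image[0])
    k, l = len(kernel), len(kernel[0])
    ci, cj = k // 2, l // 2  # centre offsets of the kernel

    def at(i, j):
        # Zero boundary policy: cells outside the image contribute 0.
        return image[i][j] if 0 <= i < m and 0 <= j < n else 0

    # Each output cell sums the kernel-weighted neighbourhood of the
    # corresponding input cell; this is the dependence the article's
    # mouse-over interaction highlights.
    return [
        [sum(kernel[u][v] * at(i + u - ci, j + v - cj)
             for u in range(k) for v in range(l))
         for j in range(n)]
        for i in range(m)
    ]

input_image = [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9],
]
box_blur = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]  # unnormalised box filter
output = convolve(input_image, box_blur)
```

Mousing over output[1][1] in the interactive version would highlight the entire 3×3 input, since every input cell contributes to that sum; a corner cell like output[0][0] draws on only the four in-bounds neighbours.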

Relations of cognacy

The idea of related inputs introduced earlier can also be informative. Try interacting with the inputImage instead. The highlighted output now shows the elements that consume the data point under your mouse; the highlighted inputImage region includes all the cognates of that data point: all the inputs that have one of those outputs as an ancestor in the dependence graph. The highlighted region is a kind of “light cone”, picking out a causally closed region of the dependence graph.
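The cognate relation can be made concrete with a small sketch. Here the dependence graph is modelled as a plain dictionary from output cells to the input cells they read; this toy representation and the helper names consumers and cognates are assumptions for illustration, not Fluid's actual machinery.

```python
def consumers(deps, input_cell):
    # Outputs that read `input_cell` (its descendants in the graph).
    return {out for out, ins in deps.items() if input_cell in ins}

def cognates(deps, input_cell):
    # All inputs feeding any output that consumes `input_cell`:
    # the causally closed "light cone" region the article describes.
    outs = consumers(deps, input_cell)
    return set().union(*(deps[o] for o in outs)) if outs else set()

# Toy 1-D convolution over 5 cells with a width-3 kernel:
# output i reads inputs i-1, i, i+1 (clipped at the boundary).
deps = {i: {j for j in (i - 1, i, i + 1) if 0 <= j < 5}
        for i in range(5)}
```

For the middle input cell 2, the consuming outputs are 1, 2 and 3, and those outputs between them read every input, so all five cells are cognates; an edge cell has a smaller light cone.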

The key takeaway here is that the author can simply express convolution as a pure functional algorithm; the Fluid runtime and visualisation front-end take care of providing the interactions. The library function convolve below implements the convolution algorithm, and the helper functions zero, wrap and extend implement specific policies (“methods”) for dealing with the boundary.

matrix.fld
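The three boundary policies can be sketched as cell-lookup helpers in Python. The names zero, wrap and extend come from the article; their signatures and the behaviours shown (zero fill, toroidal wrapping, edge clamping) are this sketch's reading of those policies, not the matrix.fld source.

```python
def zero(image, i, j):
    # Out-of-bounds cells read as 0.
    m, n = len(image), len(image[0])
    return image[i][j] if 0 <= i < m and 0 <= j < n else 0

def wrap(image, i, j):
    # Indices wrap around, treating the image as a torus.
    m, n = len(image), len(image[0])
    return image[i % m][j % n]

def extend(image, i, j):
    # Indices clamp to the nearest edge cell.
    m, n = len(image), len(image[0])
    clamp = lambda x, hi: max(0, min(x, hi - 1))
    return image[clamp(i, m)][clamp(j, n)]

img = [[1, 2],
       [3, 4]]
```

Passing one of these helpers to a convolution routine would fix how the kernel behaves at the image boundary, which is exactly the kind of policy choice the article factors out of convolve.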

continued…