
About Implicit Layers

By Viktor Daropoulos

In December 2020 the NeurIPS conference took place in virtual form due to the pandemic lockdowns. NeurIPS is one of the largest machine learning conferences, where cutting-edge research is presented across many aspects of machine and deep learning, as well as applications in areas such as mathematics and biology. One of the most interesting tutorials presented at the conference was the one on "Implicit Layers".

A typical neural network is composed of many layers, each of which models a differentiable function, for example a convolution layer or an attention layer. A common characteristic of these layers is that they are modeled as explicit functions, or explicit layers: an input is provided to the layer in the forward pass, the output is computed, and a computation graph is constructed, which is later used during the back-propagation phase. There is, however, an alternative way of viewing what a layer computes: defining the layer so that its input and output jointly satisfy some constraint (an implicit layer). This viewpoint arises naturally in many settings, such as fixed-point iterations and differential equations, where this kind of modeling is very useful. For many more details on implicit layers and their applicability, see the tutorial at https://implicit-layers-tutorial.org/, where code examples are also provided in PyTorch and JAX.
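To make the fixed-point example concrete, here is a minimal sketch in PyTorch, not taken from the tutorial: the layer's output z* is defined by the constraint z* = tanh(W z* + b + x) rather than by an explicit formula, and the forward pass simply iterates the map until it converges. The class name, tolerance, and iteration budget are illustrative assumptions, and the backward pass here naively differentiates through the unrolled iterations.

```python
import torch
import torch.nn as nn

class FixedPointLayer(nn.Module):
    """Implicit layer: the output z* satisfies z* = tanh(W z* + b + x).

    Illustrative sketch only; names and hyperparameters are made up.
    """
    def __init__(self, dim, tol=1e-5, max_iter=50):
        super().__init__()
        self.linear = nn.Linear(dim, dim)  # W and b of the fixed-point map
        self.tol = tol
        self.max_iter = max_iter

    def forward(self, x):
        # Iterate z <- tanh(W z + b + x) until the constraint holds
        # approximately. With PyTorch's default small random init of W,
        # the map is typically a contraction, so the iteration converges.
        z = torch.zeros_like(x)
        for _ in range(self.max_iter):
            z_next = torch.tanh(self.linear(z) + x)
            if (z_next - z).norm() < self.tol:
                return z_next
            z = z_next
        return z

layer = FixedPointLayer(dim=8)
x = torch.randn(4, 8)
z_star = layer(x)      # z_star ≈ tanh(W z_star + b + x)
z_star.sum().backward()  # gradients flow through the unrolled iterations
```

Differentiating through the loop like this works but stores every iterate in the computation graph; the appeal of implicit layers, as the tutorial explains, is that the implicit function theorem lets one compute the same gradient from the fixed point alone.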
