Autograd

How it works

When you write:

import numpy as np
from slick_dnn.variable import Variable

a = Variable(np.ones(3))
b = Variable(np.ones(3))

c = a + b

A new Variable c is created. Its c.data is the numpy array [2., 2., 2.], but c also tracks the history of its creation.

So its backward_function was set to Add.backward and its backward_variables was set to [a, b].
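
As a rough sketch of what that history looks like (the attribute names data, backward_function and backward_variables come from the description above; the exact objects printed may differ), you can inspect it directly:

import numpy as np
from slick_dnn.variable import Variable

a = Variable(np.ones(3))
b = Variable(np.ones(3))
c = a + b

print(c.data)                # [2. 2. 2.]
print(c.backward_function)   # set to Add.backward, per the description above
print(c.backward_variables)  # [a, b]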

Fundamental classes

class slick_dnn.autograd.Autograd[source]

Autograd is a base class for all operations made on Variables.

__call__(*variables_list)[source]

For convenience. One can use any Autograd object by simply calling it instead of using the apply method.

Parameters: variables_list – same as in the forward method
Returns: what apply returns
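
For example (a minimal sketch using the Add operation and Variable class documented on this page), the two spellings are equivalent:

import numpy as np
from slick_dnn.variable import Variable
from slick_dnn.autograd.mathematical import Add

a = Variable(np.ones(3))
b = Variable(np.ones(3))

add_op = Add()
c1 = add_op.apply(a, b)  # explicit apply
c2 = add_op(a, b)        # __call__ simply forwards to apply
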
apply(*variables_list)[source]

Actual creation of a new Variable. It calls the overridden forward method, creates a new Context (the same context is used in the forward and backward passes), and sets backward_function and backward_variables on the new Variable.

Parameters: variables_list – any Variables; the backward function will have to calculate gradients w.r.t. all input variables
Returns: one Variable, the one with tracked history and calculated data
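
Conceptually, apply does something like the following. This is a simplified sketch written from the description above, not the library's actual code; in particular, storing backward_function as a closure over ctx is an assumption, the real implementation may keep the bound method and the context separately:

from slick_dnn.autograd import Context
from slick_dnn.variable import Variable

# inside the Autograd class (simplified sketch):
def apply(self, *variables_list):
    ctx = Context()                                    # the same ctx is reused by backward
    data = self.forward(ctx, *[v.data for v in variables_list])
    new_var = Variable(data)
    new_var.backward_function = lambda grad: self.backward(ctx, grad)
    new_var.backward_variables = list(variables_list)  # gradients flow back to these inputs
    return new_var
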
backward(ctx, grad)[source]

Backward pass. Each Autograd Object must implement it.

Parameters:
  • ctx (Context) – Same context as in forward pass
  • grad – gradient
Returns:

gradient w.r.t. all inputs

forward(ctx, *tensors_list)[source]

Forward pass of the operation. Each Autograd object must implement it.

Parameters:
  • ctx (Context) – Context; classes can save information in it
  • tensors_list – any list of input tensors
Returns:

one new tensor

class slick_dnn.autograd.Context[source]

This class stores information for backpropagation. Autograd uses this class instead of self to allow constructions like:

relu = ReLU()

b = relu(a)
c = relu(b)

That means you can use one instance of an Autograd class for all of your operations. Without it, the above example would have to be:

relu1 = ReLU()
relu2 = ReLU()

b = relu1(a)
c = relu2(b)

save_for_back(*data)[source]

Saves given data for back propagation.

Parameters: data (Any) – Iterable of any data to save.
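
As an illustration of save_for_back, here is a hedged sketch of a hypothetical custom Square operation (not part of the library). Note that the way saved values are read back in backward is an assumption; check the Context source for the actual attribute name:

import numpy as np
from slick_dnn.autograd import Autograd

class Square(Autograd):
    def forward(self, ctx, x):
        ctx.save_for_back(x)     # remember the input for the backward pass
        return x * x

    def backward(self, ctx, grad):
        x, = ctx.data_for_back   # assumption: saved data is exposed on the Context like this
        return 2 * x * grad      # d(x^2)/dx = 2x, chained with the incoming gradient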

Mathematical

All mathematical operations available for Variables

class slick_dnn.autograd.mathematical.Add[source]

Adds given tensors

backward(ctx, grad)[source]

Backward pass. Each Autograd Object must implement it.

Parameters:
  • ctx (Context) – Same context as in forward pass
  • grad – gradient
Returns:

gradient w.r.t. all inputs

forward(ctx, tensor1, tensor2)[source]

Forward pass of the operation. Each Autograd object must implement it.

Parameters:
  • ctx (Context) – Context; classes can save information in it
  • tensor1, tensor2 – input tensors
Returns:

one new tensor
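
Since addition has a derivative of 1 w.r.t. each input, backward simply passes the incoming gradient on to both inputs. A small numpy sketch of those gradients (illustrative only, not the library's code):

import numpy as np

grad = np.array([0.1, 0.2, 0.3])  # incoming gradient dL/dc for c = tensor1 + tensor2
grad_tensor1 = grad               # dL/dtensor1 = dL/dc * 1
grad_tensor2 = grad               # dL/dtensor2 = dL/dc * 1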

class slick_dnn.autograd.mathematical.MatMul[source]

Matrix multiplication: tensor1 @ tensor2

backward(ctx, grad: numpy.array)[source]

Backward pass. Each Autograd Object must implement it.

Parameters:
  • ctx (Context) – Same context as in forward pass
  • grad – gradient
Returns:

gradient w.r.t. all inputs

forward(ctx, tensor1, tensor2)[source]

Forward pass of the operation. Each Autograd object must implement it.

Parameters:
  • ctx (Context) – Context; classes can save information in it
  • tensor1, tensor2 – input tensors
Returns:

one new tensor
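
These are the standard matrix-multiplication gradients: for out = tensor1 @ tensor2, the gradient w.r.t. tensor1 is grad @ tensor2.T and the gradient w.r.t. tensor2 is tensor1.T @ grad. A numpy sketch (illustrative only, not the library's code):

import numpy as np

tensor1 = np.random.randn(2, 3)
tensor2 = np.random.randn(3, 4)
grad = np.ones((2, 4))            # incoming gradient dL/d(tensor1 @ tensor2)

grad_tensor1 = grad @ tensor2.T   # shape (2, 3), same as tensor1
grad_tensor2 = tensor1.T @ grad   # shape (3, 4), same as tensor2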

class slick_dnn.autograd.mathematical.Mul[source]

Element-wise multiplication

backward(ctx, grad: numpy.array)[source]

Backward pass. Each Autograd Object must implement it.

Parameters:
  • ctx (Context) – Same context as in forward pass
  • grad – gradient
Returns:

gradient w.r.t. all inputs

forward(ctx, tensor1, tensor2)[source]

Forward pass of the operation. Each Autograd object must implement it.

Parameters:
  • ctx (Context) – Context; classes can save information in it
  • tensor1, tensor2 – input tensors
Returns:

one new tensor
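
For element-wise multiplication out = tensor1 * tensor2, each input's gradient is the incoming gradient scaled by the other input. A numpy sketch (illustrative only):

import numpy as np

tensor1 = np.array([1.0, 2.0, 3.0])
tensor2 = np.array([4.0, 5.0, 6.0])
grad = np.array([0.1, 0.1, 0.1])  # incoming gradient dL/d(tensor1 * tensor2)

grad_tensor1 = grad * tensor2     # dL/dtensor1
grad_tensor2 = grad * tensor1     # dL/dtensor2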

class slick_dnn.autograd.mathematical.Sub[source]

Subtracts given tensors: tensor1 - tensor2

backward(ctx, grad)[source]

Backward pass. Each Autograd Object must implement it.

Parameters:
  • ctx (Context) – Same context as in forward pass
  • grad – gradient
Returns:

gradient w.r.t. all inputs

forward(ctx, tensor1, tensor2)[source]

Forward pass of the operation. Each Autograd object must implement it.

Parameters:
  • ctx (Context) – Context; classes can save information in it
  • tensor1, tensor2 – input tensors
Returns:

one new tensor

Activation Functions

All activation functions available.

class slick_dnn.autograd.activations.ArcTan[source]

Applies the arctan function element-wise.

backward(ctx, grad)[source]

ArcTan(x)’ = \(\frac{1}{x^2 + 1}\)

forward(ctx, x)[source]

ArcTan(x) = \(tan^{-1}(x)\)

class slick_dnn.autograd.activations.ReLU[source]

Applies the ReLU function element-wise.

backward(ctx, grad)[source]

ReLU(x)’ = \(\begin{cases} 0 & \text{if } x < 0 \newline 1 & \text{if } x > 0 \end{cases}\)

forward(ctx, x)[source]

ReLU(x) = \(max(0,x)\)
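
A plain numpy sketch of these two formulas (illustrative; the library's own implementation may differ):

import numpy as np

def relu_forward(x):
    return np.maximum(0, x)       # ReLU(x) = max(0, x)

def relu_backward(x, grad):
    return grad * (x > 0)         # derivative is 1 where x > 0, 0 elsewhere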

class slick_dnn.autograd.activations.Sigmoid[source]

Applies the Sigmoid function element-wise.

backward(ctx, grad)[source]

Sigmoid(x)’ = \(\frac{ e^{-x} }{ (1+ e^{-x})^2 }\)

forward(ctx, x)[source]

Sigmoid(x) = \(\frac{1}{ 1 + e^{-x} }\)

class slick_dnn.autograd.activations.Softmax[source]

Applies the Softmax function element-wise.

backward(ctx, grad)[source]

\(Softmax(x_i)' = \frac{ exp(x_i) * \sum_{j \neq i}{exp(x_j)} }{ (\sum_j{exp(x_j)})^2 }\)

forward(ctx, x) → numpy.array[source]

\(Softmax(x_i) = \frac{exp(x_i)}{\sum_j{exp(x_j)}}\)
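
A common numerically stable way to evaluate this formula in numpy (a sketch, not necessarily the library's implementation) subtracts the row maximum before exponentiating; the shift cancels out in the ratio:

import numpy as np

def softmax(x):
    shifted = x - np.max(x, axis=-1, keepdims=True)   # prevents overflow in exp
    exps = np.exp(shifted)
    return exps / np.sum(exps, axis=-1, keepdims=True)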

class slick_dnn.autograd.activations.Softplus[source]

Applies the softplus function element-wise.

backward(ctx, grad)[source]

Softplus’(x) = \(\frac{1}{1 + e^{-x}}\)

forward(ctx, x) → numpy.array[source]

Softplus(x) = \(ln(1 + e^x)\)

class slick_dnn.autograd.activations.Softsign[source]

Applies the softsign function element-wise.

backward(ctx, grad)[source]

Softsign’(x) = \(\frac{1}{(1 + |x|)^2}\)

forward(ctx, x)[source]

Softsign(x) = \(\frac{x}{1 + |x|}\)

class slick_dnn.autograd.activations.Tanh[source]

Applies the tanh function element-wise.

backward(ctx, grad)[source]

Tanh(x)’ = \(1 - Tanh^2(x)\)

forward(ctx, x)[source]

Tanh(x) = \(\frac{e^x - e^{-x}}{e^x + e^{-x}}\)