Using PyTorch on Fox

PyTorch is an open-source machine learning framework. It provides deep learning primitive datatypes, CPU- and GPU-optimized operations, and tools for building models, all accessible from Python.
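Assuming PyTorch is already available in your environment on Fox (for example via a loaded module or a virtual environment), a quick way to check the installation and whether a GPU is visible:

In [1]: import torch

In [2]: torch.__version__  # shows the installed PyTorch version

In [3]: torch.cuda.is_available()  # True when the job has access to a GPU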

An Overview of PyTorch and Its Datatypes

Tensors

The fundamental datatype of PyTorch is a type of multi-dimensional array called a tensor.

Tensors can be created much like NumPy arrays, with the zeros and ones functions. You can also convert a standard library list or a NumPy array into a tensor.

In [1]: import torch

In [2]: x = torch.zeros(3, 2)

In [3]: print(x)
tensor([[0., 0.],
        [0., 0.],
        [0., 0.]])

In [4]: import numpy as np

In [5]: y = np.zeros((3, 2))  # NumPy array of zeros

In [6]: z = torch.from_numpy(y)

In [7]: print(z)  # the same values, converted from a NumPy array
tensor([[0., 0.],
        [0., 0.],
        [0., 0.]], dtype=torch.float64)
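Note that NumPy defaults to 64-bit floats, so the converted tensor keeps that dtype. The ones function and conversion from a plain Python list, both mentioned above, work the same way; a short sketch:

In [8]: print(torch.ones(2, 2))
tensor([[1., 1.],
        [1., 1.]])

In [9]: print(torch.tensor([[1, 2], [3, 4]]))  # from a nested Python list
tensor([[1, 2],
        [3, 4]])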

The default numeric type for tensors is 32-bit floating point. A different type can be requested with the dtype parameter:

In [1]: import torch

In [2]: x = torch.zeros(3, 2, dtype=torch.int16)

In [3]: print(x)
tensor([[0, 0],
        [0, 0],
        [0, 0]], dtype=torch.int16)
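An existing tensor can also be converted to another type with the to method; a minimal sketch:

In [4]: y = x.to(torch.float32)

In [5]: print(y.dtype)
torch.float32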

Random values can be seeded for reproducible results:

In [1]: import torch

In [2]: torch.manual_seed(1531)

In [3]: x = torch.rand(3, 2)

In [4]: torch.manual_seed(1531)  # re-seed with the same value

In [5]: y = torch.rand(3, 2)

In [6]: print(torch.equal(x, y))  # identical seed gives identical values
True

Arithmetic operators on tensors work elementwise and follow NumPy-style broadcasting rules:

In [1]: import torch

In [2]: torch.manual_seed(1531)

In [3]: x = torch.rand(3, 2)

In [4]: y = torch.rand(3, 2)  # the seed is not reset, so these values differ from x

In [5]: x * (2 * y)  # elementwise and scalar multiplication work as expected
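Proper matrix multiplication uses the @ operator (or torch.matmul); a short sketch:

In [6]: a = torch.rand(3, 4)

In [7]: b = torch.rand(4, 2)

In [8]: print((a @ b).shape)  # matrix product of (3x4) and (4x2)
torch.Size([3, 2])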

Autograd & gradients

Autograd is PyTorch's automatic differentiation engine. Tensors created with requires_grad=True record the operations performed on them, which lets PyTorch compute gradients automatically. The example below builds one step of a simple recurrent unit:

In [1]: import torch

In [2]: x = torch.randn(1, 10)

In [3]: prev_h = torch.randn(1, 20)

In [4]: W_h = torch.randn(20, 20, requires_grad=True)  # weights to optimize

In [5]: W_x = torch.randn(20, 10, requires_grad=True)

In [6]: i2h = torch.mm(W_x, x.t())  # input-to-hidden contribution

In [7]: h2h = torch.mm(W_h, prev_h.t())  # hidden-to-hidden contribution

In [8]: next_h = (i2h + h2h).tanh()

In [9]: loss = next_h.sum()  # quantity to minimize

In [10]: loss.backward()  # tensors record their history, so this one call computes all gradients
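After backward, each tensor created with requires_grad=True holds its gradient in the .grad attribute:

In [11]: print(W_x.grad.shape)  # the gradient of loss with respect to W_x, same shape as W_x
torch.Size([20, 10])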

Read more:

For more in-depth guides from the PyTorch organization, see the following links:

PyTorch website
Video tutorials