The building blocks of artificial neural networks: tensors

Last updated: January 4, 2023

PyTorch's central object is the tensor. Tensors have many similarities with NumPy's ndarrays, but, unlike ndarrays, they can also be computed on GPUs, which makes them extremely well suited to building neural networks.

In this lesson, we will explore tensors: how to create them, run operations on them, and most importantly, what they really represent. Run all the code so that you can look at the outputs and explore on your own.

Activate your Python virtual environment

After you ssh to the training cluster, activate the virtual environment that you created so that you have access to the Python packages you installed in it:

$ source ~/env/bin/activate

Start an interactive job

For instance:

$ salloc --cpus-per-task=1 --mem=3G --time=0:30:0

You can now start an interactive Python session:

$ python

Load PyTorch

First, we need to load the torch package:

import torch

Dimensions and sizes

PyTorch's tensors are homogeneous multidimensional arrays.

You can create them with a variety of methods such as:

  • torch.rand, for a tensor filled with random numbers from a uniform distribution on \([0, 1)\)
  • torch.randn, for a tensor filled with numbers from the standard normal distribution
  • torch.empty, for an uninitialized tensor
  • torch.zeros, for a tensor filled with \(0\)
  • torch.ones, for a tensor filled with \(1\)
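As a quick comparison, here is each of these functions creating a tensor of the same size (how sizes work is explained in detail below):

print(torch.zeros(2, 3))   # all elements are 0
print(torch.ones(2, 3))    # all elements are 1
print(torch.rand(2, 3))    # uniform on [0, 1)
print(torch.randn(2, 3))   # standard normal distribution
print(torch.empty(2, 3))   # uninitialized (whatever happened to be in memory)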

Each argument you pass to these methods represents the length of one dimension. Consequently, the number of arguments determines the number of dimensions of the tensor.

Let's have a look at a few examples:

print(torch.rand(1))

This is a one-dimensional tensor. Its length in the single dimension is 1, so it is a tensor with a single element.

When a tensor has a single element, that element can be returned as a number with the method item:

print(torch.rand(1).item())

print(torch.rand(2))

This is another one-dimensional tensor. Its length in the single dimension is 2.

print(torch.rand(3))

A one-dimensional tensor. Its length in the single dimension is 3.

print(torch.rand(1, 1))
print(torch.rand(1, 1).item())

A two-dimensional tensor. Its length in one dimension is 1 and its length in the other dimension is also 1, so this is also a tensor with a single element.

print(torch.rand(1, 1, 1))

A three-dimensional tensor with a single element.

print(torch.rand(3, 1))

A two-dimensional tensor. Its length in one dimension is 3 and in the other, 1.

print(torch.rand(2, 6))

A two-dimensional tensor. Its length in one dimension is 2 and in the other, 6.

print(torch.rand(2, 1, 5))

A three-dimensional tensor. Its length in one dimension is 2, in a second dimension it is 1, and in the third dimension it is 5.

Play with a few more examples until this all makes sense:

print(torch.rand(2, 2, 5))
print(torch.rand(1, 1, 5))
print(torch.rand(1, 1, 5, 1))
print(torch.rand(2, 3, 5, 2))
print(torch.rand(2, 3, 5, 2, 4))
print(torch.rand(3, 5, 4, 2, 1))

Getting information

You can get the dimension of a tensor with the method dim:

print(torch.rand(3, 5, 4, 2, 1).dim())

And its size with the method size:

print(torch.rand(3, 5, 4, 2, 1).size())
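A tensor's shape attribute is equivalent to calling size():

print(torch.rand(3, 5, 4, 2, 1).shape)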

Creating new tensors of the size of existing ones

All these tensor-creation methods can be appended with _like to create new tensors of the same size as existing ones:

x = torch.rand(2, 4)
print(x)

y = torch.zeros_like(x)
print(y)

x.size() == y.size()
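The other creation methods work the same way; for instance, with the same x:

print(torch.ones_like(x))
print(torch.rand_like(x))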

Operations

Let's take the addition as an example:

Note: the tensors need to have matching sizes.

x = torch.rand(2)
y = torch.rand(2)

print(x)
print(y)

The addition can be done with either of:

print(x + y)
print(torch.add(x, y))
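Other elementwise operations follow the same pattern; for instance, subtraction and multiplication:

print(x - y)
print(torch.sub(x, y))
print(x * y)
print(torch.mul(x, y))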

In-place operations

For in-place operations, the methods are post-fixed with _:

print(x)

x.add_(y)
print(x)

x.zero_()
print(x)
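Other in-place methods follow the same convention, for instance fill_:

x.fill_(1.0)    # set all elements of x to 1.0, in place
print(x)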

Data type

PyTorch has a dtype class similar to that of NumPy.

You can assign a data type to a tensor when you create it:

x = torch.rand(2, 4, dtype=torch.float64)

To check the data type of a tensor:

print(x.dtype)

You can also convert it to another type:

x = x.type(torch.float)
print(x.dtype)
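Integer types work the same way; for instance, with torch.int64:

i = torch.zeros(2, 4, dtype=torch.int64)
print(i.dtype)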

Indexing

Indexing works as it does in NumPy:

x = torch.rand(5, 4)
print(x)

print(x[:, 2])
print(x[3, :])
print(x[2, 3])
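Slices also work as they do in NumPy; for instance, with the same x:

print(x[1:3, :2])   # rows 1 and 2, first two columns
print(x[:, -1])     # last column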

Reshaping

You can change the shape and size of a tensor with the method view:

Note: your new tensor needs to have the same number of elements as the old one!

print(x.view(4, 5))
print(x.view(1, 20))
print(x.view(20, 1))

You can even change the number of dimensions:

print(x.view(20))
print(x.view(20, 1, 1))
print(x.view(1, 20, 1, 1))
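If the number of elements does not match, view raises an error. The 5 × 4 tensor above has 20 elements, so asking for a 3 × 7 view fails:

try:
    x.view(3, 7)          # 3 * 7 = 21, not 20
except RuntimeError as error:
    print(error)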

When you set the size in one dimension to -1, it is automatically calculated:

print(x.view(10, -1))
print(x.view(5, -1))
print(x.view(-1, 1))
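You can confirm the inferred length by checking the size; with 20 elements, view(10, -1) gives a 10 × 2 tensor:

print(x.view(10, -1).size())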

GPU

Tensors can be sent to a device (CPU or GPU) with the to method, which returns a copy of the tensor on that device:

x = torch.rand(5, 4)
x.to('cpu')
x.to('cuda')

Of course, if you don't have GPUs, that last one won't work.
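A common pattern is therefore to pick the device conditionally, depending on whether CUDA is available, and to keep the tensor that to returns (it does not move x in place):

device = 'cuda' if torch.cuda.is_available() else 'cpu'
x = x.to(device)
print(x.device)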
