Zoom PyTorch tensors

Last updated: January 4, 2023

Zoom session 2 – part 2

Topic: In this session, we will familiarize ourselves with PyTorch tensors.

PyTorch's central object is the tensor. Tensors are very similar to NumPy's ndarrays, but they can also be computed on GPUs, which makes them extremely well suited to building neural networks.

In this lesson, we will explore tensors: how to create them, run operations on them, and most importantly, what they really represent.

Setup

This lesson does not require a lot of computing power, and we want it to be interactive so that we can explore how PyTorch tensors work. This is a perfect case for an interactive session on a cluster with salloc or, as we will be doing here, for JupyterLab.

Start JupyterLab

First, launch a JupyterLab session (or re-open it if you already launched it).

Navigate to a sensible place

Familiarize yourself with the navigation pane, create a directory called workspace, and enter it.

Start a Jupyter notebook with the Python 3 kernel

Start a notebook and rename it to something that makes sense to you (e.g. tensors.ipynb).

Load PyTorch

First, we need to load the main package from the PyTorch library. It is called torch:

import torch
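
If you want to check which version of the library was loaded, the package exposes a version string:

print(torch.__version__)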

Dimensions and sizes

PyTorch's tensors are homogeneous multidimensional arrays.

You can create them with a variety of methods such as:

  • torch.rand, for a tensor filled with random numbers from a uniform distribution on \([0, 1)\)
  • torch.randn, for a tensor filled with numbers from the standard normal distribution
  • torch.empty, for an uninitialized tensor
  • torch.zeros, for a tensor filled with \(0\)
  • torch.ones, for a tensor filled with \(1\)

Each argument you pass to these methods is the length of one dimension. Consequently, the number of arguments determines the number of dimensions of the tensor.

Let's have a look at a few examples:

print(torch.rand(1))

This is a one-dimensional tensor. Its length in its single dimension is 1, so it is a tensor with a single element.

When a tensor has a single element, that element can be returned as a Python number with the method item:

print(torch.rand(1).item())
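
Note that item only works on tensors with exactly one element; calling it on a larger tensor raises an error:

# print(torch.rand(2).item())    # this would raise an error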

print(torch.rand(2))

This is another one-dimensional tensor. Its length in its single dimension is 2.

print(torch.rand(3))

A one-dimensional tensor. Its length in its single dimension is 3.

print(torch.rand(1, 1))
print(torch.rand(1, 1).item())

A two-dimensional tensor. Its length in one dimension is 1 and its length in the other dimension is also 1, so this is also a tensor with a single element.

print(torch.rand(1, 1, 1))

A three-dimensional tensor with a single element.

print(torch.rand(3, 1))

A two-dimensional tensor. Its length in one dimension is 3 and in the other, 1.

print(torch.rand(2, 6))

A two-dimensional tensor. Its length in one dimension is 2 and in the other, 6.

print(torch.rand(2, 1, 5))

A three-dimensional tensor. Its length in one dimension is 2, in a second dimension it is 1, and in the third dimension it is 5.

Play with a few more examples until this all makes sense:

print(torch.rand(2, 2, 5))
print(torch.rand(1, 1, 5))
print(torch.rand(1, 1, 5, 1))
print(torch.rand(2, 3, 5, 2))
print(torch.rand(2, 3, 5, 2, 4))
print(torch.rand(3, 5, 4, 2, 1))

Getting information

You can get the number of dimensions of a tensor with the method dim:

print(torch.rand(3, 5, 4, 2, 1).dim())

And its size with the method size:

print(torch.rand(3, 5, 4, 2, 1).size())
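
The attribute shape is equivalent to calling size, and numel gives the total number of elements:

x = torch.rand(3, 5, 4, 2, 1)
print(x.shape)       # same as x.size()
print(x.numel())     # 3 * 5 * 4 * 2 * 1 = 120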

Creating new tensors of the size of existing ones

All these tensor-creation methods can be appended with _like to create new tensors of the same size as an existing one:

x = torch.rand(2, 4)
print(x)

y = torch.zeros_like(x)
print(y)

x.size() == y.size()
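
The same works, for instance, with ones_like, rand_like, and empty_like:

print(torch.ones_like(x))
print(torch.rand_like(x))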

Operations

Let's take the addition as an example:

Note: the tensors need to have matching (or broadcastable) sizes.

x = torch.rand(2)
y = torch.rand(2)

print(x)
print(y)

The addition can be done with either of:

print(x + y)
print(torch.add(x, y))
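
torch.add also accepts an out argument to write the result into an existing tensor, and an alpha factor that multiplies the second operand before the addition:

z = torch.empty(2)
torch.add(x, y, out=z)           # the result is stored in z
print(z)

print(torch.add(x, y, alpha=2))  # computes x + 2 * y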

In-place operations

For in-place operations, the methods are suffixed with an underscore (_):

print(x)

x.add_(y)
print(x)

x.zero_()
print(x)
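
Many other methods follow the same convention, for example fill_ and mul_:

x.fill_(3.5)     # fill x with the value 3.5, in place
print(x)

x.mul_(2)        # multiply x by 2, in place
print(x)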

Data type

PyTorch has a dtype class similar to that of NumPy.

You can assign a data type to a tensor when you create it:

x = torch.rand(2, 4, dtype=torch.float64)

To check the data type of a tensor:

print(x.dtype)

You can also modify it with:

x = x.type(torch.float)
print(x.dtype)
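
The to method (which we will also use below to move tensors between devices) accepts a dtype as well:

x = x.to(torch.float64)
print(x.dtype)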

Indexing

Indexing works as it does in NumPy:

x = torch.rand(5, 4)
print(x)

print(x[:, 2])
print(x[3, :])
print(x[2, 3])
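
Slices with a step and negative indices also work as they do in NumPy:

print(x[0:3, :])     # first three rows
print(x[::2, :])     # every other row
print(x[-1, -1])     # last element of the last row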

Reshaping

You can change the shape of a tensor with the method view:

Note: your new tensor needs to have the same number of elements as the old one!

print(x.view(4, 5))
print(x.view(1, 20))
print(x.view(20, 1))

You can even change the number of dimensions:

print(x.view(20))
print(x.view(20, 1, 1))
print(x.view(1, 20, 1, 1))

When you set the size of one dimension to -1, it is inferred from the remaining dimensions:

print(x.view(10, -1))
print(x.view(5, -1))
print(x.view(-1, 1))
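
view requires the tensor's data to be contiguous in memory; the method reshape is a more flexible alternative that copies the data when needed:

print(x.reshape(4, 5))      # same result as x.view(4, 5) here
print(x.t().reshape(20))    # works even though the transpose x.t() is not contiguous
# print(x.t().view(20))     # this would raise an error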

GPU

Tensors can be sent to a device (CPU or GPU) with the to method:

x = torch.rand(5, 4)

# Send to the CPU
x = x.to('cpu')      # to returns a tensor on the target device; nothing changes here since x is already on the CPU

# Send to a GPU
# x = x.to('cuda')   # This can't work here since we are on a node without a GPU
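
On a node with a GPU, a common pattern is to test for availability and pick the device accordingly (device is just a local variable name here):

device = 'cuda' if torch.cuda.is_available() else 'cpu'
x = x.to(device)
print(x.device)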
