Everything you wanted to know (and more) about PyTorch tensors

Abstract

Before information can be fed to artificial neural networks (ANNs), it needs to be converted to a form they can process: floating point numbers. Indeed, you don't pass a sentence or an image through an ANN; you input numbers representing a sequence of words or pixel values.
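
For example (a minimal sketch; the three-word vocabulary and the random "image" below are made up for illustration):

```python
import numpy as np

# A sentence becomes a sequence of numbers through a vocabulary
# (this tiny vocabulary is invented for illustration).
vocab = {"the": 0, "cat": 1, "sat": 2}
sentence = [vocab[w] for w in "the cat sat".split()]   # [0, 1, 2]

# An image is already numeric: a grid of pixel values,
# here a fake 28x28 grayscale image.
image = np.random.rand(28, 28)
```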

All these floating point numbers need to be stored in a data structure. The best-suited structure is multidimensional (to hold several layers of information) and, since all the data is of the same type, it is an array.

Python already has several multidimensional array structures, the most popular of which is NumPy's ndarray, but the particularities of deep learning call for special characteristics: the ability to run operations on GPUs and/or in a distributed fashion, and the ability to keep track of computation graphs for automatic differentiation.

The PyTorch tensor is a Python data structure with these characteristics. It can also easily be converted to/from NumPy's ndarray and integrates well with other Python libraries such as Pandas.
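
A quick taste of these features (a minimal sketch; the GPU transfer is simply skipped on machines without CUDA):

```python
import numpy as np
import torch

# Keep track of the computation graph for automatic differentiation.
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()
y.backward()
print(x.grad)            # tensor([2., 4., 6.]), since dy/dx = 2x

# Run operations on a GPU when one is available.
if torch.cuda.is_available():
    x_gpu = x.to("cuda")

# Convert to/from NumPy; both conversions share memory
# with the original object (CPU tensors only).
a = np.ones(3)
t = torch.from_numpy(a)  # ndarray -> tensor, zero copy
b = t.numpy()            # tensor -> ndarray, zero copy
```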

In this webinar, suitable for users of all levels, we will take a deep look at this data structure, going well beyond a basic introduction.

In particular, we will:

  • see how tensors are stored in memory
  • look at the metadata that makes this efficient memory storage possible (previewed in the sketch after this list)
  • cover the basics of working with tensors (indexing, vectorized operations…)
  • move tensors to/from GPUs
  • convert tensors to/from NumPy ndarrays
  • see how tensors work in distributed frameworks
  • see how linear algebra can be done with PyTorch tensors

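As a small preview of the first two points (a minimal sketch; note that stride() counts elements, not bytes):

```python
import torch

t = torch.arange(12).reshape(3, 4)

# One contiguous block of 12 elements backs this 3x4 tensor;
# the metadata below tells PyTorch how to walk through it.
print(t.size())     # torch.Size([3, 4])
print(t.stride())   # (4, 1): 4 elements to the next row, 1 to the next column

# Transposing allocates no new storage: only the metadata changes.
u = t.t()
print(u.stride())                      # (1, 4)
print(u.data_ptr() == t.data_ptr())    # True: same underlying memory
```
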
Slides

Both versions open in a new tab.
In the web version, use the left and right arrows to navigate the slides.

Video