
PyTorch Introduction

PyTorch is an open-source machine learning framework originally developed by Meta AI. In September 2022, Meta shifted governance of the project to the newly created PyTorch Foundation, a subsidiary of the Linux Foundation.

PyTorch 2.0 was released on 15 March 2023.

PyTorch

PyTorch is now a popular choice among AI developers and researchers. If you are familiar with Python, diving into PyTorch will be fairly easy, since it builds on core Python concepts like classes and inheritance.

PyTorch is known for its dynamic computational graph: the graph is built on the fly as operations execute, which allows flexible, Pythonic model definitions without sacrificing performance.
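
A short sketch of what "dynamic" means in practice: because the graph is constructed as the code runs, ordinary Python control flow can decide which operations become part of the graph, and autograd still computes the right gradient.

```python
import torch

# The graph is built on the fly as operations execute, so ordinary
# Python control flow (if/else) can shape the computation.
x = torch.tensor(2.0, requires_grad=True)
if x > 0:
    y = x ** 2
else:
    y = -x
y.backward()
print(x.grad)  # gradient of x**2 at x=2 is 4
```

Here only the branch that actually ran (x ** 2) is recorded, and backward() differentiates exactly that path.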


PyTorch is an open-source machine learning (ML) framework based on the Python programming language and the Torch library, used for applications such as computer vision and natural language processing. It provides two main high-level features: tensor computation (like NumPy) with strong GPU acceleration, and deep neural networks built on an automatic differentiation system.

Key Benefits of using PyTorch

Some important properties of PyTorch are:

  • Rich documentation, tutorials, and a large community at pytorch.org.
  • Pythonic: Easy for Python developers to learn, and it integrates with popular Python libraries such as NumPy for scientific computing, SciPy, and Cython.
  • Automatic Differentiation: The torch.autograd engine records operations on tensors as they run and computes gradients automatically, which is what makes training by backpropagation practical.
  • Dynamic Computational Graph Support: A major benefit of PyTorch over many other deep-learning frameworks; the graph is defined at runtime, so models can use native Python control flow and are easier to inspect.
  • Cloud support: Well-supported by major cloud platforms, providing frictionless development and easy scaling. There are many different platforms and tools that can be used to develop and deploy PyTorch models on the cloud.
  • Easy Debugging: PyTorch is deeply integrated with Python, so many Python debugging tools can be easily used with it. 
  • Excellent GPU Support: With the power of GPUs, you can speed up the computation of your deep-learning models.
  • Eager mode and Graph mode: PyTorch provides the flexibility to use both modes based on the project’s needs. Eager mode allows for more Pythonic and interactive programming, and graph mode for better performance and deployment capabilities.
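
The eager/graph distinction above can be sketched as follows. This assumes PyTorch 2.0 or later, where torch.compile is the entry point for graph mode; compilation is lazy and happens on the first call to the compiled function.

```python
import torch

def f(x):
    # An ordinary eager-mode function: each op runs immediately
    return torch.sin(x) + torch.cos(x)

x = torch.randn(8)
eager_out = f(x)  # eager mode: interactive, easy to debug

# Graph mode via torch.compile (PyTorch 2.0+): the function is
# captured into a graph and optimized for performance/deployment
compiled_f = torch.compile(f)
```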

Who uses PyTorch

  • Facebook (Meta): PyTorch originated at Facebook, so naturally Meta uses its own deep-learning framework.
  • Tesla Autopilot: Tesla uses PyTorch for Autopilot, their self-driving technology.
  • Microsoft: PyTorch is the primary framework to develop models that enable new experiences in Microsoft 365, Bing, Xbox, and more.
  • OpenAI: OpenAI standardised on PyTorch as its primary framework for development. As an AI research and deployment company, the choice makes sense.

PyTorch installation

PyTorch can be installed by following the instructions on the official website, pytorch.org. After installation, you can import it and check the version:

# Importing PyTorch
import torch

# Check PyTorch version
print(torch.__version__)

PyTorch CUDA support

PyTorch provides support for CUDA.

CUDA

To accelerate the training of our model, we can make use of hardware accelerators like GPUs (Graphics Processing Units). If you have an NVIDIA GPU, you can use the Compute Unified Device Architecture (CUDA) API.

CUDA is a parallel computing platform and application programming interface (API) developed by NVIDIA. It lets software use supported GPUs for general-purpose processing, an approach called general-purpose computing on GPUs (GPGPU), which greatly speeds up compute-intensive applications.

The torch.cuda module is used to set up and run CUDA operations. It keeps track of the currently selected GPU, and all CUDA tensors you allocate are created on that device by default.

Check CUDA availability

Before you start using CUDA, you need to check whether CUDA is available in your environment. You can do this with the torch.cuda.is_available() function.

torch.cuda.is_available(): Returns a bool indicating if CUDA is currently available.

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
device

device(type='cuda')

# check CUDA version
print(torch.version.cuda)
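
Putting the device check to use: a common pattern is to create tensors (and models) on the CPU and move them to the selected device with .to(device), which works the same way whether or not a GPU is present.

```python
import torch

# Select GPU if available, otherwise fall back to CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Tensors are created on the CPU by default; .to(device) returns a
# copy on the selected device (a no-op if it is already there)
x = torch.ones(2, 3)
x = x.to(device)
print(x.device)
```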
