PyTorch is an open-source machine learning library based on the Torch library, used for a variety of applications such as computer vision and natural language processing. It is free and open-source software released under the Modified BSD licence.
PyTorch is primarily developed by Facebook’s AI Research lab (FAIR). An ecosystem has also grown up around the framework, comprising various libraries developed by third-party teams: PyTorch Lightning and Fast.ai (which simplify model training), Pyro (Uber’s probabilistic programming library), Flair (for natural language processing) and Catalyst (for training deep learning and reinforcement learning models).
Clients: Toyota, Salesforce, Stanford University, Udacity
Toyota Research Institute Advanced Development, Inc. (TRI-AD) has reduced machine learning model training time from days to hours by running PyTorch on Amazon EC2 P3 instances. “We continuously optimise and improve our computer vision models, which are critical to TRI-AD’s mission of achieving safe mobility for all with autonomous driving”.
Pinterest has 3 billion images and 18 billion associations connecting those images. The company has developed PyTorch deep learning models to contextualise these images and enable personalised interactions with the user.
Autodesk, a leader in 3D design, engineering and entertainment software, uses deep learning models for use cases ranging from exploring thousands of potential design alternatives and semantically searching designs to streamlining engineering and construction processes and optimising rendering workflows.
Hyperconnect uses PyTorch-based image classification in its video communication application to recognise the user’s current environment.
Most frameworks such as TensorFlow, Theano, Caffe and CNTK have a static view of the world. One has to build a neural network, and reuse the same structure again and again. Changing the way the network behaves means that one has to start from scratch.
PyTorch instead uses a technique called reverse-mode auto-differentiation, which allows you to change the way your network behaves arbitrarily, with zero lag or overhead.
Although this technique is not unique to PyTorch, PyTorch’s implementation is among the fastest available.
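To make the idea concrete, here is a minimal, self-contained sketch of reverse-mode auto-differentiation in plain Python. This toy `Var` class is purely illustrative and is not PyTorch's actual API or implementation; it only shows the core mechanism: a computation graph is recorded on the fly as ordinary code runs, then gradients are propagated backwards through it.

```python
class Var:
    """Toy reverse-mode autodiff value (illustrative, not PyTorch's API)."""

    def __init__(self, value, parents=()):
        self.value = value
        self.grad = 0.0
        # Each parent is recorded with the local derivative of this node
        # with respect to that parent.
        self.parents = parents

    def __add__(self, other):
        # d(a + b)/da = 1, d(a + b)/db = 1
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        # d(a * b)/da = b, d(a * b)/db = a
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        # Accumulate the incoming gradient, then push it to the parents,
        # scaled by each recorded local derivative (chain rule).
        self.grad += seed
        for parent, local_grad in self.parents:
            parent.backward(seed * local_grad)


# The graph is built dynamically as ordinary Python executes, so loops and
# conditionals can reshape the "network" on every call -- the property the
# text contrasts with static-graph frameworks.
x = Var(3.0)
y = Var(4.0)
z = x * y + x      # z = x*y + x
z.backward()
print(x.grad)      # dz/dx = y + 1 = 5.0
print(y.grad)      # dz/dy = x = 3.0
```

Because the graph is rebuilt from scratch on every forward pass, changing the network's behaviour is as simple as changing the Python control flow, with no separate graph-compilation step.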