Intel Optimizations for Deep Learning Frameworks

Overview: This session helps users learn about Intel's optimizations for TensorFlow and PyTorch. It introduces the methodologies behind these optimizations, provides an installation guide, and shows how to use them through hands-on exercises. We will take a deep dive into Intel's optimization methodologies, such as operator optimization, operator fusion, parallelism, and vectorization, as applied to the two most widely used deep learning frameworks, TensorFlow and PyTorch. We will also briefly introduce oneDNN, a library designed specifically to accelerate common deep learning operators, such as convolution and pooling, on Intel platforms. Demonstrations for both TensorFlow and PyTorch will showcase how users can install the Intel optimizations for these two frameworks, how to use them, and how they boost performance on Intel Xeon Scalable processors.
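As a toy illustration of two of the methodologies named above, vectorization and operator fusion, here is a plain-NumPy sketch (it does not use Intel's libraries; the function names are made up for this example). Expressing an elementwise computation as whole-array operations lets the underlying library use SIMD instructions, and a fusing runtime can further combine the multiply and add into a single pass over memory:

```python
import numpy as np

def scale_shift_loop(x, a, b):
    """Naive elementwise a*x + b using a Python loop (one scalar op at a time)."""
    out = np.empty_like(x)
    for i in range(x.size):
        out[i] = a * x[i] + b
    return out

def scale_shift_vectorized(x, a, b):
    """The same computation as whole-array (vectorized) operations.
    An optimizing runtime can also fuse the multiply and the add into
    one pass over memory, which is the idea behind operator fusion."""
    return a * x + b

x = np.arange(1000, dtype=np.float32)
assert np.allclose(scale_shift_loop(x, 2.0, 1.0),
                   scale_shift_vectorized(x, 2.0, 1.0))
```

In frameworks like TensorFlow and PyTorch, oneDNN applies the same ideas to heavier operators such as convolution and pooling, using the vector units on Intel CPUs.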

Key Takeaways: Intel boosts TensorFlow and PyTorch performance on Intel platforms through a range of optimization methodologies, including operator optimization, operator fusion, parallelism, and vectorization. Through the demonstrations, you will learn how to install and make use of these optimizations.
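For reference, a sketch of how the installations typically go. The package names `intel-tensorflow` and `intel-extension-for-pytorch` are the PyPI names at the time of writing, and exact versions and flags vary by platform, so check Intel's documentation before relying on these commands:

```shell
# Intel-optimized TensorFlow build (oneDNN-enabled) from PyPI
pip install intel-tensorflow

# Stock PyTorch plus the Intel Extension for PyTorch
pip install torch
pip install intel-extension-for-pytorch

# Recent stock TensorFlow builds also ship with oneDNN; on some
# versions it is toggled via an environment variable:
export TF_ENABLE_ONEDNN_OPTS=1
```

The session demos walk through these steps in more detail on Intel Xeon Scalable processors.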

Agenda:

45 min: Introduction to Intel Optimization for TensorFlow, including a demo

45 min: Introduction to Intel Optimization for PyTorch, including a demo

If you are an AI Developer, ML Engineer, Data Scientist, or Researcher, this session is for you!


Register for the oneAPI DevSummit hosted by UXL.