Intel oneDNN AI Optimizations Enabled as Default in TensorFlow

What’s New: In the latest release of TensorFlow 2.9, the performance improvements delivered by the Intel® oneAPI Deep Neural Network Library (oneDNN) are turned on by default. This applies to all Linux x86 packages and to CPUs with neural-network-focused hardware features (such as the AVX512_VNNI and AVX512_BF16 vector extensions and the AMX matrix extensions, which maximize AI performance through efficient compute-resource usage, improved cache utilization and efficient numeric formats) found on 2nd Gen Intel® Xeon® Scalable processors and newer CPUs. The optimizations enabled by oneDNN accelerate key performance-intensive operations such as convolution, matrix multiplication and batch normalization, delivering up to 3x performance improvements compared to versions without oneDNN acceleration.
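Since the optimizations are on by default in TensorFlow 2.9, no code changes are required to use them; the TF_ENABLE_ONEDNN_OPTS environment variable can be set before TensorFlow is imported to toggle them for comparison. Below is a minimal sketch (not from the article) that keeps the default enabled and exercises two of the accelerated operations; the shapes and layer choices are illustrative only.

```python
# Minimal sketch: exercising oneDNN-accelerated TensorFlow ops.
# Assumes TensorFlow 2.9+ on Linux x86; TF_ENABLE_ONEDNN_OPTS must be set
# before TensorFlow is imported ("0" disables the oneDNN path for comparison).
import os

os.environ.setdefault("TF_ENABLE_ONEDNN_OPTS", "1")  # keep the default (oneDNN on)

import tensorflow as tf

# Convolution and matrix multiplication are among the operations oneDNN accelerates.
x = tf.random.normal([32, 224, 224, 3])
conv = tf.keras.layers.Conv2D(64, 3, activation="relu")
y = conv(x)

a = tf.random.normal([1024, 1024])
b = tf.random.normal([1024, 1024])
c = tf.linalg.matmul(a, b)

print("Conv output shape:", y.shape)
print("MatMul output shape:", c.shape)
```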

May 25, 2022

