oneAPI DevSummit for AI 2023: The Key Takeaways

Dive into the dynamic world of AI innovation as we recap the highlights from the oneAPI DevSummit for AI 2023. From groundbreaking drop-in optimizations to cutting-edge AI tools, immersive workshops and thought-provoking discussions, this event delivered a glimpse into the future of AI technologies and trends. 

Read on as we unveil the key takeaways shaping the landscape of AI development and propelling your skills forward.

The event provided an exceptional hub of learning for researchers, data scientists and developers to gather profound insights. It featured an array of technical talks, hands-on workshops and live demonstrations that showcased the remarkable capabilities of oneAPI tools. Esteemed experts unveiled the latest trends, technologies and techniques in AI and oneAPI, creating a platform for in-depth exploration. Attendees were treated to enlightening sessions covering a spectrum of subjects, including seamless optimizations within popular frameworks and libraries, AI tools for end-to-end development, and engaging workshops that showcased the power of AI-optimized hardware. Panel discussions also offered participants a chance to interact with industry leaders to expand knowledge, forge connections and stay at the forefront of AI and oneAPI advancements. 

Shaping the Future of AI with PyTorch

In this fascinating keynote, Lucy Hyde, Program Manager and Data Scientist at the Linux Foundation specializing in machine learning, shared insights into the significance of PyTorch in the field of artificial intelligence and its impact on various applications. Hyde emphasized PyTorch’s role in accelerating the analysis of vast amounts of data, especially in digital forensics and technical exploitation for government agencies. By using PyTorch for object detection and text classification, Hyde’s team improved the efficiency and accuracy of data analysis, aiding in the pursuit of individuals involved in terrorism and human trafficking.

Also highlighted in this talk was the recent growth and milestones of the PyTorch Foundation—now housed at the Linux Foundation—which focuses on aiding users in navigating the PyTorch ecosystem, attracting talent and fostering open-source AI tech. The PyTorch community saw remarkable growth in early 2023, with a 36% surge in commits, 2,000+ new contributors, and an 18% rise in community input.

Hyde also introduced PyTorch 2.0, highlighting its enhanced developer experience, flexibility and performance improvements. PyTorch 2.0 optimizes deep learning workflows, enhances hardware acceleration and simplifies the parallelization of computations across multiple GPUs. 
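
PyTorch 2.0’s flagship feature is torch.compile, which wraps an existing model in a single call. A minimal sketch of the pattern (the toy network and shapes below are illustrative, not taken from the keynote):

```python
import torch
import torch.nn as nn

# A small example network; any nn.Module works the same way.
model = nn.Sequential(
    nn.Linear(128, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# torch.compile (new in PyTorch 2.0) captures the model's graph and
# compiles it into optimized kernels; the call site stays the same.
compiled_model = torch.compile(model)

x = torch.randn(32, 128)
y = compiled_model(x)  # the first call triggers compilation; later calls reuse it
print(y.shape)  # torch.Size([32, 10])
```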

The speaker went on to talk about generative AI and touched on PyTorch’s role in speeding up the generation process and enabling faster training of large models. Hyde also discussed the concept of parallelism and how PyTorch provides tools and libraries to harness the power of multiple GPUs effectively.
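
As an illustration of that multi-GPU tooling, here is a minimal sketch using PyTorch’s built-in DistributedDataParallel. It is a generic pattern with made-up shapes, not code from the keynote, and it assumes a machine with NVIDIA GPUs launched via torchrun:

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK and WORLD_SIZE for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = nn.Linear(128, 10).cuda(local_rank)
    # DDP keeps one model replica per GPU and all-reduces gradients.
    model = DDP(model, device_ids=[local_rank])

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    x = torch.randn(32, 128).cuda(local_rank)
    target = torch.randint(0, 10, (32,)).cuda(local_rank)

    optimizer.zero_grad()
    loss = loss_fn(model(x), target)
    loss.backward()  # gradients are synchronized across GPUs here
    optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()  # launch with: torchrun --nproc_per_node=<num_gpus> script.py
```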

Additionally, Hyde introduced PiPPy, a PyTorch repository for pipeline parallelism that splits a model into stages across multiple devices, making it possible to train models too large to fit on a single GPU. Hyde showed how PiPPy streamlines the parallelization process and discussed its potential in real-world applications.

Unlocking and Accelerating Generative AI with OpenVINO™ Toolkit

Generative AI can create images, text and even solutions to problems on its own, and it has surged in popularity because of how well it performs these tasks. Putting it to work is harder: the models demand substantial computing power, and deploying them in real-world situations can be tricky.

In this tech talk, Ria Cheruvu, AI Software Architect and Evangelist at Intel, explored the role of OpenVINO™ in accelerating the end-to-end process of building, optimizing and deploying generative AI.

Cheruvu began by explaining generative AI’s core concept, which involves generating and manipulating data at a large scale with flexibility. The speaker highlighted various creative applications of generative AI and its growing market. Cheruvu continued by introducing the specific example of Stable Diffusion, a popular application for high-quality image generation, and identified common pain points in generative AI, such as large model sizes, high memory usage, slow inference speed and limited hardware flexibility. Cheruvu emphasized that OpenVINO aims to address these challenges by reducing model size and memory usage, improving inference speed, and enhancing hardware flexibility.

Cheruvu continued by giving a live demonstration of text-to-image generation using Stable Diffusion on different hardware, showcasing the benefits of OpenVINO. The speaker explained that OpenVINO allows developers to convert and optimize models for various Intel hardware, including CPUs and GPUs. Cheruvu briefly discussed the components of the Stable Diffusion pipeline, including the text encoder, image information creator and image decoder. Cheruvu also shared code snippets to illustrate how to convert models to OpenVINO format and run them for high-performance inference.
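
The talk’s own snippets aren’t reproduced here, but one common version of that convert-and-run flow is to export a PyTorch model to ONNX and load it with the OpenVINO runtime. A sketch under those assumptions, with a small torchvision model standing in for a Stable Diffusion pipeline stage:

```python
import torch
import torchvision
from openvino.runtime import Core

# Export a PyTorch model to ONNX (a stand-in for one pipeline stage).
model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "model.onnx")

# Read the model and compile it for a target Intel device ("CPU", "GPU", ...).
core = Core()
ov_model = core.read_model("model.onnx")
compiled = core.compile_model(ov_model, device_name="CPU")

# Run inference; results are keyed by output port.
result = compiled([dummy.numpy()])
logits = result[compiled.output(0)]
print(logits.shape)  # (1, 1000)
```

Swapping device_name to "GPU" retargets the same compiled model to Intel graphics, which is the hardware-flexibility point the demo made.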

The speaker encouraged the audience to explore OpenVINO’s capabilities for generative AI, provided QR codes for resources and documentation, touched on upcoming trends in large language models, and shared information about the range of models OpenVINO supports.

Using PyTorch to Predict Wildfires

The pressing menace of forest fires took center stage in this workshop given by Bob Chesebrough, Technical Evangelist at Intel, and Rahul Unnikrishnan Nair, Machine Learning Architect. 

The speakers delved into the devastation wildfires can cause to ecosystems, wildlife and human lives, and emphasized the role of machine learning in prediction and mitigation. While traditional methods have been used for prediction, machine learning offers significant improvements in accuracy.

Chesebrough and Nair conducted a workshop demonstrating how to use image analysis to predict the likelihood of forest fires based on historical data from the MODIS (Moderate Resolution Imaging Spectroradiometer) dataset. They employed Intel Extension for PyTorch, a oneAPI-powered extension, to optimize and accelerate the PyTorch-based model. The focus was on fine-tuning a pre-trained ResNet model using aerial photos. The workshop covered various components, including utility functions, a trainer class, a model class and a metrics class.
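
A minimal sketch of the Intel Extension for PyTorch fine-tuning pattern the workshop built on; the shapes, the binary fire/no-fire head and the placeholder tensors are illustrative, not the workshop’s exact code:

```python
import torch
import torch.nn as nn
import torchvision
import intel_extension_for_pytorch as ipex

# Load a pre-trained ResNet and swap the head for binary
# fire / no-fire classification.
model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)
model.train()

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# ipex.optimize applies oneDNN-backed optimizations to the model
# and optimizer for Intel hardware.
model, optimizer = ipex.optimize(model, optimizer=optimizer)

loss_fn = nn.CrossEntropyLoss()

# Placeholder batch standing in for MODIS-derived aerial image tiles.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```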

The use of machine learning in predicting forest fires can help mitigate their impact by providing early warnings and enabling timely evacuation of affected areas. By analyzing satellite images and other data sources, machine learning models can identify patterns and predict the likelihood of a fire outbreak. This information can be used to allocate resources more effectively and take preventive measures such as controlled burns to reduce the risk of a wildfire.

The session concluded with the creation of a confusion matrix to assess the accuracy of the predictive model.
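
For reference, a confusion matrix like the workshop’s can be produced in a few lines with scikit-learn; the labels below are invented for illustration:

```python
from sklearn.metrics import confusion_matrix

# Ground-truth vs. predicted labels (1 = fire, 0 = no fire); values are made up.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Rows are actual classes, columns are predicted classes.
print(confusion_matrix(y_true, y_pred))
# [[3 1]
#  [1 3]]
```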

Intel Cloud Optimization Modules

In this informative talk, Ben Consolvo, AI Solutions Engineering Manager at Intel, shared insights into cutting-edge, cloud-native, open-source reference architectures strategically designed to help developers build and deploy optimized AI solutions on top cloud service providers (CSPs) like Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform. Each architecture is a comprehensive module accompanied by a full instruction set and the entire source code, available on GitHub for seamless exploration.

Consolvo introduced the Intel Cloud Optimization Modules (ICOMs), which are open-source code bases containing AI software optimizations tailored for the various CSPs mentioned above. These modules are designed to assist enterprise Cloud AI developers in optimizing their workflows. Consolvo outlined the technical stack used, including Intel Xeon CPUs, AI frameworks like PyTorch and Hugging Face Transformers, DevOps tools like Kubernetes and Docker, and cloud services from different CSPs. Each ICOM package consists of essential components: a GitHub repository with detailed implementation steps, a white paper providing a comprehensive overview, a cheat sheet for quick reference, video tutorials for practical guidance, and office hours for one-on-one assistance.

The speaker briefly introduced a few specific ICOMs, including those for SageMaker integration on AWS, Kubernetes deployment on AWS, Azure Kubernetes Service (AKS) for Azure and Kubeflow pipelines. Each of these modules caters to different cloud environments and use cases. Consolvo also highlighted a landing page where you can explore and access all the ICOMs, register for office hours and connect with the DevHub Discord community for ongoing discussions and support.

Overview of Other Notable Sessions

In a talk titled “TraffiKAI: An AI-Powered Solution for Efficient Traffic Management,” the speakers introduced TraffiKAI, an AI and ML solution addressing traffic congestion and emergency vehicle challenges. TraffiKAI comprises Dynamic Traffic Signaling and Emergency Vehicle Detection components, aiming to improve traffic efficiency. It utilizes the Intel AI Analytics Toolkit with oneAPI’s Deep Neural Network Library (oneDNN), integrating with TensorFlow for enhanced model performance. Dynamic Traffic Signaling adapts signal lights based on lane density using object detection algorithms. Emergency Vehicle Detection combines audio and video analysis for precise identification through ensemble learning. In essence, TraffiKAI seeks to enhance traffic systems and management through dynamic signaling and accurate detection.
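
As a side note on the oneDNN integration mentioned above: since TensorFlow 2.9, the oneDNN optimizations ship in stock TensorFlow and are toggled by an environment variable. A generic way to make sure they are active (this is the standard mechanism, not TraffiKAI’s code):

```python
import os

# Must be set before TensorFlow is imported; "1" enables oneDNN
# optimizations (already the default on Linux x86 since TensorFlow 2.9).
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"

import tensorflow as tf

print(tf.__version__)
# When oneDNN is active, TensorFlow logs a message like
# "oneDNN custom operations are on ..." at startup.
```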

Another notable session, titled “Multiarchitecture Hardware Acceleration of Hyperdimensional Computing with oneAPI,” included discussion of Hyperdimensional Computing (HDC), a machine-learning method inspired by how the cerebellum processes data. The research presented in the talk focused on harnessing the power of HDC through oneAPI libraries with SYCL, which allows for the creation of specialized accelerators on various hardware platforms, including CPUs, GPUs and FPGAs. The primary objective of the research was to assess and compare the performance of HDC training and inference tasks across these hardware platforms. The study evaluated an Intel Xeon Platinum 8256 CPU, an 11th-generation Intel UHD GPU and an Intel Stratix 10 FPGA, revealing some noteworthy findings. Notably, the GPU implementation emerged as the fastest for training HDC models, emphasizing its ability to rapidly process and learn from data. On the other hand, the FPGA implementation demonstrated the lowest inference latency, signifying its efficiency in making quick predictions based on previously trained HDC models. This research ultimately provides valuable insights into optimizing HDC-based machine learning applications for specific hardware configurations, offering promising solutions for high-dimensional data processing challenges.
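
To make the HDC train/infer pattern concrete, here is a minimal NumPy sketch of the core idea: random bipolar hypervectors bundled into class prototypes, with nearest-prototype inference. It is purely conceptual and unrelated to the SYCL accelerator implementations benchmarked in the talk:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000       # hypervector dimensionality
n_features = 64  # input feature count (illustrative)

# Map each feature index to a random bipolar (+1/-1) hypervector.
item_memory = rng.choice([-1, 1], size=(n_features, D))

def encode(x):
    # Weight each feature's hypervector by its value, bundle by summing,
    # then binarize back to a bipolar hypervector.
    return np.sign(x @ item_memory)

def train(samples, labels, n_classes):
    # "Training" is just bundling the encoded samples of each class
    # into a class prototype.
    prototypes = np.zeros((n_classes, D))
    for x, y in zip(samples, labels):
        prototypes[y] += encode(x)
    return np.sign(prototypes)

def infer(x, prototypes):
    # Inference picks the class whose prototype is most similar.
    return int(np.argmax(prototypes @ encode(x)))

samples = rng.normal(size=(100, n_features))
labels = rng.integers(0, 2, size=100)
prototypes = train(samples, labels, n_classes=2)
print(infer(samples[0], prototypes))
```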

You can watch all the fascinating and enlightening sessions held at the oneAPI DevSummit for AI 2023 here to get a comprehensive and immersive learning experience that will enrich your professional growth and development.

Be part of the oneAPI Community, and participate in the AI Special Interest Group (SIG), which hosts discussions and presentations focused on the functions and interfaces needed to enable applications using machine learning and deep neural networks alongside more traditional AI algorithms. The oneAPI specification defines the oneDNN (with Graph), oneCCL and oneDAL APIs as building blocks for deep learning applications and frameworks.

Check out upcoming oneAPI events, and stay updated by joining the oneAPI Community on LinkedIn.
