Level Zero: Latest Developments

In March 2020, Intel released the Level Zero specification, a low-level programming interface for heterogeneous computing devices such as GPUs, FPGAs, and other accelerator architectures. Level Zero is intended to provide the explicit control needed by higher-level runtime APIs and libraries.

Libraries such as oneMKL and oneDPL are part of oneAPI, though oneAPI itself is not implemented on top of these libraries. The oneAPI programming language combines SYCL with a set of SYCL extensions used to define data-parallel functions and to launch them on accelerator devices. oneAPI also includes a set of libraries and toolkits for machine learning, deep learning, computer vision, and other AI workloads and frameworks.
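To make that model concrete, here is a minimal sketch of a SYCL data-parallel kernel launch of the kind oneAPI targets. It is a hand-written illustration rather than code from the oneAPI documentation; the buffer size and the doubling kernel are arbitrary choices.

```cpp
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    // Queue bound to whatever accelerator the SYCL runtime selects by default.
    sycl::queue q{sycl::default_selector_v};

    std::vector<int> data(1024, 1);
    {
        // Buffer makes the host vector visible to the device.
        sycl::buffer<int> buf{data};

        // Define a data-parallel function and launch it on the device.
        q.submit([&](sycl::handler& h) {
            sycl::accessor acc{buf, h, sycl::read_write};
            h.parallel_for(sycl::range<1>{acc.size()}, [=](sycl::id<1> i) {
                acc[i] *= 2;  // double every element
            });
        });
    }  // buffer destruction waits for the kernel and copies results back

    std::cout << "data[0] = " << data[0] << '\n';  // expected: 2
    return 0;
}
```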

The increased control Level Zero provides (device discovery, memory allocation, peer-to-peer communication, inter-process sharing, kernel submission, asynchronous execution and scheduling, synchronization primitives, metrics reporting, and a system management interface) makes it easier to access and program heterogeneous hardware. The current implementation supports Intel graphics, with support for additional architectures planned for the future.
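To illustrate the device-discovery piece of that list, the following is a minimal sketch against the Level Zero C API. The GPU-only initialization flag, querying only the first driver, and collapsing error handling into a single macro are simplifications for this example.

```cpp
#include <level_zero/ze_api.h>
#include <cstdio>
#include <vector>

// Bail out on any failure; real code would handle errors more gracefully.
#define ZE_CHECK(call)                                               \
    do {                                                             \
        ze_result_t r_ = (call);                                     \
        if (r_ != ZE_RESULT_SUCCESS) {                               \
            std::printf("Level Zero error 0x%x\n",                   \
                        static_cast<unsigned>(r_));                  \
            return 1;                                                \
        }                                                            \
    } while (0)

int main() {
    // Initialize the loader/drivers; here only GPU drivers are requested.
    ZE_CHECK(zeInit(ZE_INIT_FLAG_GPU_ONLY));

    // Discover drivers (one per installed Level Zero implementation).
    uint32_t driverCount = 0;
    ZE_CHECK(zeDriverGet(&driverCount, nullptr));
    if (driverCount == 0) { std::printf("no Level Zero driver found\n"); return 1; }
    std::vector<ze_driver_handle_t> drivers(driverCount);
    ZE_CHECK(zeDriverGet(&driverCount, drivers.data()));

    // Discover the devices exposed by the first driver.
    uint32_t deviceCount = 0;
    ZE_CHECK(zeDeviceGet(drivers[0], &deviceCount, nullptr));
    std::vector<ze_device_handle_t> devices(deviceCount);
    ZE_CHECK(zeDeviceGet(drivers[0], &deviceCount, devices.data()));

    for (ze_device_handle_t dev : devices) {
        ze_device_properties_t props = {};
        props.stype = ZE_STRUCTURE_TYPE_DEVICE_PROPERTIES;
        ZE_CHECK(zeDeviceGetProperties(dev, &props));
        std::printf("Found device: %s\n", props.name);
    }
    return 0;
}
```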

 

Where Level Zero Fits and What Role It Plays

The objective of Level Zero is to provide direct-to-the-metal interfaces to offload accelerator devices. Its programming interface can be tailored to any device's needs and can be adapted to support language features such as function pointers, virtual functions, unified memory, and I/O capabilities.

Frequently, Level Zero is not used directly by oneAPI developers, although it is used to implement oneAPI middleware and language runtimes. In the oneAPI software stack, the different hardware types and their corresponding drivers (the target software system) sit at the bottom level; this means that GPUs, FPGAs, and AI architectures other than Intel hardware could potentially be programmed using Level Zero. Above the Level Zero API sit oneAPI, SYCL, and the toolkits. Level Zero provides services for loading and executing programs, allocating memory, and discovering and managing devices, and it controls communication with the device.
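The sketch below illustrates the program-execution side of those services with the Level Zero C API. The driver and device handles are assumed to come from the discovery calls shown earlier, the recorded barrier stands in for real work, and error checking is omitted for brevity.

```cpp
#include <level_zero/ze_api.h>
#include <cstdint>

// Assumes 'driver' and 'device' were obtained via zeDriverGet/zeDeviceGet.
void run_empty_batch(ze_driver_handle_t driver, ze_device_handle_t device) {
    // A context owns command queues, command lists, and memory allocations.
    ze_context_desc_t ctxDesc = {ZE_STRUCTURE_TYPE_CONTEXT_DESC};
    ze_context_handle_t context = nullptr;
    zeContextCreate(driver, &ctxDesc, &context);

    // Command queue: where closed command lists are submitted for execution.
    ze_command_queue_desc_t queueDesc = {ZE_STRUCTURE_TYPE_COMMAND_QUEUE_DESC};
    queueDesc.mode = ZE_COMMAND_QUEUE_MODE_ASYNCHRONOUS;
    ze_command_queue_handle_t queue = nullptr;
    zeCommandQueueCreate(context, device, &queueDesc, &queue);

    // Command list: records work (copies, kernel launches, barriers) on the host.
    ze_command_list_desc_t listDesc = {ZE_STRUCTURE_TYPE_COMMAND_LIST_DESC};
    ze_command_list_handle_t list = nullptr;
    zeCommandListCreate(context, device, &listDesc, &list);

    // Record a barrier as a stand-in for real work, then close the list.
    zeCommandListAppendBarrier(list, nullptr, 0, nullptr);
    zeCommandListClose(list);

    // Submit asynchronously, then block until the device has finished.
    zeCommandQueueExecuteCommandLists(queue, 1, &list, nullptr);
    zeCommandQueueSynchronize(queue, UINT64_MAX);

    // Explicitly release everything the context owns.
    zeCommandListDestroy(list);
    zeCommandQueueDestroy(queue);
    zeContextDestroy(context);
}
```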

While initially influenced by other low-level APIs such as OpenCL and Vulkan, the Level Zero APIs are designed to evolve independently. Likewise, although initially influenced by GPU architecture, they are designed to be supportable across different compute device architectures, such as FPGAs and other types of accelerators.

 

Flexible Memory Allocation

Developers want to utilize the full suite of capabilities of the available hardware to increase performance and efficiency. Level Zero provides the more explicit control needed by higher-level runtime APIs and libraries. Level Zero itself remains low level; the unified runtime layered above it incorporates higher-level functionality that is common to language runtimes. Memory is visible to the upper-level software stack as unified memory, with a single virtual address space covering both the host and a specific device.

The API allows allocation of buffers and images at device and sub-device granularity with full cacheability hints. Buffers are transparent memory accessed through virtual address pointers, while images are opaque objects accessed through handles.

The memory APIs provide methods to allocate device, host, or shared memory and enable both implicit and explicit management of these resources by the application or runtime. The interface also provides query capabilities for all memory objects.
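The following sketch shows the three allocation kinds and the query interface. The context and device handles are assumed to exist, as in the earlier sketches, and error checking is again omitted.

```cpp
#include <level_zero/ze_api.h>
#include <cstdio>

// Assumes 'context' and 'device' were created as in the earlier sketches.
void allocate_and_query(ze_context_handle_t context, ze_device_handle_t device) {
    const size_t size = 1024 * sizeof(float);
    const size_t alignment = 0;  // 0 lets the driver choose a suitable alignment

    ze_device_mem_alloc_desc_t devDesc = {ZE_STRUCTURE_TYPE_DEVICE_MEM_ALLOC_DESC};
    ze_host_mem_alloc_desc_t hostDesc = {ZE_STRUCTURE_TYPE_HOST_MEM_ALLOC_DESC};

    // Device memory: resident on the device, not directly host-accessible.
    void* devicePtr = nullptr;
    zeMemAllocDevice(context, &devDesc, size, alignment, device, &devicePtr);

    // Host memory: a host allocation that the device can also access.
    void* hostPtr = nullptr;
    zeMemAllocHost(context, &hostDesc, size, alignment, &hostPtr);

    // Shared memory: one pointer valid on both host and device, migrated by
    // the driver as needed (the unified-memory view described above).
    void* sharedPtr = nullptr;
    zeMemAllocShared(context, &devDesc, &hostDesc, size, alignment, device, &sharedPtr);

    // Query an allocation: returns its type (host/device/shared) and more.
    ze_memory_allocation_properties_t props = {ZE_STRUCTURE_TYPE_MEMORY_ALLOCATION_PROPERTIES};
    zeMemGetAllocProperties(context, sharedPtr, &props, nullptr);
    std::printf("allocation type: %d\n", static_cast<int>(props.type));

    // Allocations belong to the context and are freed explicitly.
    zeMemFree(context, devicePtr);
    zeMemFree(context, hostPtr);
    zeMemFree(context, sharedPtr);
}
```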

 

Level Zero, oneAPI and Intel oneAPI Toolkits Offer an Alternative to Single-Vendor Solutions

Computer architecture is advancing quickly, which creates challenges for developers in today's data-centric world. Different architectures have traditionally required unique tools, languages, and APIs to optimize performance and accelerate workloads.

oneAPI and Level Zero are helping developers navigate a diverse hardware ecosystem by providing a common set of tools, libraries, and frameworks that are interoperable across heterogeneous systems. Level Zero provides the hardware abstraction layer that enables open software stacks to operate across multiple vendors and accelerator architectures. This reduces, and aims ultimately to eliminate, the vendor lock-in that confines developers to single-vendor solutions.

Intel offers a reference implementation of oneAPI along with a set of toolkits that contain advanced developer tools to help accelerate workloads. The Intel oneAPI Base Toolkit, for example, includes the Data Parallel C++ (DPC++) compiler, optimized libraries, and the DPC++ Compatibility Tool to assist in migrating CUDA code to DPC++, as well as advanced analysis and debug tools that enable developers to analyze, optimize, and debug applications across architectures.

The DPC++ compiler delivers parallel programming productivity and enhanced performance across CPUs and a variety of accelerators, and the compatibility tool assists with a one-time migration of CUDA kernels and API calls to new DPC++ code, automating most of the porting process.
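For a sense of what migrated code looks like, the snippet below is a hand-written sketch of a SAXPY kernel in DPC++/SYCL. It is not actual DPC++ Compatibility Tool output; the function name, parameters, and the unified shared memory (USM) assumption are illustrative only.

```cpp
#include <sycl/sycl.hpp>

// y = a * x + y, with x and y assumed to be USM pointers
// (e.g., from sycl::malloc_shared). In a CUDA kernel the global index would
// typically be computed as blockIdx.x * blockDim.x + threadIdx.x; in SYCL the
// runtime supplies it through the work-item id.
void saxpy(sycl::queue& q, float a, const float* x, float* y, size_t n) {
    q.parallel_for(sycl::range<1>{n}, [=](sycl::id<1> i) {
        y[i] = a * x[i] + y[i];
    }).wait();
}
```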

The Base Toolkit also includes performance-optimized libraries across several domains, such as math, data analytics, deep learning, threading, and video processing, as well as the optimized Intel Distribution for Python and additional performance libraries. Along with the Base Toolkit, Intel offers add-on toolkits that target specialized workloads for high-performance computing (HPC), IoT, rendering, deep learning frameworks, and more.

Together, Level Zero, oneAPI, and the Intel oneAPI Toolkits enable companies to build their own implementations of oneAPI for heterogeneous hardware.

 

Level Zero Facilitates Open Innovation 

Level Zero is an open standard, which many believe is more beneficial than a proprietary approach because it leverages contributions from the developer community and allows developers to choose the languages that work best for them within the software and hardware stack.

“Looking forward, we are leveraging our experiences with Python, Julia, and Java to provide better language runtime support in Level Zero,” stated Robert Cohn, Sr. Principal Engineer, Intel Development Tools Software, in a recent, self-penned article. 

With the growth of graphics processors, several heterogeneous programming paradigms have been proposed in the last few years. The Intel Graphics Compute Runtime for the oneAPI Level Zero Driver is currently an open-source project providing compute API support for Intel graphics hardware architectures. As noted above, although the Level Zero APIs were initially influenced by GPU architecture, they are designed to be supportable across different compute device architectures such as FPGAs and other accelerators, in contrast to CUDA, which is supported only on NVIDIA graphics cards.

Cohn concluded his article by stating that Intel is “tirelessly improving and augmenting open-source software for the oneAPI platform, ensuring it integrates into popular frameworks, and working with the ever-growing community of developers.” 

No matter how the future of accelerator development tools turns out, we can be sure that Level Zero is an advantageous way to access heterogeneous hardware: it gives developers the opportunity to fine-tune applications to make full use of the available resources, and it boosts the portability of high-level applications that run on heterogeneous hardware.


Get Involved and Review the oneAPI Specification

Learn about the latest oneAPI updates, industry initiatives, and news. Check out our videos and podcasts. Visit our GitHub repo to review the spec and give feedback, or join the conversation happening now on our Discord channel. Then get inspired, network with peers, and participate in oneAPI events.
