Community support for SYCL is growing, with some of the most powerful supercomputers in the world (including Aurora, Perlmutter and Frontier) adopting the programming model for cutting-edge research. By migrating your code from CUDA to SYCL, you can continue to target Nvidia GPUs while also deploying to a wider range of GPUs from other vendors, including Intel and AMD. This hands-on workshop will introduce the basics of setting up your development environment to target Nvidia GPUs with SYCL using oneAPI, and what you need to know to migrate your code from CUDA to SYCL. Find out how to port incrementally by using interoperability with native CUDA kernel code and libraries, and learn the fundamentals needed to get full performance with SYCL. In addition, learn how you can call CUDA libraries such as cuDNN or cuBLAS directly, or via existing SYCL libraries such as oneDNN, using oneAPI for CUDA.
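To give a flavour of what such a migration looks like, below is a minimal sketch (not taken from the workshop material) of a CUDA-style vector-add kernel expressed as a SYCL kernel; the names (`vector_add`, `N`) and the use of `gpu_selector_v` are illustrative assumptions.

```cpp
#include <sycl/sycl.hpp>
#include <vector>

// CUDA version, for comparison:
//   __global__ void vector_add(const float* a, const float* b, float* c, int n) {
//     int i = blockIdx.x * blockDim.x + threadIdx.x;
//     if (i < n) c[i] = a[i] + b[i];
//   }

int main() {
  constexpr size_t N = 1024;
  std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

  // With the oneAPI plugin for Nvidia GPUs installed, this queue can select a CUDA device.
  sycl::queue q{sycl::gpu_selector_v};

  {
    sycl::buffer<float> buf_a{a.data(), sycl::range<1>{N}};
    sycl::buffer<float> buf_b{b.data(), sycl::range<1>{N}};
    sycl::buffer<float> buf_c{c.data(), sycl::range<1>{N}};

    q.submit([&](sycl::handler& h) {
      sycl::accessor acc_a{buf_a, h, sycl::read_only};
      sycl::accessor acc_b{buf_b, h, sycl::read_only};
      sycl::accessor acc_c{buf_c, h, sycl::write_only, sycl::no_init};
      // One work-item per element replaces the CUDA grid/block launch configuration.
      h.parallel_for(sycl::range<1>{N}, [=](sycl::id<1> i) {
        acc_c[i] = acc_a[i] + acc_b[i];
      });
    });
  } // Buffers go out of scope here, copying results back into c.

  return (c[0] == 3.0f) ? 0 : 1;
}
```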
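The interoperability path for calling CUDA libraries directly can be sketched as follows: a SYCL `host_task` obtains the native CUDA stream and device pointers and hands them to cuBLAS. This is a hedged sketch assuming the DPC++ `ext_oneapi_cuda` backend extension provided by oneAPI for CUDA; the handle and buffer names are illustrative, not prescribed by the workshop.

```cpp
#include <sycl/sycl.hpp>
#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <vector>

int main() {
  constexpr int N = 1024;
  const float alpha = 2.0f;
  std::vector<float> x(N, 1.0f), y(N, 1.0f);

  sycl::queue q{sycl::gpu_selector_v};  // an Nvidia device when using oneAPI for CUDA

  cublasHandle_t handle;
  cublasCreate(&handle);

  {
    sycl::buffer<float> buf_x{x.data(), sycl::range<1>{N}};
    sycl::buffer<float> buf_y{y.data(), sycl::range<1>{N}};

    q.submit([&](sycl::handler& h) {
      sycl::accessor acc_x{buf_x, h, sycl::read_only};
      sycl::accessor acc_y{buf_y, h, sycl::read_write};

      // host_task runs on the host but can access the native CUDA objects behind
      // the SYCL queue and buffers, so an existing cuBLAS call can be reused as-is.
      h.host_task([=](sycl::interop_handle ih) {
        auto stream = ih.get_native_queue<sycl::backend::ext_oneapi_cuda>();
        cublasSetStream(handle, stream);

        auto* d_x = reinterpret_cast<float*>(
            ih.get_native_mem<sycl::backend::ext_oneapi_cuda>(acc_x));
        auto* d_y = reinterpret_cast<float*>(
            ih.get_native_mem<sycl::backend::ext_oneapi_cuda>(acc_y));

        // y = alpha * x + y, computed by cuBLAS on the device.
        cublasSaxpy(handle, N, &alpha, d_x, 1, d_y, 1);
        cudaStreamSynchronize(stream);
      });
    });
  } // Buffer destruction copies y back to the host.

  cublasDestroy(handle);
  return (y[0] == 3.0f) ? 0 : 1;
}
```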