AI compiler support is a crucial enabler for any hardware system: it allows models to be mapped onto and executed efficiently on the device. The UXL interfaces provide unified access to different devices, which is a key requirement for building retargetable AI compilers; end-to-end compiler support for the UXL ecosystem therefore lays the foundation for supporting a variety of devices. In this talk, we will present our MLIR-based AI compiler and its integration with UXL concepts. Alongside the compiler, we provide a lightweight runtime that runs on the device and enables model execution. The runtime uses Level Zero to launch kernels and manage memory efficiently, so any hardware that supports Level Zero can be driven by our software out of the box. Our initial targets are Intel GPUs, with the compiler generating SPIR-V code from models written in ML frameworks such as PyTorch or TensorFlow.