Optimizing Model Inference on CPU

Today, almost all quality control and assurance is done manually. Whole teams go through images one by one to find potential issues, spending up to 30% of their effective time on this single task. AI Consensus Scoring at Hasty aims to surface potential issues automatically, so a user only has to accept or reject the suggestions. The setup relies on GPUs because it would otherwise take too long to produce results, but given the cost of GPUs, we decided to investigate oneAPI alternatives as a possible way forward.

Bogdan Zhurakovskyi