The World AI Cannes Festival

Lights, Camera, Action … the red carpet is out and two days of the World AI Cannes Festival (WAICF) get under way on the gorgeous, sunny French Riviera. But behind the glamorous location is the serious topic of Artificial Intelligence, covering everything from use cases and implementation to ethics and, of course, the many achievable benefits.

With an estimated 1,000 delegates attending and an impressive list of around 200 exhibitors, there was an excellent balance between presentations from industry-leading experts and opportunities for valuable discussions and networking. An interesting observation was the high proportion of women representing and presenting at the highest levels, an encouraging sign for the AI industry, and a balance unfortunately missing at many other conferences I attend.

My mission was to explore and encourage the use of open-source and open-standards-based enabling software. The recent formation of the Unified Acceleration Foundation (UXL), bringing together cross-vendor, cross-architecture libraries using SYCL, stimulated interesting discussions. The crucial one was how to bring truly cross-vendor capabilities to existing, well-known frameworks such as LLaMA and PyTorch – though more often it became clear that implementations prefer to use their own frameworks, applications and algorithms. There was certainly a strong desire for programmability, and this will grow as the main markets and major companies want to evolve their own differentiated AI solutions and achieve multi-supplier portability.
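As a flavour of what cross-vendor programmability means in practice, here is a minimal SYCL 2020 sketch (my own illustration, not code shown at the event): the same source can be compiled for Intel, AMD or NVIDIA devices, with the runtime picking whatever accelerator is available.

```cpp
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
  // Default-constructed queue: the SYCL runtime selects an available
  // device (CPU or any vendor's GPU) -- the kernel source is unchanged.
  sycl::queue q;

  constexpr size_t N = 1024;
  std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

  { // Buffer scope: results copy back to the host vectors on destruction.
    sycl::buffer<float> bufA(a.data(), sycl::range{N});
    sycl::buffer<float> bufB(b.data(), sycl::range{N});
    sycl::buffer<float> bufC(c.data(), sycl::range{N});

    q.submit([&](sycl::handler& h) {
      sycl::accessor A{bufA, h, sycl::read_only};
      sycl::accessor B{bufB, h, sycl::read_only};
      sycl::accessor C{bufC, h, sycl::write_only, sycl::no_init};
      h.parallel_for(sycl::range{N}, [=](sycl::id<1> i) { C[i] = A[i] + B[i]; });
    });
  }

  std::cout << "c[0] = " << c[0] << "\n"; // expect 3
  std::cout << "ran on: "
            << q.get_device().get_info<sycl::info::device::name>() << "\n";
}
```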

The four major silicon suppliers were highly visible, with Intel, AMD, NVIDIA and IBM all presenting highly credible solutions for leading performance systems. It was pleasing to see Tenstorrent promoting their RISC-V solution and how the open instruction-set hardware has the potential to grab a chunk of NVIDIA’s dominant position. Dell was promoting a local AI compute use case, running LLaMA locally to quickly digest many, many pages of PDF documents and generate a highly credible executive summary without any cloud offload.

There is no doubt about the more-than-Moore’s-Law advancements experienced in AI over the last decade. LLMs now exceed 100 billion parameters, equating to models needing a minimum of 200GB and putting huge demands on processors for compute, fast storage and fast interconnect. This is trending towards a trillion parameters to give these systems much more intelligence and touch on mature human qualities. Add to this the hardware advances, mostly based on GPUs with some other specialised AI-accelerated processors being evaluated, and programmability and long-term updates become a headache. Hence the need for a strong software architecture that can keep up with this aggressive evolution of AI. Exciting times.
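The 200GB figure is simple arithmetic, sketched below assuming the weights are stored in FP16 at 2 bytes per parameter (an assumption on my part); activations, optimizer state and KV caches push real-world requirements far higher.

```cpp
#include <cstdio>

int main() {
  // Back-of-the-envelope: memory needed just to hold the model weights.
  // Assumes FP16 storage, i.e. 2 bytes per parameter (illustrative only;
  // FP32 doubles it, 8-bit quantization halves it).
  const double bytes_per_param = 2.0;
  for (double params : {100e9, 1e12}) {  // 100 billion and 1 trillion
    double gb = params * bytes_per_param / 1e9;
    std::printf("%.0fB params -> ~%.0f GB of weights\n", params / 1e9, gb);
  }
  // Output:
  //   100B params -> ~200 GB of weights
  //   1000B params -> ~2000 GB of weights
}
```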

Meta AI’s renowned Yann LeCun started by declaring that “ML really sucks”: it is not even close to humans and animals, though machines will eventually surpass human intelligence in all domains, probably within decades. He also highlighted that a four-year-old child will still have seen 50x more data than any LLM today, and he continued to assert that AI platforms must be open source (of course I agree with that sentiment).

The topic of open source was emphasized elsewhere as a route to ensuring the evolution of AI is achieved without fragmentation, one of the biggest risks to the industry. However, EAIF emphasized that Europe is not helping open source in AI, suggesting that its handling of open source is already broken. LLMs are becoming bigger, and the trend is that the bigger they are, the more closed they are, with a higher level of trust needed in models that require 10²⁵ FLOPs for training.

Market segments not using AI are becoming rare – almost everything is already influenced, and the few remaining areas will soon feel the benefit of AI. But caution was widely offered: companies should progress with control and planning when implementing AI into their systems. Fragmented and unplanned strategies lead to confusion and to broken or conflicting solutions being offered by larger companies.

Little was said about hallucinations, a topic that will plague many market segments. Hallucinations arise when an AI machine is trained on inaccurate data or given the freedom to fill in gaps: it generates inaccurate responses and trust is lost. Worse still, when AI-generated output is then used as training input for other machines, they in turn believe the previous data to be good and output even more inaccurate results… a spiral of ever-increasing inaccuracy. Where does that end?
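To make that spiral concrete, here is a deliberately crude toy model (entirely my own assumption, not anything presented at WAICF): each model generation trains on the previous generation’s output, inheriting its error rate and adding fresh hallucinations on top.

```cpp
#include <cstdio>

int main() {
  // Toy feedback loop: generation N+1 trains on generation N's output.
  // The 2% rates are invented purely for illustration.
  double error = 0.02;              // fraction of inaccurate content, gen 0
  const double fresh = 0.02;        // new hallucinations added each generation
  for (int gen = 0; gen <= 10; ++gen) {
    std::printf("generation %2d: ~%4.1f%% inaccurate\n", gen, error * 100.0);
    error += (1.0 - error) * fresh; // previously-correct content degrades too
  }
  // Errors only accumulate: without fresh, curated human data, the
  // fraction of inaccurate output creeps steadily towards 100%.
}
```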

The future of AI is of course encouraging; it would be strange to conclude with any other message at an AI conference. But the risks being flagged are plentiful, with hallucinations and misinformation at the centre of it all. VDE wrapped it up by defining innovation as novelty plus benefit, and responsibility as understanding, risk mitigation and accountability – both impossible without a human role, human intelligence and creativity. This statement is at least trying to tell us that humans are still needed in this AI world.

Codeplay Software Ltd has published this article only as an opinion piece. Although every effort has been made to ensure the information contained in this post is accurate and reliable, Codeplay cannot and does not guarantee the accuracy, validity or completeness of this information. The information contained within this blog is provided “as is” without any representations or warranties, expressed or implied. Codeplay Software Ltd makes no representations or warranties in relation to the information in this post.


Register for the oneAPI DevSummit hosted by UXL:

Register Now