Intel has unveiled a partnership with the IOC and the introduction of a Retrieval Augmented Generation (RAG) solution based on Generative AI (GenAI). To support more than 11,000 athletes from different linguistic backgrounds during the Olympic Games, Intel has developed AthleteGPT.
Intel hosted a roundtable discussion with key representatives from the International Olympic Committee (IOC), Seekr, and Red Hat. The discussion focused on the benefits of an open AI ecosystem for developers and businesses in response to the challenges posed by the rise of artificial intelligence. The event was led by Justin Hotard, executive vice president and general manager of Intel’s Data Center and AI Group, and included insights from Kaveh Mehrabi of the IOC, Steven Huels of Red Hat, Rob Clark of Seekr, and Bill Pearson of Intel.
Key Announcements
Intel unveiled exciting collaborations, including a partnership with the IOC and the introduction of a Retrieval Augmented Generation (RAG) solution based on Generative AI (GenAI). These initiatives highlight how open AI systems, using Intel Gaudi AI accelerators and Intel® Xeon® processors, enable developers and businesses to address AI challenges.
Intel underscored its commitment to making AI accessible through its collaboration with the IOC. The company aims to foster an innovative environment that allows developers and businesses to create custom AI solutions that drive real-world outcomes. By taking an open and collaborative approach, Intel seeks to push the boundaries of what is possible for athletes and customers.
AthleteGPT: Improving the Olympic experience
To support the approximately 11,000 athletes from different linguistic and cultural backgrounds during the Olympic Games, the IOC, in partnership with Intel, has developed AthleteGPT. This chatbot, integrated into the Athlete365 platform, uses Intel’s Gaudi accelerators and Xeon processors to handle athlete requests and provide them with information on demand. AthleteGPT aims to simplify the athlete experience in the Olympic Village in Paris, allowing them to focus on their training and competition.
The importance of GenAI solutions
Deploying GenAI solutions comes with challenges such as cost, scalability, accuracy, and security. RAG plays a critical role in GenAI by enabling businesses to securely leverage proprietary data, improving the accuracy and speed of AI results. This improvement is critical in today’s data-driven landscape as it enhances the quality and utility of AI applications.
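To make the retrieval step concrete, here is a minimal, hypothetical sketch of the RAG pattern: proprietary documents are embedded, the passage most similar to the user's question is retrieved, and that passage is injected into the prompt so the model answers from company data rather than from its training set alone. The embedding model and the sample documents are illustrative assumptions, not part of Intel's solution.

```python
# Minimal RAG sketch (illustrative only): embed documents, retrieve the best
# match for a question, and build an augmented prompt for an LLM.
from sentence_transformers import SentenceTransformer
import numpy as np

documents = [
    "The Olympic Village shuttle runs every 15 minutes from 6:00 to 23:00.",
    "Athletes can book recovery sessions through the Athlete365 portal.",
]

# Embed the proprietary documents once; normalized vectors let a dot product act as cosine similarity.
embedder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ q
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

def build_prompt(query: str) -> str:
    """Augment the user question with retrieved context before calling an LLM."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("When does the village shuttle run?"))
```

In production the same pattern runs against a vector database and a served LLM, which is where the cost, scalability, accuracy, and security considerations above come into play.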
Intel’s collaborative approach involves AI platforms, open standards, and a comprehensive ecosystem of software and systems, enabling developers to create customized GenAI RAG solutions. The progress presented at the event underscores Intel’s commitment to delivering robust, scalable generative AI solutions across multiple vendors.
How the GenAI RAG solution works
Intel is working with industry partners to develop an open source, interoperable solution that makes RAG easy to deploy. The GenAI solution, built on the Open Platform for Enterprise AI (OPEA) foundation, gives enterprises a production-ready approach. It is designed to be flexible and customizable, integrating components from a catalog of offerings from various OEMs and industry partners.
The solution integrates OPEA-based microservice components into a scalable RAG framework deployed on Xeon and Gaudi AI systems. It scales efficiently using orchestration frameworks such as Kubernetes and Red Hat OpenShift, and it exposes standardized APIs with security and system telemetry, as illustrated in the sketch below.
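As a purely illustrative example of that microservice pattern (not the actual OPEA interfaces), the sketch below wraps a retrieval stage in a small HTTP service with a fixed request/response schema; packaged as a container, many replicas of such a stage can be scheduled and scaled by Kubernetes or OpenShift. The endpoint path, field names, and the choice of FastAPI are assumptions.

```python
# Hypothetical microservice for the retrieval stage of a RAG pipeline.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="rag-retriever")  # one small service per pipeline stage

class RetrieveRequest(BaseModel):
    query: str
    top_k: int = 4

class RetrieveResponse(BaseModel):
    documents: list[str]

@app.post("/v1/retrieve", response_model=RetrieveResponse)
def retrieve(req: RetrieveRequest) -> RetrieveResponse:
    # A real service would query the vector store (e.g. Redis) here;
    # this stub keeps the service contract visible without extra dependencies.
    return RetrieveResponse(documents=[f"stub result for: {req.query}"] * req.top_k)

# Run locally with: uvicorn retriever:app --port 8000
```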
Most large language model (LLM) development is based on the PyTorch framework, which Intel Gaudi and Xeon technologies support. This support makes it easy to develop on Intel AI systems and platforms. Intel worked with OPEA to create an open software stack for deploying RAG and LLMs, optimized for the GenAI turnkey solution and built with PyTorch, Hugging Face libraries, LangChain, and the Redis vector database.
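A minimal sketch of that retrieval layer, assuming the langchain_community integrations for Hugging Face embeddings and the Redis vector store (module and class names vary between LangChain releases), might look like the following; the model name, index name, Redis URL, and indexed texts are placeholders.

```python
# Indexing and querying a Redis vector store through LangChain (illustrative sketch).
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import Redis

# Embeddings from a Hugging Face sentence-transformers model (placeholder name).
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")

# Index a few snippets into Redis (assumes a Redis Stack instance at the given URL).
vectorstore = Redis.from_texts(
    texts=[
        "RAG retrieves proprietary passages and feeds them to the model as context.",
        "The vector index is rebuilt whenever the source documents change.",
    ],
    embedding=embeddings,
    redis_url="redis://localhost:6379",
    index_name="rag-demo",
)

# Retrieve the passages most relevant to a question; these become the LLM's context.
for doc in vectorstore.similarity_search("How does RAG use proprietary data?", k=1):
    print(doc.page_content)
```

The retrieved passages are then passed as context to an LLM served on Gaudi or Xeon systems, which forms the generation half of the pipeline.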
Meeting the needs of developers
OPEA offers open source, standardized, and modular RAG pipelines for enterprises, supporting various compilers and toolchains. This foundation accelerates AI integration and delivery for unique vertical use cases, opening up new possibilities in AI.
With the GenAI turnkey solution and the full enterprise AI stack, Intel addresses the challenges of deploying and scaling RAG and LLM applications. By leveraging Intel’s AI systems and optimized software, enterprises can put GenAI to work efficiently and quickly.
Looking ahead
Access to the latest AI computing technologies remains a challenge for enterprises. Intel is creating new opportunities for AI services based on GenAI and RAG solutions through strategic collaborations with industry partners and customers. Intel announced a new Coalition for Secure AI (CoSAI) with Google, IBM and others to strengthen trust and security in AI development and deployment. Additional demonstrations of Intel’s unique approach to AI systems will be showcased at Intel Innovation on September 24-25.