Researchers from the School of Engineering and Applied Science at the University of Pennsylvania (Penn Engineering) have uncovered alarming security flaws in AI-powered robots.
The study, funded by the National Science Foundation and the Army Research Laboratory, focused on the integration of large language models (LLMs) into robotics. The results reveal that a wide variety of AI-powered robots can be easily manipulated or hacked, with potentially dangerous consequences.
George Pappas, UPS Foundation Professor at Penn Engineering, said: “Our work shows that at present, large language models are simply not secure enough when integrated into the physical world.”
The research team developed an algorithm called RoboPAIR, which achieved a 100% jailbreak rate in just a few days. The algorithm successfully bypassed the safety guardrails of three different robotic systems: the Unitree Go2 quadruped robot, the Clearpath Robotics Jackal wheeled vehicle, and NVIDIA’s Dolphins LLM self-driving simulator.
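RoboPAIR’s name nods to PAIR (Prompt Automatic Iterative Refinement), a family of automated jailbreaks in which an attacker model repeatedly rewrites a prompt and a judge model scores the target’s response until the guardrails give way. The Python sketch below illustrates that attacker–target–judge loop in the abstract only; the function signatures, the 1–10 scoring scale, and the toy stand-ins are assumptions made for illustration, not the team’s published code.

```python
"""Minimal sketch of a PAIR-style iterative refinement loop, the general
technique RoboPAIR builds on. The model calls are stubbed with toy
functions so the sketch runs; a real attack would query three LLMs."""

from typing import Callable, Optional


def pair_style_loop(
    goal: str,
    attacker: Callable[[str, list], str],  # proposes a refined prompt
    target: Callable[[str], str],          # the robot's LLM controller
    judge: Callable[[str, str], int],      # scores bypass success, 1-10
    max_iters: int = 20,
) -> Optional[str]:
    """Iteratively refine prompts until the judge reports a jailbreak."""
    history: list = []
    for _ in range(max_iters):
        prompt = attacker(goal, history)          # attacker rewrites the prompt
        response = target(prompt)                 # query the target system
        score = judge(goal, response)             # judge grades the response
        if score >= 10:                           # full bypass per the judge
            return prompt                         # the successful jailbreak
        history.append((prompt, response, score)) # feed the failure back in
    return None                                   # guardrails held within budget


# Toy stand-ins so the loop runs end to end (purely illustrative):
if __name__ == "__main__":
    attempts = iter(range(1, 6))
    found = pair_style_loop(
        goal="demo goal",
        attacker=lambda g, h: f"attempt {next(attempts)}: {g}",
        target=lambda p: f"response to [{p}]",
        judge=lambda g, r: 10 if "attempt 3" in r else 1,
    )
    print("jailbreak prompt:" if found else "no jailbreak:", found)
```

The key property the loop captures is that the attack needs no access to the target’s weights: each failed response becomes feedback for the next prompt, which is why purely prompt-level guardrails can be worn down automatically.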
Of particular concern was the vulnerability of OpenAI’s ChatGPT, which powers the first two systems. The researchers demonstrated that, once safety protocols were bypassed, the self-driving system could be manipulated into speeding through pedestrian crossings.
Alexander Robey, a recent Penn Engineering Ph.D. graduate and first author of the paper, emphasizes the importance of identifying these weaknesses: “What is important to emphasize here is that systems become more secure when you discover their weaknesses. This is true for cybersecurity. This is also true for AI security.”
Researchers say that fixing this problem requires more than a software patch. Rather, they call for a wholesale reassessment of how the integration of AI into robotics and other physical systems is regulated.
Vijay Kumar, Nemirovsky Family Dean of Penn Engineering and co-author of the study, commented: “We need to address intrinsic vulnerabilities before deploying AI-enabled robots in the real world. Indeed, our research develops a verification and validation framework that ensures that only actions consistent with social norms can – and should – be taken by robotic systems.”
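One way to picture such a verification-and-validation layer is as a gate between the LLM planner and the robot’s actuators that fails closed on anything outside a vetted envelope. The sketch below is a hypothetical illustration of that idea only; the action schema, speed caps, and zone names are invented for the example and do not come from the study.

```python
"""Hypothetical illustration of an action-validation gate between an LLM
planner and a robot's actuators. The action schema and rules are invented
for this sketch and are not the framework described in the study."""

from dataclasses import dataclass


@dataclass
class Action:
    name: str         # e.g. "drive_forward"
    speed_mps: float  # commanded speed in meters per second
    zone: str         # e.g. "open_road", "crosswalk"


# Invented policy: hard caps that no LLM output may override.
MAX_SPEED = {"open_road": 8.0, "crosswalk": 0.0}
ALLOWED_ACTIONS = {"drive_forward", "stop", "turn"}


def validate(action: Action) -> bool:
    """Reject any action outside the vetted envelope, whatever prompted it."""
    if action.name not in ALLOWED_ACTIONS:
        return False
    cap = MAX_SPEED.get(action.zone)
    if cap is None:                  # unknown zone: fail closed
        return False
    return action.speed_mps <= cap


def execute(action: Action) -> None:
    if not validate(action):
        print(f"BLOCKED: {action}")  # refuse and log, regardless of the prompt
        return
    print(f"executing: {action}")    # hand off to the real controller here


# A jailbroken planner proposing to speed through a crosswalk is still blocked:
execute(Action("drive_forward", speed_mps=6.0, zone="crosswalk"))
execute(Action("drive_forward", speed_mps=5.0, zone="open_road"))
```

The point of the design is that the check sits outside the language model: even a perfectly jailbroken prompt cannot widen the set of actions the gate will pass to the hardware.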
Prior to the study’s release, Penn Engineering notified affected companies of the vulnerabilities in their systems. Researchers are now collaborating with these manufacturers to use their findings as a framework to advance testing and validation of AI security protocols.
Other co-authors include Hamed Hassani, associate professor at Penn Engineering and Wharton, and Zachary Ravichandran, a doctoral student in the General Robotics, Automation, Sensing and Perception (GRASP) Laboratory.