A research team at The Chinese University of Hong Kong (CUHK) has successfully developed a new AI-driven surgical robot automation technology and, for the first time globally, completed in-vivo animal validation of multifunctional surgical automation. This technology can perform multiple tasks during surgery, marking an important step toward autonomous and AI-assisted operations.

The research, a collaboration between CUHK’s Faculty of Engineering and Faculty of Medicine (CU Medicine), was led by Professor Dou Qi from the Department of Computer Science and Engineering. The research team developed a new form of embodied intelligence that allows surgical robots to “see”, interpret, and act based purely on visual input. Unlike traditional surgical automation approaches, which depend heavily on external sensors or pre-programmed instructions, CUHK’s system relies solely on real-time endoscopic images and advanced AI algorithms to navigate complex surgical environments. The result is a robot that doesn’t just follow commands – it makes decisions based on what it sees, much like a human assistant would.
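The article does not publish the system's actual control code, but the idea of a purely vision-driven loop can be sketched in miniature. Assuming a hypothetical policy that maps a raw image directly to an instrument action (all names here are illustrative, not the CUHK system's API), the perceive-decide-act cycle looks like this:

```python
# Toy sketch of a vision-only control loop: the "robot" sees a 1-D image,
# decides, and acts, with no external sensors or pre-programmed waypoints.

def segment_target(image):
    """Stand-in for a visual model: locate the brightest pixel as the target."""
    return max(range(len(image)), key=lambda i: image[i])

def policy_step(image, tool_pos):
    """Map raw pixels to an action: step the tool toward the detected target."""
    target = segment_target(image)
    if tool_pos < target:
        return +1   # move right
    if tool_pos > target:
        return -1   # move left
    return 0        # hold: target reached

def run_episode(image, tool_pos, max_steps=50):
    """Closed loop: perceive, decide, act, repeat until the target is reached."""
    for _ in range(max_steps):
        action = policy_step(image, tool_pos)
        if action == 0:
            break
        tool_pos += action
    return tool_pos
```

The point of the sketch is the data flow: every decision is derived from the image itself, which is what distinguishes this approach from sensor- or script-driven automation.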

Professor Dou said: “We’ve designed a system that doesn’t just take instructions – it understands the surgical environment through vision, learns from experience, and acts with precision. It’s like giving the robot both eyes and a brain. This is the first step toward a truly intelligent surgical assistant – a reliable, tireless ‘third hand’ that works alongside surgeons to enhance safety, efficiency, and outcomes in the operating room.”

At the heart of the innovation is the integration of visual foundation models, reinforcement learning, and visual servoing. These technologies work together to create a fully data-driven, vision-based control system that can perform a variety of surgical tasks with precision and efficiency. The AI system was trained using SurRoL, an embodied AI simulator. Through a process known as “zero-shot sim-to-real transfer”, the robot was able to apply what it learned in simulation directly to real-life surgical tasks, without any additional training in the physical world.
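The release does not detail the exact training procedure, but zero-shot sim-to-real transfer is commonly achieved by randomising the simulator's appearance (lighting, textures) during training so the learned policy is insensitive to visual variation. A minimal, purely illustrative sketch of that recipe, on a toy target-finding task rather than SurRoL itself:

```python
import random

# Illustrative domain-randomisation sketch: the policy only ever sees
# simulated scenes with randomised lighting, then is applied once to an
# unseen "real" scene with no further training (zero-shot transfer).

random.seed(0)

def render_sim_scene(target_idx, brightness):
    """Simulated endoscopic view: a 1-D 'image' with one bright target pixel."""
    image = [brightness * 0.2] * 8   # dim background scaled by scene lighting
    image[target_idx] = brightness   # the target is the bright spot
    return image

def locate_target(image):
    """Vision-only policy: pick the relatively brightest pixel."""
    return max(range(len(image)), key=lambda i: image[i])

def evaluate_in_sim(episodes=200):
    """Check the policy across randomised lighting conditions in simulation."""
    successes = 0
    for _ in range(episodes):
        target = random.randrange(8)
        brightness = random.uniform(0.3, 1.0)   # domain randomisation
        if locate_target(render_sim_scene(target, brightness)) == target:
            successes += 1
    return successes / episodes

sim_success = evaluate_in_sim()
# Zero-shot step: an unseen "real" image, consumed with no retraining.
real_scene = [0.1, 0.1, 0.1, 0.7, 0.1, 0.1, 0.1, 0.1]
real_action = locate_target(real_scene)
```

Because the policy keys on relative rather than absolute brightness, performance in the randomised simulator carries over directly to the held-out scene, which is the essence of the zero-shot transfer the paragraph describes.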


Photo: (from left) Professor Dou Qi, Dr Yip Hon-chi, Professor Samuel Au Kwok-wai and Professor Philip Chiu Wai-yan

The robot was deployed on the Sentire® Surgical System, a platform designed with AI-readiness in mind. During the live animal trials, which were conducted in a pre-clinical setting that closely mimicked real surgical environments, the system successfully completed several complex surgical manoeuvres. These included tissue retraction, gauze picking, and blood vessel clipping – all fundamental components of minimally invasive surgery. The AI performed with consistent accuracy, even when faced with variables such as different lighting conditions and varying tissue appearances.

Dr Yip Hon-chi, Assistant Professor from CU Medicine’s Department of Surgery, oversaw the in vivo testing and emphasised the significance of the achievement: “This marks a true turning point in surgical robotics. The system’s ability to generalise across diverse surgical tasks and adapt to changing environments underlines its potential to become a reliable assistant in the operating room. Rather than replacing surgeons, the AI can serve as a ‘third hand’, carrying out routine but important tasks, reducing workload, and allowing surgeons to focus on more complex decision-making.”

Healthcare systems worldwide are facing pressures from an aging population, a shortage of surgical manpower, and increasing demand for minimally invasive procedures. By automating highly repetitive surgical tasks and enabling more efficient teamwork in operating rooms, this innovative system is expected to help alleviate these burdens and improve patient outcomes globally, ushering in a new era of smarter, safer, and more collaborative surgery.

Video: Presentation and demonstration