Experts in the field of robotics from Johns Hopkins University (JHU) recently took part in a webinar to share insights into work they are undertaking that utilises AI to further automation in surgical robotics systems.
According to Russell H. Taylor, a John C. Malone professor in the Department of Computer Science at JHU, surgical robotics is fundamentally about the complementary capabilities of human and machine.
"We have some common capabilities, but machines are good at things that we are less good at, and vice versa," Taylor says.
"We want this partnership to achieve the best of both worlds, and it's important to realise that the robot is only one part of an intervention; you really are dealing with an information infrastructure and a physical infrastructure in the OR and throughout the entire treatment process and the hospital."
In terms of the relationship between AI and robotics, Taylor sees two factors to consider: how humans can tell the machine what it is supposed to do, in a way the machine can understand; and how humans can be sure, throughout a surgical procedure, that the machine will do what it has been told to do and nothing it has not.
Taylor notes that the current paradigm in robotic surgery is for a surgeon to manoeuvre a robot using handles while watching the operative outcomes through a stereo endoscope.
"But all of the knowledge and planning beyond that very simple task specification and execution is in the surgeon's head."
The remark leads into Taylor's point that the emergent paradigm in robotics, with the rise of AI, is to take greater advantage of the many ways a computer can mediate between the physician, the robot, its tools, and the patient.
He explains: "Surgeons can still do all of the hand over hand control, but they can also begin to ask the robot to provide them with more information, from sensors to the enforcement of safety barriers during surgeries.
"For this to work, what is crucial is that the computer controlling the robot and the physician need to have some shared situational awareness of what is going on with the patient, the tools, and the system, and what the task is."
Taylor's prevailing view is that robots in surgery have to be thought about in terms of what a surgeon is trying to do, and the informational and physical environment in which the robot provides assistance.
"We can improve consistency, safety and quality, but in the end, for all of this to be valuable, you need whatever technology or autonomy is available to solve clinical problems to result in better clinical outcomes or more cost-effective processes."
In 2022, a team at JHU published research on the Smart Tissue Autonomous Robot (STAR), which performed the first laparoscopic surgery without any human help, on pig models.
At the time, senior research author Axel Krieger, assistant professor of mechanical engineering at Johns Hopkins' Whiting School of Engineering, said the team's findings showed it was possible to automate the reconnection of two ends of an intestine - one of the most intricate and delicate tasks in surgery. In performing this procedure, STAR produced significantly better results than humans carrying out the same task.
More recently, Krieger's team has been looking at an AI-based learning approach, built on a transformer model developed at JHU, that improves with more data.
Krieger likens the transformer to the backbone architecture used in large language models (LLMs) like ChatGPT.
He explains: "In our case, we are using robotic action as an output, and learning how to perform fundamental surgical tasks like lifting tissue, needle pickup, and knot tying."
This research, which JHU presented at this year's Conference on Robot Learning (CoRL), held in Munich, Germany from 6-9 November, centres on 'imitation learning'.
"The transformer learns by watching humans do procedures. We do different demonstrations of surgical sub-tasks and give those to the transformer learning model, which can then, fully autonomously, execute on them."
Krieger says that STAR achieved complete autonomy after being shown around 500 demonstrations of procedures such as knot tying and picking up a needle.
"What's also exciting is that STAR exhibits robust retry behaviour, so if something goes wrong - such as a tool getting knocked out of STAR's gripper - the architecture recovers and continues to perform, for instance, knot tying, without any error.
"Of course, this is all on kind of a suture pad level, and not yet clinical. What we've been continuing to explore over the last couple of months is whether this architecture works for real surgery."
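In toy form, the imitation-learning recipe Krieger describes (collect human demonstrations, fit a model that maps observations to robot actions, then let it act on its own) amounts to supervised regression. The sketch below uses a linear policy and synthetic data purely as an illustration of that idea; the actual system uses a transformer trained on real surgical demonstrations, and every name and number here is invented.

```python
import numpy as np

# Toy behaviour-cloning sketch (illustrative only, not JHU's model).
# Imitation learning: fit a policy to (observation, action) pairs from
# human demonstrations, then query it autonomously at run time.

rng = np.random.default_rng(0)

# Hypothetical "expert" the demonstrations come from: a linear map from
# an observation (e.g. tool-pose features) to an action (e.g. velocities).
W_expert = np.array([[0.5, -0.2],
                     [0.1, 0.9]])

# Collect ~500 demonstration steps, mirroring the figure in the article.
observations = rng.normal(size=(500, 2))
actions = observations @ W_expert.T

# "Training" is supervised regression from observations to actions.
W_learned, *_ = np.linalg.lstsq(observations, actions, rcond=None)
W_learned = W_learned.T

# The learned policy can now act on an observation it has never seen.
new_obs = np.array([1.0, -1.0])
print(W_learned @ new_obs)  # close to W_expert @ new_obs = [0.7, -0.8]
```

With noiseless linear demonstrations the regression recovers the expert exactly; the real difficulty, and the reason a transformer is used, is that surgical observations are high-dimensional and the expert behaviour is far from linear.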
Along with recent advancements in deep learning and AI capabilities, foundation models are emerging as key drivers of automation in surgical robotics because they allow for more flexible task formulations and data inputs.
Foundation models are equipped with AI-based deep learning capabilities and can be trained on vast and broad, or specific, datasets and applied to a wide range of use cases.
"Foundation models unlock the power of AI analytics for variable tasks that we would otherwise have needed to train and develop specific models for," says Mathias Umberth, John C. Malone associate professor at JHU's department of computer science.
Since 2017, Umberth and his team have been working on a foundation model, with a particular focus on X-ray image analysis in guided surgery, developing paradigms and frameworks that can fully automate the generation of surgical training data in silico, so that the data-generation pipeline can scale.
"This paradigm has been enabling us to generate immense amounts of perfectly annotated training data that documents surgical processes, some of them old and that we already perform in surgery, and some of them new ones that we would like to be able to perform in the future, and use this data to generate sophisticated AI models that we can then use to analyse intraoperative data and drive automation," Umberth explains.
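The appeal of generating training data in silico is that the annotation comes for free: because each image is rendered from a known model, its label is exact by construction, and producing more data is just running the renderer longer. The sketch below caricatures this with a simple synthetic image; the shapes, sizes, and function names are invented stand-ins, not the JHU pipeline, which simulates X-ray imaging of surgical scenes.

```python
import numpy as np

# Toy sketch of in-silico data generation: because each image is rendered
# from a known geometric model, its annotation is perfect by construction.
# (Illustrative only; the real pipeline simulates X-rays of surgical scenes.)

def render_sample(rng, size=32):
    """Render a bright disc (a stand-in 'instrument') at a random spot on
    a noisy background, returning the image and its exact segmentation mask."""
    cy, cx = rng.integers(4, size - 4, size=2)
    yy, xx = np.mgrid[:size, :size]
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= 9
    image = 0.1 * rng.random((size, size))
    image[mask] += 0.9
    return image, mask

# Generating a perfectly labelled dataset is just a loop, so it scales.
rng = np.random.default_rng(1)
dataset = [render_sample(rng) for _ in range(100)]
print(len(dataset))  # 100
```

Each (image, mask) pair is "perfectly annotated" in the sense Umberth describes: no human labelling step is involved, so the same machinery can document procedures that have never yet been performed on a patient.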
For surgical application, Umberth's team has developed a foundation model called FluoroSAM, which interacts with X-ray images and can segment arbitrary structures in those images.
"We've been using this model to drive automation in surgery, and along with a language model, we have essentially built a fully autonomous robotic X-ray system for orthopaedic and endovascular applications."
On a screen, this system can visualise requests made by a clinician during surgery. In practice, a surgeon may request a view of the right femur. From there, the system automatically interprets the prompt and the image, and moves to the corresponding location on the patient. The system can also respond to other prompts, such as a surgeon's request to visualise the segmentation of a muscle.
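That request-to-motion loop can be caricatured in a few lines of code. Everything below (the atlas, the coordinates, the function names) is a hypothetical stand-in: in the real system, the language-understanding step is an LLM and the localisation step is FluoroSAM segmenting the live X-ray.

```python
# Toy sketch of the prompt-to-motion loop; every name, coordinate, and
# mapping below is hypothetical, standing in for the language model and
# FluoroSAM segmentation used in the real system.

# Hypothetical atlas: anatomical structure -> imaging-device position (cm).
ATLAS = {
    "right femur": (42.0, -12.5),
    "left femur": (42.0, 12.5),
    "pelvis": (55.0, 0.0),
}

def interpret_prompt(prompt: str) -> str:
    """Language step: find which known structure the surgeon named."""
    text = prompt.lower()
    for structure in ATLAS:
        if structure in text:
            return structure
    raise ValueError(f"no known structure in prompt: {prompt!r}")

def plan_motion(structure: str) -> tuple[float, float]:
    """Stand-in for segmentation plus planning: where the device should go."""
    return ATLAS[structure]

def handle_request(prompt: str) -> tuple[float, float]:
    """Interpret the prompt, then move to the corresponding location."""
    return plan_motion(interpret_prompt(prompt))

print(handle_request("Please show me the right femur"))  # (42.0, -12.5)
```

The point of the caricature is the division of labour: a language step turns free-form clinical requests into a named target, and an image-analysis step turns that target into a concrete device motion.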
"These systems, which can now leverage foundation models as the back end to analyse complicated images and act on top of them, are really going to be one core enabling factor in driving the adoption of autonomy."
In closing, Umberth says that his team is interested in determining how the rise of autonomy will affect the responsibilities of surgeons and OR staff in the operating rooms of 10-15 years from now when, in all likelihood, all ORs will at least partly consist of autonomous systems.
"This is not simply about how we can build and enable technology that is autonomous and can achieve and perform at the level that we need in order to make patients healthier, but we also need to think about how the introduction of this type of technology changes the overall ecosystem that is healthcare."
Robotics and automation in surgery seem inevitable. In the quest for less invasive surgical procedures, surgical robotics are having an impact on procedures both straightforward and complex. While full automation is not quite a reality and may not even be the desirable outcome moving forward, for now, a symbiotic relationship between human and robot in the OR appears to be the best way forward. Robots may be a long way off from replacing a skilled surgeon, but in the future OR, it appears certain that surgical robotics systems will augment the field of surgery, to enable surgeons to continue doing what they do best, only with greater efficiency and insight.
"Boosting automation in robotic surgery with AI" was originally created and published by Medical Device Network, a GlobalData owned brand.