Would you trust a surgeon trained with AI? CHEO and Carleton developing cutting-edge simulator

Experts in health-care technology and ethics say a new laparoscopic simulator that uses AI to give feedback could have wonderful benefits for students, but warn it should be used with caution — even for training — to avoid losing public trust.

Technology allows surgeons to train in laparoscopic techniques without a supervisor watching over them

[Photo: A woman wearing a hijab and a brown blazer stands over an electronic device in an engineering lab, holding two surgical tools connected to the device.]

When doctors perform laparoscopic surgery, they can't rely on their own eyes.

Instead, laparoscopic surgeons make a small incision and insert two long tools to work inside the patient's body, guided only by a camera.

It's notoriously challenging, but laparoscopic surgery has become increasingly available because it's less invasive and leaves less scarring, and still allows the surgeon to pivot to open surgery if needed.

Studies have shown, however, that health-care professionals believe there are gaps or deficiencies in laparoscopic training. During the COVID-19 pandemic, the need for surgical supervisors became particularly acute.

[Photo: A surgical suite, seen looking past the doctor holding two long laparoscopic tools, with the focus on a nurse looking intently down at the work.]

Surgeons Ahmed Nasr, a division chief at CHEO, eastern Ontario's children's hospital in Ottawa, and Georges Azzie, a program director at Toronto's SickKids, began to brainstorm a solution.

They partnered with biomedical engineering students at Ottawa's Carleton University to develop a laparoscopic simulator that uses artificial intelligence to assess the progress of surgical trainees.

Trainees learn to hold the tools and practise the motions with real-time feedback from the AI, which has been trained on data from experts.

Experts in health-care technology and ethics say this could have wonderful benefits for students, but warn that incorporating AI into health care should be done with caution to avoid losing the public's trust.

[Photo: Taken over the shoulder of a woman in a brown blazer, focused on the laparoscopic tool in her right hand, with the screen she is watching visible in the background.]

A small box with big potential

The simulator machine is a small box fitted with a camera, motion sensors, hand-held laparoscopic tools and a tablet.

Carleton engineering students Huda Sheikh, Atallah Madi, Esraa Alaa Aldeen and Youssef Megahed have been working on the simulator during their final year.

They built six exercises, like transferring rings between pegs, that help trainees develop the hand dexterity they'll need for real laparoscopic surgery. The machine limits range of motion and senses the smallest gestures.

As the trainees work, feedback pops up in real time. They're warned if they move too quickly, for example, or keep their tools too far apart.
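
To give a sense of the kind of check that could produce such warnings, here is a minimal sketch in Python. The sensor fields, threshold values and messages are assumptions for illustration only; the team's actual system learns from expert data rather than relying on fixed, hand-coded rules.

```python
from dataclasses import dataclass

@dataclass
class MotionSample:
    """One reading from the simulator's motion sensors (hypothetical units)."""
    tool_speed_mm_s: float      # speed of the active tool tip
    tool_separation_mm: float   # distance between the two tool tips

# Assumed limits; a real system would calibrate these against expert data.
MAX_SPEED_MM_S = 40.0
MAX_SEPARATION_MM = 60.0

def feedback_for(sample: MotionSample) -> list[str]:
    """Return the warnings to display for a single sensor reading."""
    warnings = []
    if sample.tool_speed_mm_s > MAX_SPEED_MM_S:
        warnings.append("Slow down: the tool is moving too quickly.")
    if sample.tool_separation_mm > MAX_SEPARATION_MM:
        warnings.append("Bring the tools closer together.")
    return warnings

# Example: one fast, wide movement triggers both warnings.
print(feedback_for(MotionSample(tool_speed_mm_s=55.0, tool_separation_mm=72.0)))
```

In practice, the simulator's feedback is described as being driven by models trained on expert recordings, so thresholds like these would be learned or tuned rather than fixed by hand.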

[Photo: A yellow board with six pegs, a few of them holding rings; two metal claws grip a ring between them in the middle of the board.]

Sheikh said AI reduces the need for supervision. The teacher doesn't need to hover.

Nor could a human instructor give such detailed feedback, Sheikh said, and the more feedback trainees receive, the faster and more efficient their training becomes.

[Image: A graph with two references, red lines showing how the user operated the laparoscopic tools and blue lines showing how the reference expert operated them, with feedback tips on the right.]
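
In that spirit, one simple way a simulator could turn a red-versus-blue comparison into a score is sketched below in Python. The paths, sampling and tolerance are made-up illustrations, not the project's actual method, which the article does not detail.

```python
import math

def mean_deviation(trainee_path, expert_path):
    """Average point-to-point distance between two equally sampled 2D tool paths."""
    assert len(trainee_path) == len(expert_path), "paths must be aligned first"
    total = sum(math.dist(p, q) for p, q in zip(trainee_path, expert_path))
    return total / len(trainee_path)

# Toy data: (x, y) positions of a tool tip at matching time steps, in millimetres.
expert = [(0, 0), (10, 2), (20, 3), (30, 3)]
trainee = [(0, 1), (9, 6), (22, 8), (31, 5)]

score = mean_deviation(trainee, expert)
print(f"Mean deviation from the expert path: {score:.1f} mm")
if score > 3.0:  # hypothetical tolerance
    print("Tip: follow the reference path more closely.")
```

A real system would likely use richer measures, such as smoothness, timing and tool coordination, but the basic idea of scoring a trainee against an expert reference is the same.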

'Guardians of a public trust'

This project isn't the first to use AI for laparoscopic simulation training. One such device has already debuted in the U.S., and another is in development in the U.K.

But it's still a relatively new approach: Those projects were publicized in the last 12 months.

Sheikh said she thinks patients would prefer a surgeon "trained on expert data" over someone who didn't receive constant feedback — provided the AI being used makes correct judgments at least 80 to 90 per cent of the time.

[Image: Two video feeds of pegboards side by side, with pointed tools in use in both.]

Neurosurgeon Sunit Das, an associate professor at the University of Toronto and affiliate with its Ethics of AI Lab, agreed that simulation training has "tremendous benefits," but advised caution.

"We're really guardians of a public trust," Das said. The question with AI tech is: "How do we maintain that trust?"

Technology and ethics researchers told CBC their concerns about the technology include how many trials it undergoes, the "black box" problem of opaque AI decision-making, the validity of the AI's expert assessments, the risk of trainees developing bad habits, and bias from funding sources.

Nasr said his team has taken ethical concerns like these into consideration during the development process.

[Photo: A man in a navy blue suit and tie sits at a picnic table, with trees and grass behind him and a building in the distance.]

He said so far, his team understands the choices the AI is making and hasn't found that the trainees develop bad habits. As for conducting numerous trials, that will wait until the simulator is fully developed.

Because funding for the project comes from grants, he said the researchers want it to be as cost-effective as possible, without jeopardizing safety.

"I think it's 100 per cent ethical as long as we are supervising the training," Nasr said.

Is AI simulation training worth it?

Studies in the U.K. and Canada, among others, have detailed how practising in a simulated setting can improve surgical performance.

Das and Nasr agreed that trainees benefit from simulation because it gives them room to make mistakes.

The history of medicine has always been "about integrating new technologies into what we can offer patients," Das said, but "that integration has always been mediated by human hands."

[Photo: Over a woman's shoulder as she uses the machine; the visible screen shows six pegs and three coloured rings.]

He doesn't think that will change, and Nasr agreed.

"You cannot take the human out of the equation," Nasr said. "The way I look at AI is to help physicians in their daily work, and it will stream the process and make us more efficient."

The Carleton project will be inherited by the next generation of students as the current team graduates. Nasr expects the development stage to finish in the next year, allowing them to start testing.

When AI simulators are ready to be used in training, it will fall to accrediting institutions such as the Royal College of Physicians and Surgeons of Canada to decide if they're up to the task.

"There is a lot of work to be done, but there's no question in my mind … that [AI] is the future," Nasr said.

