
Applying AI in Surgery - Paul J. Chung, MD, FACS

International Medical Conference

Endometriosis 2024:
Elevating Sampson’s Century Legacy via
Deep Dive with AI

For the benefit of Endometriosis Foundation of America (EndoFound)

May 2-3, 2024 - JAY CENTER (Paris Room) - NYC

I'll disclose that this cannot and will not be a comprehensive survey of the field. And again, all opinions are my own. Now, artificial intelligence, we hear about it so much. I found this definition from Encyclopedia Britannica, which I thought was interesting. It says that AI is the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. What I find interesting about this definition is that it's not really telling us what AI is; it's more telling us what it does. And also it has this idea of two modalities: one of information processing, and the other of doing something. We often see these images of robots and AI, and they kind of get muddled together. But there are some distinctions at play here, especially when it comes to surgery, because we're not just an intellectual field; we are doers. But more importantly, there's this idea also of general AI.

It's a somewhat controversial idea of a machine in the image of man. And I would say that for the applications of AI in surgery, we're not really talking about that. We're talking more about narrow AI, or task-specific algorithms, that have applications in our field. And that's really what I want to focus on today. Now, we might ask ourselves, why are we hearing so much about AI, particularly in the past five or six years? And this graph from Nature, I think, maybe helps us understand that: it's really been in the past five or so years that we've seen algorithms starting to hit or surpass human baseline performance on many different tasks. Particularly when we think about computer vision tasks, image recognition, we've already heard a lot about this in previous talks. We also think about language comprehension, the need for text-based understanding. And we're starting to see that these algorithms are hitting that level, and so we can perhaps understand why we're seeing these applications more and more.

But today, since we started to talk in terms of AI and ethics, I also want to think about what it is that we as clinicians are trying to do. This is my own mental model. This is by no means comprehensive or complete, and all models are wrong, but some hopefully are useful. So as clinicians, what is it that we're trying to do? We've already talked about detection: we're trying to find the patient's past and current state. We're trying to do documentation. There's already been plenty of discussion about the need for more data, and one of the hallmarks of that is that sometimes we don't know what questions to ask, or we just don't have time to document things efficiently. We don't know how to do this. We're strapped for time. And so perhaps this is where AI can help us.

I'd like to think also about discussion. The session today started off with a discussion about ethics and being patient-centric, and this is potentially a place where AI can help, because we need to discuss what it is that we're doing and why we're doing it, not just with the patients, but with other care providers, as well as with our students, our residents, and fellows, teaching them what it is that they need to know. That is also a part of the discussion of medicine. But there's also decision making: how can we create plans that optimize the current and future state of the patient? This is where a lot of thought has been put into the applications of AI. And finally, I want to think about doing. Again, this is where I think the idea of information processing and robotics gets kind of muddled together, but the question is: can technology, can robotics mixed with AI systems, do things autonomously?

Now, there are some things I do want us to think about. As we go down this ladder, we find that it requires increasing amounts of sophistication, and also that these tasks all build upon each other. Detection and documentation, if we can do them well, can lead to better discussion and decision making, and so on. That's a theme I wanted us to think about. But also note that the items on the higher rungs are more production ready; these are the more mature use cases of AI systems in medicine and surgery. As we go further down, we're going more into the realm of research, or even maybe science fiction. But as we think about detection, again, we've already seen so many different ways in which this is being used. We think about, again, computer vision.

It is one of the more mature tasks. I'm not going to name any companies; I'm going to try to avoid that as best as I can. But we see that there are FDA-approved algorithms that can identify brain lesions, brain bleeds, pulmonary embolisms, and other injuries. Likewise in pathology, using it to help identify cell types at a larger scale, at a scale that humans cannot perform. But when we think about intraoperative use, we see that a higher level of sophistication is needed, because typically in the OR we're not dealing with static images; we're dealing with videos, with hundreds of thousands or millions of frames all strung together. These are images that are connected over time. We can see, though, that there are use cases, active research, of systems that have been used in production or are being experimented upon, for tasks such as the identification of lesions, as we saw, or anatomical structures, which might have saved that patient from having an IVC injury had such a system existed.
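As a purely illustrative sketch (not any specific product), a frame-level detector over surgical video might exploit that connection between frames by alerting only when a confident detection persists across several consecutive frames; `classify_frame` here is a hypothetical stand-in for a real vision model:

```python
def flag_structure(frames, classify_frame, target, threshold=0.8, min_run=3):
    """Scan video frames and alert when `classify_frame` reports `target`
    with high confidence on `min_run` consecutive frames. Returns the
    frame index at which each persistent detection is first confirmed."""
    alerts, run = [], 0
    for i, frame in enumerate(frames):
        label, confidence = classify_frame(frame)
        if label == target and confidence >= threshold:
            run += 1
            if run == min_run:  # detection has persisted long enough
                alerts.append(i)
        else:
            run = 0  # detection broken; require a fresh run
    return alerts
```

Requiring a run of consecutive frames is one simple way to suppress single-frame false positives, which matters when a system is processing millions of frames per case.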

But again, these are things that are coming up within surgery, although we're not seeing them fully active yet. The potential for help is absolutely there. When we think about documentation, one of the things that I believe, and again, this is my opinion, but I believe the reason this is becoming more exciting is this idea of multimodality. We typically had AI systems that dealt with a single type of input, like text or images. But now, especially in the past year, we see certain companies that are looking at video and integrating these inputs together. We're seeing certain health systems now testing some of these systems, such as Stanford, where they're using something called ambient AI: it's going to listen, it's going to use audio cues, it's going to use visual cues, so that perhaps the discussion between the patient and the care provider can just be that, a natural discussion.

And then in the background, there's going to be this tool that's going to gather data, maybe even prompt us as to what we should be asking. As was pointed out, this is a big problem: especially for myself as a general surgeon, maybe a patient comes in presenting signs and symptoms that I don't recognize as being associated with endometriosis, but if this system were there, maybe it could prompt me to delve deeper into that. Likewise, in the operating room, we can also see these multimodal types of systems that take audio and video inputs and try to synthesize what is happening in the operating room. Perhaps if we have these multiple input streams, we can identify not only when bad things happen, but perhaps we can also predict when they might happen, to prevent them from happening in the first place.

But also discussion. This is, I think, something that's really come to the fore, especially since November of 2022 when ChatGPT came out, and this idea of large language models and chatbot systems really entered the realm of public consciousness. We're seeing an explosion of research dealing with large language models. This is by no means comprehensive, but we're seeing use cases such as using LLMs to see whether they can create better output compared to surgeons. And typically the answer is yes, unfortunately, in terms of generating documentation or informed consent. This also ties into what was spoken about earlier in terms of being patient-centric, having the care and the benefit of the patient in mind: perhaps using this for educational material. The average education level, or reading comprehension level, in America is about eighth grade, and sometimes perhaps we're using language that's overly sophisticated.

And so we can use these systems to see whether we can create educational material, or even informed consent material, that is easier to comprehend. Because if our patients cannot understand it, how can we say that this is really informed consent, or that this is truly educational? Likewise, we can think about the use cases not only for discussing with patients or other care providers, but also with our students, residents, and fellows, using this as a means of teaching. And it's, I think, not hard to see that if we can use this as a way of teaching, it's not hard to bridge the gap toward decision making, because teaching is sort of a simulation of what's going to happen. If we have a system that can do that, then why can't we do that with decision making? And we see that again with decision making.
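One concrete, lightweight way to check whether consent or education material sits near that eighth-grade level is a readability formula such as Flesch-Kincaid. The sketch below is illustrative only; the syllable counter is a rough heuristic, and none of this is specific to any system mentioned in the talk:

```python
import re

def count_syllables(word):
    # Rough heuristic: count groups of consecutive vowels (min. one per word).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    """Flesch-Kincaid grade level:
    0.39 * (words per sentence) + 11.8 * (syllables per word) - 15.59"""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)
```

A jargon-heavy consent draft scores far above a plain-language rewrite on this scale, which could flag it for simplification, whether by a human editor or an LLM.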

There are systems like this that have just recently been published. I'm again not going to mention the company that created it, but you can obviously Google it. AMIE is a research AI system for diagnostic medical reasoning and conversations, and it works like a chatbot. What the researchers did was take objective structured clinical examination, or OSCE, material and turn it into a dataset that looks like this chatbot type of interaction. Then you can train a system which, in one sense, critiques itself: you have a generator, and you have another system that critiques the output, and you have this iterative cycle of updating the system. What they found was that in many cases the output was better, in terms of having better diagnostic accuracy and better management plan creation.
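That generate-and-critique cycle can be sketched as a small loop. To be clear, `generate` and `critique` below are caller-supplied stand-ins for the two models, and this is my own illustration of the general pattern, not the actual training procedure from the paper:

```python
def refine(prompt, generate, critique, max_rounds=5):
    """Iteratively improve a draft: one model generates an answer, a
    second critiques it, and the feedback is folded into the next draft.
    Stops early when the critic returns no feedback."""
    draft = generate(prompt, feedback="")
    for _ in range(max_rounds):
        feedback = critique(prompt, draft)
        if not feedback:  # critic is satisfied
            break
        draft = generate(prompt, feedback=feedback)
    return draft
```

The same loop shape applies whether the "critic" is a second model, a rubric-based scorer, or a human reviewer in the loop.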

The escalation plans were better, and in some cases the output was more empathetic. Again, this was in the setting of training, but we can also think about it in terms of decision making and how this might be used. And again, if we think about multimodal processes: what if you could take audio and video input and use them as additional input streams into such a system? Finally, when we think about doing, again, this is far, far from being used actively. And I think what's important here is that there are researchers who are first and foremost trying to think about how we categorize this, how we think about this, because is there a way to structure our thinking around it? And so these researchers, looking at the regulatory, ethical, and legal considerations, created a framework of thinking: maybe there are six different buckets of autonomous systems that we need to think about.

And I would argue that we're really at levels 0 and 1. No autonomy: I'm a robotic surgeon, but my robot is not acting on its own; I'm fully in control. There are systems where maybe the robot helps guide what I do, but it is still fully me who is doing it. But then there are other systems that look at task autonomy, task-specific types of robotics; we'll see that there has been research into doing things like bowel anastomoses based entirely on robotic systems. Also conditional autonomy: what if we treat this robot-slash-AI system as if it's a resident or a fellow? Let it do its task, but when it looks like we need to intervene, we take over. Or high autonomy, almost like pilots on airplanes: you just let it go, but the human is always there. And that leads toward what I think most people in this space consider the holy grail: full autonomy.
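One way to make that six-bucket framework concrete in software is a simple enumeration. The labels here are my paraphrase of the levels as described in the talk, not the framework authors' exact terms:

```python
from enum import IntEnum

class SurgicalAutonomy(IntEnum):
    """Six levels of autonomy for surgical robot/AI systems
    (labels paraphrased from the framework described in the talk)."""
    NO_AUTONOMY = 0           # surgeon fully in control
    ROBOT_ASSISTANCE = 1      # system guides, surgeon still acts
    TASK_AUTONOMY = 2         # system performs specific tasks
    CONDITIONAL_AUTONOMY = 3  # system acts; surgeon takes over as needed
    HIGH_AUTONOMY = 4         # system operates under human supervision
    FULL_AUTONOMY = 5         # no human involvement required

def requires_supervision(level):
    # Everything short of full autonomy keeps a human in or over the loop.
    return level < SurgicalAutonomy.FULL_AUTONOMY
```

Encoding the levels as ordered integers captures the speaker's point that each step up requires more sophistication than the one below it.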

We just have a robot that really is truly a robot. And again, there is research into this, though we're still fully in the realm of research. None of these things are actively being used yet, to my knowledge; I could be wrong, but I don't know of them. We can think of this group, who looked at creating a system that does this, performing bowel anastomoses. This is a busy slide, but the most important part is that it really builds upon some of the foundational things. It relies on computer vision, because if it can't recognize what it's seeing, if it can't understand what it's dealing with, how could it act? It also builds upon work that's been done in robotics, in having servos and arms that can move with the amount of dexterity required.

And so all of these things are working in parallel, and likely, as we continue along, as we see these levels and these specific tasks working better and better in parallel, the composition of them will lead to something like this: actual robotics being used, AI systems being used to control robotic systems. Now, I do want to think about challenges, because the future's not all rosy. The fact is that AI is hard. It has high data and high infrastructure requirements, and quite frankly, most health systems just aren't there. Healthcare is very far behind the rest of industry. So this is definitely the main technical hurdle: if we don't have systems, if we don't have baseline proficiency, we can't expect to use AI systems, nor should we, really. But we also have this other issue. Recently in the New York Times, there was an article about measurement issues.

And this is coming to the forefront now with AI systems, because there's no real good way of measuring them. We have systems that have published benchmarks, but we don't really know what those benchmarks mean, because when you actually use them, maybe it sounds great on paper, but it's not really true. That's also something we see all the time in medicine as well. But more importantly, going back to ethics, this is the most important thing. As the esteemed Dr. Ian Malcolm once said in that movie, which is perhaps a parable for our present day: the scientists were so preoccupied with whether they could, they didn't stop to think whether they should. And where are we with that right now? We're already seeing, in this investigative report by ProPublica, that AI systems are potentially already being used, perhaps against us.

In this report, they found that some systems were being used to perform almost 300,000 denials over a period of time, such that the amount of time used to review each claim was about 1.2 seconds. Recently, UnitedHealthcare came under a class action lawsuit for such things as well, where the plaintiffs alleged that the defendants denied payments and claims using an AI system alleged to have a known error rate of 90%. So again, it draws back to what we talked about earlier: ethics, the care of the patient, has to be at the forefront, at the center. But it does feel at times like we're in this kind of arms race, this AI arms race, and unfortunately we're in a space now where we have no choice but to run in this race, whether we like it or not. In closing, I want us to think about what some legal experts, some legal scholars at Stanford, wrote about this; it was also published in the New England Journal of Medicine.

We're in this new space, this place where we don't really know what to do. The case law is thin; there's really no precedent for this. There have only been about 51 cases, and we don't really know what to do. But they also make this observation: in the end, it could be that not adopting these technological tools could actually be viewed as a harmful decision, that it could actually be viewed as not meeting the standard of care. So just in closing, I'd like to say that AI is here. We're seeing increasing levels of sophistication. It's here to stay, whether we like it or not. At the same time, I think as long as we take a patient-centric approach, having the care of the patient front and center, we shouldn't be afraid of it, but we should at least be aware of these things. I want to thank you for your time and attention today.