
ART/IVF/infertility and endometriosis - Arielle Bayer, MD
International Medical Conference

Endometriosis 2024:
Elevating Sampson’s Century Legacy via
Deep Dive with AI

For the benefit of Endometriosis Foundation of America (EndoFound)

May 2-3, 2024 - JAY CENTER (Paris Room) - NYC

Hello everyone. My name is Anton Netter. I am a gynecologic surgeon in Marseille, and I also work in Clermont-Ferrand on my PhD for this project. My research focuses mostly on the recognition of endometriosis with artificial intelligence, but today I will talk to you about artificial intelligence in general, in medicine, in surgery, and for endometriosis in particular. I also want to start by thanking the Congress Organizing Committee for giving me the opportunity to come and speak to you today. And of course, I want to apologize in advance for my French accent. So what is artificial intelligence? Obviously, I don't have time to cover the subject in depth, so I will give you only a limited number of clues. The definition is to give machines the ability to perform tasks that typically require human intelligence, and it works with algorithms. So if you do research on AI, you will have to work with engineers and mathematicians, which is unusual and a bit difficult for surgeons like me. It also requires a lot of data. You cannot expect any result when working with AI without an astronomical amount of data, much more than what you would use for a traditional study.

Here you can see that AI covers a large number of different fields and techniques, so we are not talking about one thing in particular, but a vast area that is currently developing very rapidly and which tries to mimic human intelligence. Today we will focus in particular on image recognition and machine vision. In fact, you probably use artificial intelligence all day, every day, sometimes without even realizing it: when you unlock your phone with Face ID, when you take the car and want to know the quickest way, or even when you listen to the playlist suggestions on your streaming application. But do you use it when you practice medicine or surgery? The answer is probably no, or very little, but this is very likely to change in the coming years. So as you can see, the reality is that artificial intelligence is still relatively little used in medicine. However, the number of studies is exploding, and it is very likely that it will become a major area of research within a few years.

Here are some examples of possible uses. I won't go into detail, but you can see that all areas of medicine will soon be concerned: diagnosis, diagnostic assistance, automated reading of imaging exams, care planning, personalized prescriptions, robotic surgery, and so on. Our goal for this specific project was to use artificial intelligence to recognize endometriosis lesions during laparoscopy. We found this objective interesting for two reasons. Firstly, the problem of recognizing endometriosis has long been known, as has the delay in diagnosis. Secondly, laparoscopy lends itself perfectly to image recognition by artificial intelligence, since the image is transmitted directly to a monitor on which information can be added. AI is first and foremost a learning process: you have to teach the machine to do the same thing as a human. In the case of our work, this meant going back to the drawing board and asking the question, how do you recognize endometriosis in laparoscopy?
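
To make the idea concrete, here is a minimal sketch, assuming a generic pretrained object detector stands in for a lesion-specific model: it takes a single laparoscopy frame and returns candidate boxes that could be overlaid on the monitor. The model choice, file name, and confidence threshold are illustrative assumptions, not the project's actual pipeline.

```python
# Minimal sketch (an assumption, not the project's actual system): run a generic
# COCO-pretrained object detector on one laparoscopy frame and keep confident
# boxes that could be overlaid on the surgical monitor. In practice the model
# would be fine-tuned on annotated endometriosis images.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frame = Image.open("laparoscopy_frame.jpg").convert("RGB")  # hypothetical file
with torch.no_grad():
    prediction = model([to_tensor(frame)])[0]

# Keep only detections above an illustrative confidence threshold.
keep = prediction["scores"] > 0.5
boxes = prediction["boxes"][keep]
print(f"{len(boxes)} candidate regions above the confidence threshold")
```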

What are the different aspects? We therefore carried out a literature review covering all the data available for the visual identification of endometriosis. Most articles are from the eighties and nineties, and there are over 60 different terms used to describe endometriosis. It is very difficult to differentiate between these terms. For example, for black lesions, the terms blueberry, powder burn, dark brown fibrotic, brown fibrotic, and black bluish were used. Do these terms really describe different lesions? It did not seem very reasonable to teach the machine to recognize 68 different endometriosis lesions, but neither was it possible to group all lesions under the same label, since it is obvious that not all lesions look alike, and this creates confusion for recognition. So we created this ontology for superficial endometriosis with four classes and nine subclasses that we felt were relevant, checking our video databases to ensure that each lesion could correspond to an unequivocal class. It should be noted that this classification is not particularly intended to become a reference, but was created specifically for this artificial intelligence project. The choices we have made are debatable; in particular, we use the term "subtle" where some others have used the term "atypical". At the end of our review of the literature, we felt that it was difficult to define a typical endometriosis lesion given the great diversity.
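
As an illustration of this consolidation step, the sketch below maps the historical descriptive terms for black lesions mentioned above onto a single annotation label; the target label name is a placeholder, not the project's published ontology.

```python
# Illustration of the consolidation described above: many historical descriptive
# terms are mapped onto a small set of annotation labels. The legacy terms for
# black lesions come from the talk; the target label name is a placeholder, not
# the project's published ontology.
LEGACY_TERM_TO_LABEL = {
    "blueberry": "superficial_black",
    "powder burn": "superficial_black",
    "dark brown fibrotic": "superficial_black",
    "brown fibrotic": "superficial_black",
    "black bluish": "superficial_black",
    # ...the dozens of other historical terms would be mapped the same way
}

def normalize_term(term: str) -> str:
    """Return the annotation label for a historical descriptive term."""
    return LEGACY_TERM_TO_LABEL.get(term.lower().strip(), "unclassified")

print(normalize_term("Powder burn"))  # -> superficial_black
```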

Here is the complete classification. We differentiate four visual classes: superficial lesions, adhesions, deep lesions, and ovarian lesions. These classes can be used to define 11 visual subclasses. Again, the choices we have made are obviously debatable, but they are particularly suited to this project. We are in the process of trying to publish this literature review on the visual aspect of endometriosis. It seems to be particularly difficult to publish this type of data despite the considerable amount of work involved. Our next step was to annotate surgical images, showing the computer where the lesions were located and what type of lesions they were. We knew from previous experience that we would have to annotate a very large number of images, probably tens of thousands, so we had to standardize a method. Obviously, there are many possible ways of doing this. Should we use still images or videos? Should we also say on which organ the endometriosis is located?

Should we surround the lesions very precisely, or simply show the area in which they are found? Do we use an exhaustive or a simplified classification? There are a number of different issues at stake. On the one hand, we need veracity, so the annotations have to be as precise as possible, which inevitably takes a lot of time; on the other hand, we also need a high volume of annotations, and therefore a certain velocity, which you can achieve by simplifying the process a little but losing a little veracity. So we have to find the right balance between the two. To define the method, we applied a Delphi process with proposals. For example, here we proposed to locate the lesions in simple rectangular boxes, which is very quick but not very precise.
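
Here is a hypothetical sketch of what one frame-level annotation could look like under the rectangular-box proposal; the field names, identifiers, and values are illustrative assumptions, not the project's actual annotation schema.

```python
# Hypothetical sketch of one frame-level annotation under the rectangular-box
# proposal; field names, identifiers, and values are illustrative, not the
# project's actual annotation schema.
from dataclasses import dataclass

@dataclass
class LesionBox:
    x_min: int          # pixel coordinates of the rectangle on the frame
    y_min: int
    x_max: int
    y_max: int
    visual_class: str   # e.g. "superficial"
    subclass: str       # e.g. "subtle"

@dataclass
class FrameAnnotation:
    video_id: str       # anonymized video identifier
    frame_index: int    # which frame of the video was annotated
    boxes: list[LesionBox]

example = FrameAnnotation(
    video_id="center03_case017",  # hypothetical identifier
    frame_index=1240,
    boxes=[LesionBox(412, 230, 505, 318, "superficial", "subtle")],
)
```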

Consensus was reached on 13 proposals by 14 international experts. This made it possible to clearly define the method to be used. This slide shows how videos are collected from various expert centers. The centers must also specify the Enzian classification for the lesions, so we know the stage and location, and there is an anonymization system before the videos are transferred to our servers. In Clermont-Ferrand, the videos are annotated by residents who have undergone specific training in the use of the software and, obviously, in the recognition of endometriosis lesions. They then annotate around 10 images for each second of video, and the annotations are checked by experts. To date, five international centers have provided us with surgical videos, and we have annotated over 100 endometriosis videos and over 50,000 images. Here are the first metric results of the algorithm, which objectively are rather disappointing: the measures of precision and recall, which roughly correspond to the algorithm's positive predictive value and sensitivity, are quite low.
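
For readers unfamiliar with these metrics, here is a small sketch of how frame-level precision and recall can be computed for box detections; the IoU threshold and greedy matching rule are common defaults, not necessarily those used by the project's engineers.

```python
# Sketch of frame-level precision and recall for box detections, the kind of
# metrics mentioned above. The IoU threshold and greedy matching rule are common
# defaults, not necessarily those used by the project's engineers.

def iou(a, b):
    """Intersection over union of two boxes given as (x_min, y_min, x_max, y_max)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def precision_recall(predicted, ground_truth, iou_threshold=0.5):
    """Greedily match predicted boxes to expert-annotated boxes, one to one."""
    matched, true_positives = set(), 0
    for p in predicted:
        for i, g in enumerate(ground_truth):
            if i not in matched and iou(p, g) >= iou_threshold:
                matched.add(i)
                true_positives += 1
                break
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(ground_truth) if ground_truth else 0.0
    return precision, recall
```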

To be honest, I was a little discouraged when our engineers sent me these results after hundreds of hours spent annotating endometriosis lesions. But then I watched the algorithm's recognition videos, and in the end the results do not look too bad from a surgical point of view. As you can see, the algorithm recognizes the lesions on the video, but not on all the frames, which is sufficient for practical surgical assistance but also explains why the numerical results are not so good. Here you can see another example on a more complex image. It is clear that not all lesions are recognized on all images. Nevertheless, almost all lesions are recognized at some point in the video, which is very encouraging. We are therefore in the process of redefining ways of measuring the sensitivity and specificity of the algorithm at the scale of an entire video, so that they correspond more closely to what is expected in reality.
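
The video-level idea can be sketched as follows: count a lesion as detected if the algorithm finds it in at least some minimum number of the frames in which it is visible. The exact rule is still being defined by the team, so the threshold and data structures here are purely illustrative.

```python
# Sketch of a video-level metric: a lesion counts as detected if the algorithm
# finds it in at least `min_frames` of the frames where it is visible. The
# exact rule is still being defined by the team; this threshold is illustrative.

def video_level_recall(detected_frames_per_lesion: dict[str, int],
                       visible_frames_per_lesion: dict[str, int],
                       min_frames: int = 1) -> float:
    """Fraction of annotated lesions detected in at least `min_frames` frames."""
    lesions = list(visible_frames_per_lesion)
    if not lesions:
        return 0.0
    hits = sum(detected_frames_per_lesion.get(lesion, 0) >= min_frames
               for lesion in lesions)
    return hits / len(lesions)

# Example: three lesions visible in a video, two of them detected at least once.
print(video_level_recall({"lesion_a": 14, "lesion_b": 2},
                         {"lesion_a": 300, "lesion_b": 120, "lesion_c": 80}))
```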

Here is one last example to show you that it can work even in difficult cases with deep endometriosis. We also recently carried out live trials in the OR, and the results were very encouraging, potentially even better, since the surgeon will bring the camera closer to the element detected by the algorithm, thus increasing sensitivity. So is this the future of surgery? Yes, it is, but for the moment the algorithm is far from being usable in practice. Although the first tests in the OR are promising, it is likely that the first step will be visual indications on the laparoscopy monitor to assist the surgeon, for example to detect the origin of bleeding or the path of the ureter. We are still a very long way from autonomous robotic surgery, which obviously is the ultimate goal. Our best immediate chance of improving endometriosis surgery is by investing in better training for surgeons. So while we can seek to improve our technologies in the OR, we must not neglect research into surgical education. This is our best hope of improving surgical quality in the short term. So thank you very much for your attention, and do not hesitate to write to me if you have any questions. Thank you.