Faisal Mahmood: A.I.'s Transformation of Pathology

Update: 2024-07-28

Description

Full videos of all Ground Truths podcasts can be seen on YouTube here. The audios are also available on Apple and Spotify.


Transcript with audio and external links

Eric Topol (00:05 ):

Hello, it's Eric Topol with Ground Truths, and I am really thrilled to have with me Professor Faisal Mahmood, who is lighting it up in the field of pathology with AI. He is on the faculty at Harvard Medical School, also a pathologist at Mass General Brigham and with the Broad Institute, and he has been publishing at a pace that I just can't believe. We're going to review that in chronological order. So welcome, Faisal.

Faisal Mahmood (00:37 ):

Thanks so much for having me, Eric. I do want to mention I'm not a pathologist. My background is in biomedical imaging and computer science. But yeah, I work very closely with pathologists, both at Mass General and at the Brigham.

Eric Topol (00:51 ):

Okay. Well, you know so much about pathology. I just assumed that you were, actually. But you are taking computational biology to new levels, and you're in the pathology department at Harvard, I take it, right?

Faisal Mahmood (01:08 ):

Yeah, I'm at the pathology department at Mass General Brigham. So the two hospitals are now integrated, so I'm at the joint department.

Eric Topol (01:19 ):

Good. Okay. Well, I'm glad to clarify that, because as far as I knew you were a hardcore pathologist. You're changing the field in a way that is quite unique, I should say, because a number of years ago, deep learning was starting to get applied to pathology just like it was in radiology and ophthalmology. And we saw some early studies with deep learning whereby you could find so much more on a slide that otherwise would not even be looked at or considered, or even that humans wouldn't be able to see. So maybe you could just take us back first to the deep learning phase before these foundation models that you've been building, just to give us a flavor for what was the warmup in this field?

Faisal Mahmood (02:13 ):

Yeah, so I think around 2016 and 2017, it was very clear to the computer vision community that deep learning was really the state of the art, where you could have abstract feature representations that were rich enough to solve some of these fundamental classification problems in conventional vision. And that's around the time when deep learning started to be applied to everything in medicine, including pathology. So we saw some earlier studies in 2016 and 2017, mostly in machine learning conferences, applying this to very basic patch-level pathology datasets. Then in 2018 and 2019, there were some studies in major journals, including in Nature Medicine, showing that you could take large amounts of pathology data and classify what's known to us, including predicting what's now commonly referred to as non-human identifiable features, where you could take a label, and this could come from molecular data or other kinds of data like treatment response and so forth, and use that label to classify these images as responders versus non-responders, or as having a certain kind of mutation or not.

(03:34 ):

And what that does is that if there is a morphologic signal within the image, it would pick up on that morphologic signal even though humans may not have picked up on it. So it was a very exciting time of developing all of these supervised models. And then I started working in this area around 2019, and one of the first studies we did was to try to see if we could make this a little bit more data efficient. And that's the CLAM method that we published in 2021. And then we took that method and applied it to the problem of cancers of unknown primary, that was also in 2021.
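For readers who want a concrete picture, below is a minimal, hypothetical sketch in Python/PyTorch (not the published CLAM code) of the weakly supervised setup described above: patches from one whole-slide image are embedded with a frozen encoder, pooled with an attention mechanism in the spirit of attention-based multiple instance learning, and classified against a single slide-level label such as mutation status or treatment response. The encoder choice, dimensions, and example labels are assumptions for illustration only.

```python
# Hypothetical sketch of weakly supervised whole-slide classification with
# attention pooling (in the spirit of CLAM, but not the published code).
import torch
import torch.nn as nn
import torchvision.models as models

# Frozen patch encoder; a ResNet-50 backbone is an assumption here, and any
# patch feature extractor could stand in for it.
encoder = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
encoder.fc = nn.Identity()          # keep the 2048-d patch embeddings
encoder.eval()

class AttentionMIL(nn.Module):
    """Attention-based multiple instance learning over patch embeddings."""
    def __init__(self, feat_dim=2048, hidden=256, n_classes=2):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden),
            nn.Tanh(),
            nn.Linear(hidden, 1),
        )
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, patch_feats):
        scores = self.attention(patch_feats)             # (N, 1) per-patch scores
        weights = torch.softmax(scores, dim=0)           # attention over all patches
        slide_feat = (weights * patch_feats).sum(dim=0)  # weighted slide embedding
        return self.classifier(slide_feat), weights      # slide logits + attention map

mil_head = AttentionMIL()

def classify_slide(patches):
    """patches: (N, 3, 224, 224) tiles cut from one whole-slide image.
    Returns logits for a slide-level label, e.g. mutated vs. wild-type,
    or responder vs. non-responder."""
    with torch.no_grad():
        feats = encoder(patches)                         # (N, 2048)
    return mil_head(feats)

# Usage with random stand-in data (a real pipeline would tile tissue regions):
logits, attention = classify_slide(torch.randn(64, 3, 224, 224))
```

The attention weights are also what make this kind of model somewhat interpretable: the highest-weighted patches indicate which regions of the slide drove the slide-level prediction.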

Eric Topol (04:17 ):

So just to review, in the phase of deep learning, which was largely, as we're discussing, supervised with ground truth images, there already was a sign that you could pick up things like the driver mutation, the prognosis of the patient from the slide, the structural variations, the origin of the tumor, things that would never have been conceived by a pathologist. Now with that, I guess the question is, was all this confined to whole slide imaging, or could you somehow take an H&E slide, a conventional slide, and be able to do these things without having to have a whole slide image?

Faisal Mahmood (05:05 ):

So at the time, most of the work was done on slides that were fully digital. So taking a slide and then digitizing it and creating a whole slide image. But we did show in 2021 that you could put the slide under a microscope and then just capture it with a camera, or just with a cell phone coupled to the microscope, and then still make those predictions. So these models were quite robust to that kind of domain adaptation. And still, I think that even today the slide digitization rate in the US remains at around 4%, and the standard of care is just looking at a glass slide under a microscope. So it's very important to see how we can further democratize these models by just using the microscope, because most microscopes that pathologists use do have a camera attached to them. So can we somehow leverage that camera so that a model that might be trained on a whole slide image still works with the slide under a microscope?
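As a rough illustration of that point (an assumed pipeline, not the study's code), a photo taken through the microscope's camera or a phone can be resized, color-normalized, and tiled into the patch size a whole-slide-trained model expects, then pushed through the same weights. The filename, crop sizes, and normalization statistics below are hypothetical.

```python
# Hypothetical sketch: reusing a whole-slide-trained model on a microscope
# camera or phone capture by normalizing and tiling the photo.
import torch
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize(1024),          # bring the capture to a working resolution
    transforms.CenterCrop(896),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),  # assumed stats
])

def tile(image, size=224):
    """Split a (3, H, W) field-of-view photo into non-overlapping patches."""
    c, h, w = image.shape
    patches = image.unfold(1, size, size).unfold(2, size, size)
    return patches.permute(1, 2, 0, 3, 4).reshape(-1, c, size, size)

# "field_of_view.jpg" is a hypothetical capture of the slide under the scope.
photo = preprocess(Image.open("field_of_view.jpg").convert("RGB"))
patches = tile(photo)                  # (16, 3, 224, 224) for an 896x896 crop
# logits, attention = classify_slide(patches)   # same weights as the sketch above
```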

Eric Topol (06:12 ):

Well, what you just said is actually a profound point, that only 4% of the slides are being reviewed digitally, and that means we're still in an old pathology era without the enlightenment of machine eyes. I mean these digital eyes that can be trained, even without supervised learning as we'll get to, to see things that we'd never see. And I know, recalling back in 2022, you and I wrote a Lancet piece about the work that you had done, which is very exciting, with cardiac biopsies to detect whether a heart transplant was undergoing rejection. This is a matter of life or death, because you have to give more immunosuppression drugs if it's a rejection. But you could do that when it's not a rejection, or you could miss it, and there's lots of disagreement among pathologists, cardiac pathologists, regarding whether there's a rejection. So you had done some early work back then, and much of what we're going to talk about, I think, relates more to cancer, but it's across the board in pathology. Can you talk about the interobserver variability of pathologists when they look at regular slides?

Faisal Mahmood (07:36 ):

Yeah. So when I first started working in this field, my thinking was that the slide digitization rate is very low. So how do we get people to embrace and adopt digital pathology and machine learning models that are trained on digital data if the data is not routinely digitized? So one of my lines of thinking was that if we focus on problems that are inherently so difficult that there isn't a good solution for them currently, and machine learning or deep learning provides a tangible solution, people will be kind of forced to use these models. So along those lines, we started focusing on the cancers of unknown primary problem and the myocardial biopsy problem. So we know that the Cohen's kappa, or the interobserver variability...
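For context, Cohen's kappa is the chance-corrected agreement statistic being referred to here. A toy computation, using made-up reads from two hypothetical pathologists purely for illustration, looks like this:

```python
# Toy Cohen's kappa computation; the labels are invented for illustration.
from sklearn.metrics import cohen_kappa_score

pathologist_a = ["rejection", "no rejection", "rejection", "rejection", "no rejection"]
pathologist_b = ["rejection", "rejection", "rejection", "no rejection", "no rejection"]

kappa = cohen_kappa_score(pathologist_a, pathologist_b)
print(f"Cohen's kappa: {kappa:.2f}")   # 1.0 = perfect agreement, 0 = chance-level
```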
