Our guest this episode was Dr. Muhammad Mamdani, a Canadian professor, pharmacist and epidemiologist known for contributions to pharmacoeconomics, drug safety, and the application of data analytics and artificial intelligence to medical systems. He is the Vice-President of Data Science and Advanced Analytics at Unity Health Toronto and the founder and Director of the Temerty Centre for Artificial Intelligence Research and Education in Medicine (T-CAIREM). He is also the founder and Director of the Li Ka Shing Centre for Healthcare Analytics Research and Training (LKS-CHART), a healthcare data analytics program based at Unity Health Toronto.
Dr. Mamdani joined Reuben Hall to discuss his work in digital health, data analytics, and artificial intelligence.
“Wouldn’t it be great if busy clinicians could work with computer scientists, engineers, and statisticians to really make some headway into this AI space? And this notion of bringing very multidisciplinary groups of people together to learn from each other to advance the field of AI was really exciting to me.” Muhammad Mamdani
Watch the podcast video now, or download from any of your favourite podcast players.
Find Moving Digital Health on Apple Podcasts and Spotify, and subscribe to the MindSea newsletter to be notified about future episodes.
Read Transcript:
Reuben (00:05.908)
Welcome to the MindSea Podcast series, Moving Digital Health. Our guest today is Muhammad Mamdani. Muhammad is the Director of the Temerty Centre for Artificial Intelligence Research and Education in Medicine, or T-CAIREM. He is also the Vice President of Data Science and Advanced Analytics at Unity Health Toronto. Thanks for joining us today, Muhammad.
Muhammad Mamdani (00:29.709)
Thank you. It’s a pleasure to be here.
Reuben (00:32.454)
Maybe you could tell us a bit about your background to start.
Muhammad Mamdani (00:35.714)
Sure, so I have a bit of a mixed background. I did all my training in the United States and went to school to get a doctor of pharmacy degree, so a PharmD. And I actually decided I wanted to kind of advance my understanding more on the quantitative side of things because when you actually look at healthcare and we make decisions as clinicians, it’s often processing a lot of data and information.
So, you know, if we look at how the process usually works in healthcare, it's about diagnosis, prognosis, treatment, and then of course, you have to communicate your understanding of things. And when we look at these categories, how do we make a diagnosis? Some diagnoses are pretty straightforward, but a lot of them aren't. And you have to consider a whole bunch of parameters when you're making a diagnosis.
And when you look at how good we are at making diagnoses, some would argue that in many cases, we're actually not that great at it. Sometimes we lack the information we should have because we didn't collect it. Sometimes we just have trouble processing the information that we do have in front of us. So an example of this is something like asthma. When we look at the diagnosis of asthma, there have been several studies, some in the United States, and one in Ottawa here in Canada, where they brought in, I think it was about 600 or so patients who were diagnosed with asthma, many of them being treated for asthma. And they said, you know what, we're going to do proper spirometry and lung function tests and do all the assessments and make sure that you actually have asthma. It turns out about a third of patients actually didn't have asthma. So, you know, we struggle a bit there. Once you've made the diagnosis, all right, well then how do you treat the patient?
Muhammad Mamdani (02:28.566)
Well, first, I think we try to get a sense of how well this patient is going to do. If the patient's going to do poorly very quickly, then we're going to be very aggressive with our management, maybe pick therapies or treatment strategies that may have a lot of side effects but we know are super effective. Or if we know they're maybe going to do okay, maybe we'll be a bit more gentle, pick something that maybe isn't as effective but doesn't have nearly as many side effects.
But study after study after study has shown how poor we are at prognosis. And then of course there's treatment. You know, conditions like depression: oftentimes, when clinicians have patients with depression, they'll have maybe 15 or 20 different drugs to choose from. How do you pick the one you think is going to work? A lot of times we just guess. We say, try this one, the patient will come back in two to four weeks, we'll reassess, and if it doesn't work, we'll try something else.
There must be a better way of doing this. That's what I've always struggled with as we go through the clinical process: how do we actually make this easier and better for patients, and easier and better for clinicians? And if you look at some of the literature, the average complex medical decision involves considering hundreds of parameters. One statistic out there put it at about a thousand different parameters. And if you look at…

studies in psychology, Miller in the 1950s concluded that the average human can process seven plus or minus two things at the same time. So you can imagine it's a bit of a gap, right? It's not really a fair fight. So this is where, as a clinician, I was thinking, okay, how can we do better? I want to know more about data and numbers and statistics, because that's what's going on in my head in terms of diagnosis, prognosis, and treatment. So then I went and did a more quantitative degree. At the time, I was also interested in how people make decisions from a financial perspective, and health economics was the big thing then. So I went and did a fellowship in pharmacoeconomics and outcomes research, but at night I would take classes to get a full-fledged Master of Arts degree in economics and, of all things, econometric theory. So very quantitative, mathematical-proof-based work was the big thing.
Two years that I’ll never get back, but it was a great learning experience. And then I went over to Harvard to do a master’s in public health and statistics and epidemiology. Because again, I wanted to understand the mechanics of data and numbers and how we make decisions. And then I came back to Toronto and my first job was in the research sector and I actually worked at a major academic teaching hospital in Toronto and did clinical work as well.
Then I actually just evolved and said, all right, you know what, I want to get more into understanding data and how we make decisions and evidence and all of that sort of stuff. So I then shifted over more to research and this whole AI thing came about. I said, you know, that could probably help quite a bit. And started getting pretty deep into the machine learning AI space.
Reuben (05:44.224)
Well, that’s quite the background. You can really see the thread of that, the continuous searching for understanding and how things work and how they can be better. And how does that translate to your role at T-CAIREM?
Muhammad Mamdani (06:01.462)
Yeah, so at T-CAIREM, I've been very fascinated with data in general and statistical models and such, and have developed a much stronger interest in machine learning and artificial intelligence. I've been so impressed with how machine learning models, the AI space, I guess you could say, can really get at complex relationships in data. That's where I said, you know, it would be great if we had a more consolidated approach to this, because…
It's very much a team sport. Most clinicians, they're not computer scientists. They don't deal with statistics or data on a day-to-day basis, but computer scientists do. And so while clinicians have all sorts of issues and problems and challenges with data and ingesting information and all that sort of stuff, computer scientists live and breathe this sort of thing. So do statisticians; they live and breathe data.
A lot of engineers actually will go into data analytics as well. Wouldn’t it be great if busy clinicians could work with computer scientists, engineers, and statisticians to really make some headway into this AI space? And this notion of bringing very multidisciplinary groups of people together to learn from each other to advance the field of AI was really exciting to me. And it just so happened that…
the Temerty Faculty of Medicine at the University of Toronto was able to secure a very generous donation from the Temerty family here to actually launch a centre focused on AI and medicine. And I was fortunate enough to be able to become the director of that centre. And the centre really focuses more on education, research, building the infrastructure to have health-related data so people can actually learn off of them and to fuel that next generation of leaders in AI.
Reuben (07:55.78)
Okay, and can you talk about some of the specific innovations or projects you’re working on at T-CAIREM?
Muhammad Mamdani (08:03.834)
Sure. We do have some really neat, innovative research projects that are more focused on foundational work, like, for example, building models that will predict survival after a lung transplant. And there are many examples like this that we've funded from a research perspective. But what we're shifting a little bit more towards is translation grants. How do you actually take these technologies, these solutions, and bring them into clinical practice? How do you actually deploy them? So the money won't really go towards developing the algorithms. We'll assume that you've already developed the algorithm, you've validated it, and it's ready for deployment. The money then goes towards: how do you integrate it into a current IT system? How do you get permissions? How do you integrate it into a clinician workflow? How do you do all of that in terms of human factors and change management?
Then we get into: well, you've deployed it, now let's evaluate the outcomes. Let's see the actual tangible benefits, or failures, that these solutions bring. That's what we're getting much more excited about. So a project that we recently funded was around in-vitro fertilization. There's a very deliberate approach to identifying the sperm that will be most likely to successfully fertilize the egg. So how do you identify that sperm? There are actually AI algorithms now that have been developed to identify it in a much more efficient manner than we're used to. And that algorithm has already been tested and validated. Now it's being funded to be deployed into a fertility clinic, and I think that will really end up changing the game for a lot of physicians and, more importantly, patients.
Reuben (09:52.328)
Excellent. And as you mentioned, translating research into practice can be really challenging. Maybe you can tell me a little bit more about that process.
Muhammad Mamdani (10:06.282)
Yeah, it is certainly very challenging. And it depends on what sort of an AI algorithm or machine learning algorithm or whichever data-related algorithm you want to deploy. Because many of them, especially when we look at clinical prediction models, are dependent on people using them. So you can have the best model in the world. But if people don’t use them, it really won’t matter. So I can give you a few concrete examples.
Muhammad Mamdani (10:36.818)
My other hat that I wear is Vice President of Data Science and Advanced Analytics at Unity Health Toronto. And we have a very mature AI program, Applied Health AI program, where our team of 30 data scientists is tasked with developing and deploying solutions into clinical practice. So we have now over 50 data-driven solutions that are running in our hospital as we speak.
Muhammad Mamdani (11:04.694)
And the model that we've had to employ is actually a bit of a different model. It starts off with our data science team not being the one to ask the questions. That's something that we insist on. The questions have to come from our busy clinicians and our decision makers who deal with the realities of medicine day to day. So these are the ones who can say:

This is an issue, but it's never gonna fit into my workflow, so don't even bother with it. Whereas this is an issue that is really important to me. It bothers me every day, it wastes a lot of my time, or I'm seeing patients die because of this. Can we fix this, please? I am so invested that I will put in my time and energy to work with you if we can just fix this. Those are the problems that we wanna tackle.
Reuben (11:54.664)
Yeah, and that's one thing that blows me away, actually: the commitment from some of these physicians to being part of these cross-functional teams on top of their day-to-day practice. Because they live it every day, and they see the potential for a better solution and a better quality of care that they're able to give, above and beyond. That always amazes me.
Muhammad Mamdani (12:22.518)
Absolutely. It's actually heartening to see the level of dedication and commitment of these individuals, who genuinely care about their patients and just want the best for them. And if they see that there is a reasonable solution in sight, they actually do spend the time that's needed to create it. So the model that we use at Unity Health is actually quite simple: anyone can propose an idea. You have to fill out an intake form to structure it.
But you can actually say: this is the problem I'm struggling with. And what we ask in our intake process is, number one, tell us about the problem. Can you quantify it for us? Tell us how big the problem is. If you can't quantify it, we have data scientists; we can pull some data and try to quantify the problem that you're talking about. The next section is: tell me about how you envision an AI solution helping. How would it work? How would it fit into your workflow if it gave a prediction or automated X or Y?
How would it change what you do? What are the interventions? And are the interventions effective? The next section asks probably the most important question in the entire form: what are the outcomes you're going to change? Why are we doing this solution, right? And your options are actually limited to a few things that are important to us as a hospital. Your options are as follows: death, readmission, length of stay, human effort, cost, and other.
And if you click Other, and that’s the only box you click, you’re automatically deprioritized. We want to be laser focused on doing things that are meaningful for the hospital, for the patient, for the clinician. And then, of course, if they say, I think this thing will actually reduce deaths, then we ask by how much? 10%, 20%, 15%. What’s your best guess? Because if you tell us 10%, when we deploy it, we’re actually going to monitor it. And if it doesn’t hit the benchmark of 10% that you’ve said is important, then we’re either going to shut it down or we’re going to have to revisit, why did we not meet the target? Because we want that meaningful impact and we don’t want to waste resources.
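To make that intake logic concrete, here is a minimal sketch in Python of how the prioritization rule described above might be expressed. The field names, values, and ranking are hypothetical illustrations, not Unity Health's actual form or process.

```python
from dataclasses import dataclass, field

# Outcomes the intake form treats as meaningful; "other" alone is deprioritized.
CORE_OUTCOMES = {"death", "readmission", "length_of_stay", "human_effort", "cost"}

@dataclass
class IntakeProposal:
    title: str
    # Outcome -> claimed relative improvement (e.g. {"death": 0.10} for a 10% drop).
    # The claim later becomes the benchmark the deployed solution is monitored against.
    outcome_targets: dict = field(default_factory=dict)

    def is_prioritized(self) -> bool:
        # Checking only "Other" means automatic deprioritization.
        return bool(CORE_OUTCOMES & self.outcome_targets.keys())

    def claimed_impact(self) -> float:
        # Largest claimed improvement on a core outcome, used for ranking.
        return max((v for k, v in self.outcome_targets.items()
                    if k in CORE_OUTCOMES), default=0.0)

def rank_proposals(proposals: list) -> list:
    """Drop 'Other'-only proposals and rank the rest by claimed impact."""
    return sorted((p for p in proposals if p.is_prioritized()),
                  key=lambda p: p.claimed_impact(), reverse=True)

# Example: a 20% mortality claim outranks a smaller length-of-stay claim,
# and an "Other"-only proposal is filtered out entirely.
queue = rank_proposals([
    IntakeProposal("Early warning system", {"death": 0.20}),
    IntakeProposal("Report formatter", {"other": 1.0}),
    IntakeProposal("Discharge helper", {"length_of_stay": 0.05}),
])
```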
Reuben (14:28.676)
There's always an opportunity cost to every initiative, and you have to manage your resources wisely, of course. It's really interesting. It must help solve the adoption problem too, because you're involving the physicians right from the beginning. They're essentially initiating the process through the intake.
Muhammad Mamdani (14:52.03)
Yeah, absolutely. And we also insist that they not only initiate, but are actually part of the entire process. So we set up bi-weekly meetings, and our clinical teams, our data science teams, and our deployment teams are expected to attend every single one of these meetings. So they're in it throughout the whole process: from the idea, to the clinicians giving the data scientists a mini med school on what the issue is medically, to the data scientists giving our clinicians a mini boot camp on machine learning so they understand what we're doing, and then really getting into purposeful, effective model development. And then of course, once we actually deploy the solution, because they've been so ingrained in the process, they become our champions and bring along their colleagues to make sure the adoption is there.
Reuben (15:40.72)
And have you found that the whole staff at Unity Health is essentially bought into this process, because that's just the type of place it is, or are there some people who are not always 100% on board?
Muhammad Mamdani (16:00.33)
Yeah, it's an interesting question. So we actually have a few things working in our favor. The first is that at Unity Health in particular, we have declared AI to be a core strategic pillar. Yeah, we're the only hospital in the country that has said AI is one of our core strategic pillars. We have eight strategic pillars; three of them are core pillars, and AI is one of those three. So from the top down, AI takes priority throughout the institution.
Muhammad Mamdani (16:29.462)
The other thing that really helps is when we go through the process of clinician engagement, we require not only the clinician to say, I think it's a good idea, but sign-off from their division, their department, and their program and medical directors to say: we also think this is a big problem, and we all commit to working on it. If you take the…
Reuben (16:49.788)
And have you had success expanding some of these solutions outside Unity Health as well to other hospitals?
Muhammad Mamdani (17:00.51)
Yeah, it's a good question. I would say we've had a failure in trying to expand it beyond Unity Health, and I'm more than happy to talk about that. Again, we have about 50 of these solutions that are running in our hospital. Just to give you an example of things, I'll pick on a clinical example. We have one solution that predicts if somebody is going to die or go to the ICU in the next 48 hours.
Reuben (17:04.244)
Ha ha ha!
Muhammad Mamdani (17:25.15)
And it runs every hour on the hour. So it's constantly monitoring our internal medicine patients. And it flags red, yellow, or green: is this patient going to die? If it's red, it's high risk, and it pages the medical team. It's all automated. We deployed it in October of 2020, so almost three years of experience, and we've seen a substantial drop in mortality because of this solution. Yeah, that one was fairly complex. But it's up and running as we speak, and it's been quite successful. But there was one that wasn't successful in terms of external deployment. We developed an algorithm that predicts how many patients are going to come to our emergency department, and we predict in advance. What the algorithm does is take four years of historical data and look for all sorts of patterns. We have it scraping the web, so it's updated regularly.

And it knows, oh, we're forecasting a snowstorm for tomorrow night. It scrapes the web for city planning data: is there a long weekend coming up? Are there events happening that we should be aware of? And then it predicts three to seven days in advance how many patients are going to come to the emergency department. So today is Thursday; we can tell you that Saturday from noon to 6, there'll be 82 patients waiting in the emergency department.

Ten of them will have mental health issues, 12 of them will be harder to treat, and the rest will be easier. Our accuracy is typically between 94 and 96%. So, yeah, it's neat. And actually it's funny, because if you look at the literature, in the 1980s a physician in the UK published something in the British Medical Journal saying, you know, emergency department volumes are pretty predictable. And he was right, they are pretty predictable. But…
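The conversation doesn't describe the model's internals, but a minimal sketch of this kind of forecaster, assuming a gradient-boosted regressor over calendar, lag, and scraped exogenous features, could look like the following. All column names here are hypothetical, not the actual Unity Health pipeline.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

def build_features(df: pd.DataFrame) -> pd.DataFrame:
    """df is indexed by 6-hour block (DatetimeIndex) with a 'volume' column
    plus scraped exogenous columns, e.g. snowfall_cm, is_long_weekend."""
    out = df.copy()
    out["block_of_day"] = out.index.hour // 6   # which 6-hour window
    out["day_of_week"] = out.index.dayofweek
    out["month"] = out.index.month
    # Lagged volume from one week earlier (4 blocks/day * 7 days); this lag
    # is still known when predicting 3-7 days ahead.
    out["volume_lag_1w"] = out["volume"].shift(28)
    return out.dropna()

def fit_forecaster(history: pd.DataFrame) -> GradientBoostingRegressor:
    """Train on ~4 years of history."""
    feats = build_features(history)
    X, y = feats.drop(columns=["volume"]), feats["volume"]
    return GradientBoostingRegressor().fit(X, y)
```

At prediction time you would build the same features for future blocks, filling the exogenous columns from scraped weather forecasts and known calendar events.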
Reuben (18:58.176)
Impressive.
Reuben (19:14.412)
But did COVID throw off the data there? Because if you're looking at four years of historical data, there's probably some outliers in there.
Muhammad Mamdani (19:23.23)
Yeah, so we actually created this before COVID. And we deployed it before COVID, and we use it for planning purposes. So if we knew it was going to be a bad day, let’s say, two days from now, we ask a few more nurses to come in, get our docs prepped to say, hey, you may need to work another hour or two if that’s OK. And then COVID hit, and we didn’t have an emergency department problem anymore because not many people were coming to the hospital. And so it worked out because, number one,
Reuben (19:48.264)
Yeah, fair enough.
Muhammad Mamdani (19:52.006)
It wasn't really being used because we didn't have an issue with patient volumes, but it did throw the projections off for about a week or so. And then it recalibrated and it was fine again, but it did disrupt it for sure. And the reason why I'm bringing up this example around external transferability is that it was pretty accurate. I presented this at a conference, and about a dozen hospitals came up to me afterwards and said, we want this too.
And we said, of course, you can have this. And about half a dozen of them followed up and said, no, we're actually serious. We really want this. So we said, OK, fine. We sent them code. And of course, the response was, well, as hospitals, we don't have people who know Python or the coding language that you use. And we said, OK, well, maybe you could just send us timestamped data, because that's all it really needs, and we'll send you the predictions back. We can run it for you every day if you like. And so we did that. And they came back saying, well, we don't understand the model output.
Well, then, do we create the dashboards for you? Do we maintain it for you? Like, we can't do this as a single hospital. It's not sustainable for us, right? Yeah. Exactly. And so we quickly realized this is not the right thing for us to do. We can't do this in a responsible manner. So this is where we were scratching our heads, thinking other people should be using these solutions. And it just so happened that the former head of AI for TD Bank
Reuben (20:57.136)
Yes, yes, you’re not a SaaS company.
Muhammad Mamdani (21:19.17)
wanted to step down and do something new and different. And we talked with him and said, hey, would you like to help create something that will focus on taking some of our stuff and deploying it in other hospitals? And so a startup called Signal One was launched in April of 2022. Their focus is taking some of the work that we do, and creating some new solutions, to deploy them into other hospitals.
Reuben (21:46.476)
That's excellent, because like you mentioned, that's the last step, right? You've proven that it works in your hospital, but you're not implementers. You need that champion to spin it off into a commercially viable product that can scale and support multiple organizations.

So that's, I guess, a failure that sounds like it turned into a success in the end.
Muhammad Mamdani (22:20.81)
Yeah, I think so. I quickly realized the need for private sector collaboration and we pivoted, but it took that failure to realize, wow, yeah, we need to pivot.
Reuben (22:31.688)
So how many other successful innovations do you have that are just waiting for a champion to scale them out?
Muhammad Mamdani (22:41.898)
Yeah, it's a good question. So again, we have over 50 of these things just at our hospital alone. And I think startups need to be laser focused, so there's only a couple that they can take on. We're hoping there's more and more, but I do think there's capacity to not only commercialize some of the rest of our solutions, but also to create new ones with the private sector, because they're going to come at it with different lenses.
So what we're exploring now is: can we bring in the private sector right from day one, to say, hey, you know what, you have expertise in X, our clinicians and our data scientists have expertise in Y, why don't we work together to create something that will be meaningful, that will be impactful? And if it works and really helps our patients, private sector, you have the resources to productize it. So go off and run and productize it and take it to the rest of the world.
Reuben (23:35.944)
Yeah, that's something I've thought about in the startup world before: the redundancy of multiple startups trying to solve the same problem. In one way that's good, because not all of them are going to survive, and theoretically the best solution is going to win. But when you look at the private sector and public sector trying to solve the same problem, maybe there is that opportunity to collaborate and to leverage the strengths of each side.
Muhammad Mamdani (24:12.13)
Absolutely. At least at our hospital, we're public sector. We don't have the resources and, quite frankly, the expertise to be able to productize and commercialize. That's just not what we do; it's not a core business of ours. But on the flip side, we see startups, and I think one of the stats out there is that 97% of digital health startups will fail. And when we look at some of the key reasons why they fail, there are three things that jump out. The first is,

they didn't tackle the right clinical problem. I can't tell you how many times I see people working on things that are just so irrelevant clinically, because they didn't have that clinical insight or they didn't think things through operationally. In fact, there was…
Reuben (24:56.429)
And is that on the startup side or on the academic side as well?
Muhammad Mamdani (25:00.918)
Both, both. For example, there was a study that was published, a systematic review, that looked at over 400 AI research initiatives that were trying to fix problems during COVID. And the lens they looked at it with was: how many of these things could actually be deployed into clinical practice? And the answer was zero. So I think we start off, in many cases, with the wrong question.
The second thing that people lack is the clinical environment. They don't have that direct day-to-day interaction with the clinicians who are on the ground saying, this is the dumbest thing I've seen, or, wow, this is working really well. Then of course, the third thing is data. A lot of startups, so many of them, really lack the data underneath. They use these research data sets that really don't reflect real-world data sets at all.
Muhammad Mamdani (25:58.57)
And then they have models that don't translate, because they were built on the wrong data set. So this is where we thought, look, I mean, we have a living lab literally at our hospital, with a proper environment for AI, with clinicians, with a process, with data that's available. Why aren't we leveraging this with the private sector to be able to co-create solutions that will change the world?
Reuben (26:24.752)
Mm-hmm. It does sound like kind of the perfect situation, an incubator for solutions. Maybe you could talk a little bit more about how you prioritize and evaluate the different innovations to either wind down or move ahead with and put resources behind.
Muhammad Mamdani (26:49.846)
Yeah, it's a great question. So certainly on the academic end, we have a whole international adjudication panel of academic experts that goes through a fairly rigorous process around the quality of the study, more on the academic side of things. On the application side at Unity Health, the way we typically do things is we go through that outcome metrics box and look at the estimated magnitude of benefit.

We have about a year backlog in terms of projects, a year waitlist, because we just have so much volume to work on. So how we prioritize is to get a sense of, well, this project seems to have this sort of an impact. They're saying they're going to drop mortality by 20%, whereas this project is saying they're going to save about $20,000 a year. We're going to take the 20% mortality reduction one. So we make those decisions based on the perceived impact of the project.

But we also consider how cumbersome it's going to be, how much resource intensity it's going to take, both to develop the project and to maintain the solution. Because we're not about a one-and-done. Right?
Reuben (28:00.088)
Yeah, and I'm sure the change management, too, because there's a cost in that aspect as well.
Muhammad Mamdani (28:05.759)
Absolutely.
Muhammad Mamdani (28:11.374)
Completely. So we really try to balance impact and feasibility. Those are the two big things that we really consider. And then the way we decide on sunsetting projects or having another look at projects, I mean, we’re constantly monitoring our algorithms. In fact, our data scientists take call. So for some of our algorithms, they’re literally updating every minute. And so if something breaks in the data pipeline at 2am and there’s hundreds of users and it’s critical to business, what do we do?
Muhammad Mamdani (28:41.366)
They're going to have to wake up at 2 AM and fix it. So you have to have that whole infrastructure in place. So this is where, if we find something's going to be really labor intensive, that may factor into the decision. But say we deployed the thing, and we were told it was going to drop mortality by, let's say, 10%. Now we're monitoring it, and we're seeing that it's actually not a 10% drop. It's more like 2%, or zero. It may not be anything, really.

Then at that point we say: all right, what are we doing wrong? Or did we just get this wrong? Do we just shut it down now?
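As a sketch of the kind of check behind that shut-down-or-revisit call, here is one way to compare observed pre/post outcome rates against the benchmark promised at intake. This is a simplification for illustration; a real evaluation would need proper statistical testing and confounding control, and the function and parameter names are hypothetical.

```python
def meets_benchmark(pre_rate: float, post_rate: float,
                    promised_reduction: float, tolerance: float = 0.5) -> bool:
    """Compare the observed relative drop in an outcome (e.g. mortality)
    against the relative drop promised on the intake form.

    promised_reduction: e.g. 0.10 for a promised 10% relative drop.
    tolerance: fraction of the promise we accept before flagging for review.
    """
    if pre_rate <= 0:
        return False  # nothing measurable to reduce; flag for review
    observed = (pre_rate - post_rate) / pre_rate
    return observed >= promised_reduction * tolerance

# The scenario above: a promised 10% drop that lands near 2% fails the
# check and triggers the shut-down-or-revisit conversation.
print(meets_benchmark(pre_rate=0.080, post_rate=0.0784,
                      promised_reduction=0.10))  # False
```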
Reuben (29:19.245)
That’s a tough call.
Muhammad Mamdani (29:20.734)
Yeah, yeah, it's often tough to take away things that people have been so passionate about, that they've worked on for such a long period of time. But I think the explanation has to be: it would be irresponsible for us to continue using it, because it's making no difference and we're spending so much time and energy doing it.
Reuben (29:40.468)
With my background in user experience and design, I always think about the human factors part and the actual user interface as well. I've seen some AI dashboards that are pretty rough, and I'm like, well, of course no one's using it. You can't tell what's going on here; there's all these crazy pie charts and stuff.

What kind of work do you do to ensure that the output of the AI is translatable and easily digestible by the clinicians taking in that information?
Muhammad Mamdani (30:21.75)
Great question. So at Unity Health, we have four core teams in our data science team. The first team is data integration and governance. These are the data engineers and the data architects and modelers who are responsible for getting data out of really messy systems and getting the data to be usable in a meaningful way for ML or AI algorithms. The second team actually builds the models. They don't spend a lot of their time cleaning and scrubbing the data; they spend more of it just developing really good models.

The third team is actually our product development team. It's headed up by an artist, an artist who went on to do a Master of Science in applied computing. Very talented. And his passion is not so much developing ML algorithms; his passion is how humans interact with them. So his interest is really from an end-user perspective, and he has folks with expertise in design, in human factors, and in software development on his team.

So they do extensive sit-downs with end users. And actually, this is where we usually start on a project: not with the data or the algorithms. We start with, what's the problem that you're working on, and what sort of solution are you envisioning? So we can step back and say, all right, for this sort of a solution, what would it look like? What would it feel like? Does it fit into your workflow? We're gonna do a workflow analysis; we have some operations research engineers as well, and we'll really map out how this would fit, what you would do, how people would react. They talk about things like, tell me about the font and the color and how much information we're gonna put on the screen. Yeah.
Reuben (31:58.28)
Yeah, love it, love it. I'd love to talk to this person. It's…
Muhammad Mamdani (32:04.466)
Yeah, he's spectacular. But yeah, to your point, we can have an incredible model, but if the interface is awful, it's not going to be used. And sorry, the fourth team, just for completeness, is our product management and deployment team. They're focused on getting all of our data scientists, and it's like herding cattle sometimes, to really sit, talk, have firm timelines, milestones, deliverables, a certain discipline to project management.
Muhammad Mamdani (32:32.022)
But they're also on the floors, talking to our clinicians and our end users, and really doing the change management work to get solutions into clinical practice.
Reuben (32:40.656)
Well, it sounds like you have a great team there. And I love hearing the real-world examples. I know you've talked about a couple of really amazing ones already, but are there any other projects you could tell us about that you feel have had a really big impact in changing outcomes?
Muhammad Mamdani (33:02.794)
Yeah, there's actually several. So I'm gonna pick one outside of our hospital and then go back to my hospital. Princess Margaret Hospital is a large cancer hospital; they're among the best in the world, fantastic clinicians there. A lot of patients who have cancer will undergo radiation, and radiation is tricky, because you've got to get the spot right in terms of where you're applying the radiation treatment. If it's too big, you're affecting tissue or spaces around the cancer, and that's not great for the patient, because you're killing cells there. If it's too narrow, that's not good either. If it's too deep, that's not good. So it usually takes considerable time, with multiple people, to develop radiation treatment plans: oftentimes hours and hours, sometimes days, to develop a good treatment plan for a patient. They've developed an AI algorithm that will do this within seconds.

And it's been found to have 88% agreement with clinicians. So it just saves a lot of time and energy; people can now validate the approach rather than trying to create it from scratch. And that's, I believe, still up and running at Princess Margaret. Yeah, it's neat. At our hospital, I'll give you an operational example. Our emergency department nurses came to us and said one of their biggest challenges is assigning nurses to the different zones in the emergency department. And we said, really? That sounds kind of boring. And they said, no, it's actually really important; it's one of our top three stressors. And there are all sorts of rules we have to follow. When we assign the nurses to the different zones in the emergency department, you can't have all junior nurses in one zone and all senior nurses in another. You have to have a junior nurse with a senior nurse with a team lead.
Muhammad Mamdani (34:59.574)
You can't have the same person working with the same team in the same zone over the past 40 hours. There are all these rules. And it takes us two to four hours every day to develop these assignments. And when we actually look at our error rate, or repeat rate, where we violate these rules, it's over 20% of the time. So we created a solution, with a very nice interface actually.

To your point. When a new nurse comes on, when you hire them for the first time, they get inputted into the system, and then you're done. It just kind of, quote unquote, watches where everyone works and who they work with. And what it does is it then says: okay, based on all your rules, I'm going to put out your schedule for the next, let's say, four days. And if somebody calls in sick, you can just pull out that person, mark them sick, click a button, and it will redo the assignment.

We deployed this several years ago, and within short order we saw the time spent go from two to four hours every day to under 15 minutes, and the repeat rate, or error rate, go from, I think it was about 21%, to under 5%. It's basically saving time for people, so we can spend more time working with patients rather than with spreadsheets and paper.
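The deployed tool automates the assignment itself; as a minimal sketch, here is how the two rules described above might be checked in Python. The data shapes and names are hypothetical, not the actual system's.

```python
def rule_violations(assignment: dict, seniority: dict, recent: set) -> list:
    """Check a proposed shift against the rules described above.

    assignment: zone -> list of nurse names for this shift
    seniority:  nurse -> "junior" | "senior" | "team_lead"
    recent:     set of (nurse, zone, frozenset_of_teammates) tuples from
                shifts inside the look-back window (e.g. the past 40 hours)
    """
    problems = []
    for zone, nurses in assignment.items():
        levels = {seniority[n] for n in nurses}
        # Rule 1: every zone needs a junior/senior mix plus a team lead.
        if "team_lead" not in levels or not {"junior", "senior"} <= levels:
            problems.append(f"{zone}: needs a junior, a senior, and a team lead")
        # Rule 2: nobody repeats the same team in the same zone in the window.
        for n in nurses:
            if (n, zone, frozenset(nurses) - {n}) in recent:
                problems.append(f"{zone}: {n} repeats the same team here")
    return problems
```

An automated assigner would then search over candidate assignments until this list comes back empty, which is the part that replaces the two to four hours of manual work.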
Reuben (36:18.544)
Yeah, and that's exactly the type of thing. Like you said, it's not necessarily exciting, but it improves the efficiency of the system, gives people time back, and now they're using that time more effectively.
Muhammad Mamdani (36:34.638)
Completely. Another example, if you like, one we've actually published, is looking at hypoglycemia rates. Many of our patients have diabetes, and patients who are diabetic, especially those in the hospital, can see sudden drops in their blood sugar or blood glucose levels. And oftentimes that results in brain issues like confusion.
Muhammad Mamdani (37:02.71)
You can end up in a diabetic coma if it's really serious. So these things can have fairly serious complications for our patients, and it's just very unpleasant to have a sudden drop in blood glucose levels and be disoriented. So how often does this happen? It happens fairly often. When we asked our clinicians what a good metric would be, they said, you know, we don't want any period of time where more than 5% of patients are experiencing hypoglycemic events.

And we said, all right, let's say we look back at the past six months: in what percentage of weeks do you think more than 5% of your patients were having hypoglycemic events? And they said, probably about 10% of the time. And we said, okay, that's a lot. So we have an algorithm that generates a list every day, and it gives our nurse practitioners a list of patients who are at highest risk of experiencing a hypoglycemic event.

So now the nurses know: okay, if I have limited time, I'm gonna tackle these patients, check their insulin, educate them a bit, make sure that they're not on any interacting medications or subjected to things that will affect their blood sugar levels. So we deployed this, and again, it generates a list every day, and the statistic is that it went from 9.1% of weeks where more than 5% of patients were experiencing a hypoglycemic event, to zero.
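For concreteness, the weekly metric quoted here, the fraction of weeks in which more than 5% of patients had an event, might be computed like this. The table layout and column names are hypothetical.

```python
import pandas as pd

def pct_weeks_over_threshold(events: pd.DataFrame, threshold: float = 0.05) -> float:
    """events: one row per patient-week, with columns
    'week' (week label) and 'had_event' (bool)."""
    weekly_rate = events.groupby("week")["had_event"].mean()  # event rate per week
    return (weekly_rate > threshold).mean()  # fraction of weeks over threshold

# The quoted result: this statistic fell from about 9.1% of weeks to zero
# after the daily risk list was deployed.
```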
Reuben (38:32.169)
Wow.
Muhammad Mamdani (38:33.01)
Which is great for patients. Now, it's not to say that nobody has hypoglycemic events; they do. But that benchmark of more than 5% has been reduced significantly, and we've seen the variance shrink considerably in terms of patients affected by hypoglycemic events. And there's many more, but I can stop there and give more if you like.
Reuben (38:55.232)
Well, those are great examples. And one thing I did want to touch on: I know you're hosting an AI in medicine conference at T-CAIREM as well, October 12th to 13th in Toronto. Maybe you could talk a little bit about that and, you know, the types of speakers and subjects that will be covered.
Muhammad Mamdani (39:05.656)
Yes.
Muhammad Mamdani (39:22.782)
Yeah, it’s called Ideas to Impact. And that’s, again, where we’re very focused on, all right, how do we actually develop AI solutions that not only are going to be, of course, academically rigorous, but that can be deployed into practice where we can see tangible benefits for our patients? So that’s the whole theme around translation, but also students and education. That’s really big for us as well.
Muhammad Mamdani (39:51.106)
So the T-CAIREM conference, it's the first one we're having. It's October 12th to 13th, as you mentioned, at the Toronto InterContinental Hotel. And we have speakers from across the country, and internationally as well. Our keynote speaker is Dr. Leo Celli, who's at MIT and Harvard; he's gonna be flown in here to give the keynote on October 12th. We have all sorts of events, like a panel discussion around:
what are the big things that are affecting AI, and how will they change the care that we give to our patients? Things like generative AI, like ChatGPT, how's that changing the landscape of what we do? Risk prediction models, where we have Dr. Mulvermost and some incredible work in the Toronto community around deploying AI models. How is AI driving automation? That's the other thing that we really wanna talk about as well. But then we're having a dedicated session on:

what have we learned? Can we go through some trials and tribulations of deploying AI? Tell us your war wounds and your scars, how things have not worked, or maybe how things have worked, and celebrate your successes. We want to hear stories where we've really done things with AI. What were the ethical considerations? Did you look at all sorts of bias considerations in your algorithm? Practical things for when we actually deploy. The next thing that we really want to focus on is regulating AI in healthcare. How is AI going to be regulated?
Muhammad Mamdani (41:18.95)
What are the struggles that we're having now with Health Canada and the FDA, saying, all right, you know what, if we actually want these solutions to be commercially and widely available, how do we manage them to make sure that they're not harming patients? And then, of course, lots and lots of student engagement. Our day two is largely dedicated to our learners and to people who are bringing in cutting-edge solutions and innovative thinking in this space.
Reuben (41:47.58)
Excellent. OK.
Muhammad Mamdani (41:48.646)
Oh, I'm sorry, there's one other thing: there's a shark tank as well, on day two, where we engage the startup community and say, pitch us your ideas. And there's a monetary prize available to the winner to help them advance their mission and their startup.
Reuben (42:07.3)
Excellent. Well, it sounds great. And thank you so much for joining me on the podcast today, Muhammad. It was a pleasure speaking with you.
Muhammad Mamdani (42:16.462)
Fantastic, thank you. It’s wonderful to be here and really appreciate all the work that you do.
Reuben (42:21.916)
All right, and thanks to everyone who's listening to the Moving Digital Health podcast. If you enjoyed the conversation, please go to movingdigitalhealth.com to subscribe to the MindSea newsletter and be notified about future episodes. We'll also have the link to the T-CAIREM conference there as well.