In this episode of Moving Digital Health, MindSea CEO Reuben Hall talks with Dr. Raza Abidi, Professor of Computer Science and Professor of Medicine at Dalhousie University.
With decades of expertise working at the intersection of healthcare and knowledge management, Dr. Abidi has been applying his understanding of artificial intelligence (AI) and machine learning (ML) to healthcare projects for over 25 years.
Dr. Abidi locates the recent evolution of digital health within the necessity of making patient data available at the point of care. According to Dr. Abidi, the next step is to determine what can be done with the information we’ve amassed. He explains why other countries and regions may be ahead of North America on this front, addresses the remaining challenges that must be faced, and outlines the potential payoff for tackling these challenges.
Dr. Abidi shares some exciting AI-related projects currently in progress at Dalhousie University, including how data-based ML can provide decision support around both predictive modeling and patient stratification for risk assessment. Other applications range from early detection of disease to prevention of precious resource waste, from individual patient care to population-wide health projects.
Despite AI’s vast potential to improve healthcare, many of its tools—at least in their current states—are not inherently appealing to physicians. Dr. Abidi and Reuben dig into the challenges of working with black box models and discuss how explainable AI could improve physician trust. Dr. Abidi proposes a few additional uses of such devices, and shares some of the developments he’s most eagerly anticipating in the near future. Despite Dr. Abidi’s vast expertise in AI, machine learning, and health informatics, he presents these complex topics in an engaging and accessible way that leaves listeners better informed about the current and future state of digital healthcare. We thank Dr. Abidi for joining us to share his experience and insight, and we hope you’ll enjoy this conversation as much as we did.
Subscribe wherever you get your podcasts and join the MindSea newsletter to be notified about future episodes!
Welcome to the MindSea Podcast series Moving Digital Health. Our guest today is Dr. Raza Abidi, Professor of Computer Science, Professor of Medicine, and Director of the NICHE Research Group at Dalhousie University. Thank you for joining us today, Dr. Abidi. Could you give us a brief overview of your background?
Dr. Raza Abidi (00:00.25)
Sure. First of all, thanks a lot for the opportunity to speak at this particular forum. My background is pretty straightforward: I'm a computer scientist by training, so my Ph.D. was in computer science, specifically in neural networks. And then over the years I migrated into health. It has been a journey of close to 20-plus years in which I have been applying my machine learning and AI knowledge to applications that help healthcare.
And what drew you into working with health specifically?
Dr. Raza Abidi (00:01.10)
Right. So that’s an interesting story. I was in Malaysia from ’95 to 2001, working as a university professor. And round about, I would say, ’96 or ’97, the Malaysian government had a mega project called the Multimedia Super Corridor, in which they were developing mega applications for different disciplines such as health, education, manufacturing and so forth.
So my dean was selected to lead the health portfolio, and we were in computer science, so we had no clue what the health needs were or what would be required. So he asked me and said, well, you work in AI and you do a lot of decision support, which I see applied in health.
So why don’t you come and join me? So I joined him in developing the Malaysian telemedicine blueprint; there were four authors and I was one of them. And working with the Ministry of Health and a number of different clinicians, I got an understanding of what health informatics was. So I’m not formally trained in health informatics, but over the years I gathered what the health informatics needs are and what the theoretical background is.
So that’s how I entered into health and health informatics. And that journey still continues.
I see. So you’ve been a researcher or a research professor for over 25 years, so you’ve seen the evolution of digital health for quite some time now. What stands out to you as, you know, the impact or the large trends over that duration?
Dr. Raza Abidi (00:03.07)
So that’s an interesting question, in terms of trying to understand the evolution of digital health. At the onset of digital health, there was a need to collect all the patient information and then make it available at the point of care. And for a very long time, and even to this day, most of the digital health focus has been on making the patient information available as it is collected from different sources or different databases, such as primary care.
Then you look at the drug information systems, then you look at the labs and so forth. So how do we bring it all into one medical record, the electronic medical record? That has been the primary driver of digital health for a very, very long time. Now, a lot of that has been achieved, so the impact we are seeing now comes from looking at what we can do with this information.
What are the services that can be built on this foundational element, what we call the data layer? So I guess now the exciting services are coming out, and we are, to a very large extent, done with the data collection, data sharing and data accumulation work that has been going on.
Now, there are still challenges. There are a number of integration challenges: challenges within systems, within institutions. And then if you look at what Canada Health Infoway was trying to do a number of years ago: how can data be shared across different jurisdictions? That, again, is a challenge. There are some solutions.
But I think, from a mindset standpoint, we have moved from that data-intensive digital health to a more knowledge- and services-oriented digital health.
Yeah, that’s a really good point. Even though we’re still not there in terms of, you know, collecting all the data and having all the different care centers communicate with each other, we’ve still evolved to be able to make use of that data in really amazing ways, specifically with AI and machine learning, which is some of your focus.
Can you tell us about your project using AI and machine learning for decision support?
Dr. Raza Abidi (00:05.56)
So we have a number of projects, some completed and some ongoing, which are using artificial intelligence to a very large extent. And I just want to qualify over here: when I say the term artificial intelligence, I mean both sides of artificial intelligence, which are the knowledge-based systems and the data-based systems. So when we talk about machine learning, it is largely the data-centric or data-driven systems that we are talking about.
So in that particular space, the decision support that is largely being pursued is around two different, I would say, problems. Number one is predictive modeling, and number two is patient stratification for risk assessment. As an example of our projects in this particular space, in one we are looking at ICU (intensive care unit) patients who are actually admitted, and at the prognosis of these patients over a period of time.
Typically, when patients are admitted to an ICU, there is a risk score that is calculated at the time of admission. But that risk score basically speaks to the survivability of the patient, as opposed to what the condition will be after a temporal period of 12 to 24 hours. So that score doesn’t help the physicians determine what the care plan should be, or what resources they should be anticipating or proactively arranging for this particular patient.
So we are looking at machine learning algorithms in order to develop predictive models that will determine the condition of the patient at 12 hours, 24 hours, 72 hours, and then six hours before discharge. The point over here is not to just look at mortality, but at the progression of this patient over time, so that the physicians can determine the treatment plan accordingly.
So that is one project that we are doing with decision support. Another project, which I think is extremely interesting, and it is something that we don’t often think about, is around blood transfusion. If we think about it, blood as a product cannot be purchased on the market; a donor has to provide it.
So blood is extremely precious and it has to be maintained. In blood transfusion, typically what happens is that blood needs to be transferred from one center to another, depending on the needs: there is a request, and the blood is transferred over there. Secondly, there is a blood bank which is storing these blood units, and there needs to be an inventory count of how much blood they have of each type and also what their expiries are.
So we are using artificial intelligence in this particular case to detect whether a blood unit is potentially heading for what we call a discard. And if we can figure out that this blood unit is heading for a discard, then it is quite possible that the blood transfusion staff can pick up that blood unit, use it in a much more appropriate way, and save that particular unit.
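To make the discard-prediction idea concrete, here is a minimal sketch of a state-transition (Markov-style) predictor. Everything in it is invented for illustration: the state names, the transition probabilities, and the `discard_probability` helper. A real system would estimate the transition matrix from historical inventory logs.

```python
# Illustrative only: states and transition probabilities are made up,
# not taken from any real blood-bank data.
STATES = ["in_stock", "transferred", "near_expiry", "transfused", "discarded"]

# P[i][j] = probability a unit in state i moves to state j by the next
# inventory check; the last two states are absorbing.
P = [
    [0.70, 0.10, 0.15, 0.05, 0.00],  # in_stock
    [0.60, 0.05, 0.20, 0.15, 0.00],  # transferred
    [0.00, 0.05, 0.45, 0.30, 0.20],  # near_expiry
    [0.00, 0.00, 0.00, 1.00, 0.00],  # transfused
    [0.00, 0.00, 0.00, 0.00, 1.00],  # discarded
]

def step(v):
    """One Markov step: propagate the state distribution v through P."""
    n = len(v)
    return [sum(v[i] * P[i][j] for i in range(n)) for j in range(n)]

def discard_probability(state, steps):
    """Probability a unit now in `state` is discarded within `steps` checks."""
    v = [0.0] * len(STATES)
    v[STATES.index(state)] = 1.0
    for _ in range(steps):
        v = step(v)
    return v[STATES.index("discarded")]

# A unit assessed three inventory checks ahead of a potential discard:
risk = discard_probability("near_expiry", 3)
```

Flagging units whose discard probability crosses a threshold a few steps ahead is what gives the staff time to reroute a unit before it expires.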
So we are able to actually predict that a blood unit is heading for a discard at least three steps before it actually goes to wastage. That is another distinctive approach, where we use Markov models to predict wastage ahead of time. So that is one set of projects. Then we are looking at a completely different one, where face recognition is being used for detecting a disease called acromegaly.
So acromegaly is a hormone-related disease which basically affects the shape of facial features. But this distortion of the features happens over a long time, so people don’t necessarily notice it; it’s not a day-to-day thing, it happens over years. And sometimes people may think it’s just because of aging or some other factor, so they don’t really catch it.
But by the time you catch acromegaly, it is rather too late. So what we are doing is using facial recognition technology, supported by machine learning. We have developed an app where individuals take a picture of their face, a dedicated picture, and it goes through a facial recognition exercise in which we determine whether this individual’s current picture has any distortions from their last year’s picture, or whatever the previous one was.
And in that way, we are able to determine whether there are any signs of acromegaly or not. So this is going to be a long-term process: individuals taking their pictures maybe every six months or every year, and we are able to compare them with the cohort of pictures we have, using machine learning algorithms, and see whether there is a discernible distortion in the facial features.
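In its simplest possible form, the year-over-year comparison could look like the sketch below. The landmark ratios, the threshold, and the `flag_distortion` helper are hypothetical; a production system would use proper face-landmark models and age-adjusted baselines.

```python
import math

def landmark_vector_distance(baseline, current):
    """Euclidean distance between two normalized facial-landmark vectors."""
    return math.sqrt(sum((b - c) ** 2 for b, c in zip(baseline, current)))

def flag_distortion(baseline, current, expected_drift=0.02):
    """Flag when the change exceeds an assumed per-interval drift allowance.
    A prompt to see a physician, not a diagnosis."""
    return landmark_vector_distance(baseline, current) > expected_drift

# Hypothetical normalized ratios (jaw width / face height, brow spacing, ...):
last_year = [0.52, 0.31, 0.18]
this_year = [0.55, 0.34, 0.21]  # small but consistent drift across features
```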
Yeah, I see that as something that even a device manufacturer like Apple or Samsung could build into their devices, because, you know, over the years you’re taking so many pictures of not only yourself but of family members.
And it could say, just like it does with the Apple Watch: hey, we’re detecting a possible risk here; you should go talk to your physician. So it’s not diagnosing, but it’s suggesting that there might be a problem and that you should talk to a doctor to investigate.
Dr. Raza Abidi (00:12.40)
Exactly. I think it’s basically flagging that there is a potential onset of the disease. And the key question here is what is considered distortion.
So that’s where our algorithms come in. Aging is taken into account, right? Some predisposing conditions of the individual are taken into account, and then we figure out whether there is some discernible distortion in somebody’s facial features. So there are many things like this. For example, we have a decision support system that is looking into how groundwater, or rather well water, which in Nova Scotia is typically used a lot in rural communities, is affected by environmental factors, especially arsenic. And arsenic is a carcinogen, which leads to a number of different cancers, specifically prostate cancer, breast cancer and so forth.
So we are looking at the disposition of arsenic in well water leading to cancers, and the way we are doing it is very unique. There is a project called Atlantic PATH, part of a pan-Canadian project, which follows up with participants over at least 20 to 30 years as they move through their lives, monitoring different things.
So we are taking toenails as the biomarker, because toenails actually grow slowly and they cannot easily be contaminated. If you take fingernails, they can be contaminated with soil, maybe nail polish and so forth, so you lose the integrity of the biomarker for arsenic there. So we look at toenails; they are then analyzed through a series of toxicological checks and devices.
And then the data comes to us, and we are using machine learning to, first of all, detect whether this person has a propensity towards cancer. And then the next thing we are looking at is what kind of cancer it would be.
So there are three kinds of cancers that we are looking at right now. We are looking at prostate cancer, breast cancer and melanoma.
And is there an existing connection between arsenic and those cancers already?
Dr. Raza Abidi (00:15.35)
Yes, a very profound connection has already been noted in the studies. And what these studies show is that if the concentration of arsenic is at a certain amount, then it leads to cancers of these types. So instead of going after individuals, we can figure it out at an environmental level, because that water would be used by the whole population.
Yes. And so what are the actions we can take? I guess, how can we remove the arsenic from the water?
Dr. Raza Abidi (00:16.13)
So this is actually a population health project. Once it has been detected that in this community, or in this geographical region, the concentration of arsenic is at a level that is of concern, and these are the types of cancers we are noting, then population health or public health advisories, educational programs, and even risk assessment tools can be developed so that people can actually monitor the level of arsenic and know whether it is at a level where they should be concerned about the onset of cancers.
Yeah. All of these projects sound amazing, because you just think of how much of an impact they can have on a population once these connections are identified and we can take action to solve the problem. And looking at the blood management solution, like you said, blood is so precious that even a small improvement in the efficiency of utilization of the existing blood can have a big impact for the people that need it.
Dr. Raza Abidi (00:17.41)
Exactly. And I think these are all very impactful, and I would say directly connected with healthcare system efficiency. I think that’s what the target is: how can we actually use decision support to impact the operationalization of healthcare services?
And when you talk about the system in the ICU to predict outcomes there, what are the big data points that are feeding…
Dr. Raza Abidi (00:18.14)
Oh, that’s a very good question, because I didn’t allude to it, but this is a very interesting exercise that we are doing. We are looking at three different data sources which are then integrated into one data point, or what we call a vector, which is going to be used to train the models. So first of all, of course, there is data that is coming from critical care, like all of the monitors that are in an ICU.
Of course, we are sampling it at a rate that is feasible. So that is one thing. Then, ICU patients are being investigated for various blood markers on a very routine basis, so the second kind of data we are getting is pathology data. And the third is radiological imaging. So we would be combining the radiological imaging, the pathology data (largely it is bloodwork), and the critical care data into one data point, and that would be used to train the models.
So we are not just looking at the critical care data or the pathology data, which is typically how some people would do it. This is a much more integrated exercise for decision support that we are undertaking.
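As a rough sketch of that integration step, the three per-patient sources might be merged into a single ordered vector like this. All field names here are hypothetical placeholders, not the project's actual schema.

```python
# All field names are hypothetical placeholders, not a real schema.
FEATURE_ORDER = [
    "heart_rate", "mean_arterial_pressure", "spo2",   # critical-care monitors
    "hemoglobin", "lactate", "creatinine",            # pathology / bloodwork
    "lung_opacity_score", "effusion_flag",            # radiology-derived
]

def build_vector(monitors, pathology, imaging):
    """Merge the three per-patient sources into one ordered feature vector,
    leaving None where a source did not report a value."""
    merged = {**monitors, **pathology, **imaging}
    return [merged.get(name) for name in FEATURE_ORDER]

vec = build_vector(
    {"heart_rate": 92, "mean_arterial_pressure": 74, "spo2": 0.96},
    {"hemoglobin": 10.1, "lactate": 2.4, "creatinine": 1.3},
    {"lung_opacity_score": 0.35, "effusion_flag": 0},
)
```

Fixing the feature order up front is what lets vectors from different patients line up column-for-column when training the models.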
And at the end, I guess the goal is to help physicians, you know, better prioritize patients as their needs change.
Dr. Raza Abidi (00:19.54)
Exactly. What we really want is to proactively determine the course of the condition or the disease for each patient. We are predicting 12 hours ahead, so it gives enough time for the ICU physicians to determine what the optimal treatment plan would be. Let’s suppose that after 12 hours it is indicated that this person would need intubation, or this person would need a surgical procedure.
Then, instead of making a decision and then rushing around to figure out whether these resources are available, we would be able to actually have these resources available at the time of need, because we would have some lead time to prepare for any surgical procedure or intubation or any such activity that is resource intensive.
So with all of these projects, obviously there are multiple stakeholders involved, from the health authorities and clinicians to government. Do you find it’s difficult to, you know, get everyone on board or on the same page to move these types of projects forward?
Dr. Raza Abidi (00:21.18)
It is challenging to begin with. But I think if all of us have the same objectives, then people come together quite quickly. If the project is initiated by our colleagues in medicine, the physicians, they have obviously noted a particular need or they are looking for a solution; of course, they are not as technically savvy to figure out what that solution is.
So that’s why we would be working in collaboration with them, and we would be saying: okay, if you can explain what the problem is, then we will help you develop a solution. They would have to find the resources; typically the resources for a machine learning kind of project would be the data: what data is needed and how can we actually get it. So they would help us in that way. For another kind of decision support, which uses knowledge-based technologies such as clinical guidelines or clinical workflows...
...then again, we would be asking them: can you please interpret certain terms and the logical and conceptual components of these guidelines? So there are challenges in the beginning, where we need to understand each other’s way of working. But I think if there is a common objective, the teams gel very, very quickly. Now, there is a requirement over here: it is important for us to maintain, I would say, boundaries around each other’s work.
The technology component largely relates to the model building and how we would be doing it. That remains our domain. We then give it to them for validation, they validate it and give it back, so it goes back and forth. We won’t tell them how to do their thing, and they won’t be telling us how to do ours, as long as we are doing things in the right way. Secondly, terminology, the lexicons, is very, very important. Do we understand the clinical variables that they’re giving us? What do they mean? What do their measurements mean? What are their ranges? We would understand that. And likewise, the clinicians also make a significant effort in understanding how we explain the workings of the models.
So if we are presenting an outcome in terms of a certain diagram, we would be expecting them to understand how this diagram actually works, and they make an effort to understand it. If we are showing results which have certain metrics of performance, then they would understand: okay, this is a better result, this is a better model compared to the other one.
So I think it goes as a learning exercise as well. But like I said, as we are all working towards a common goal, we get together very nicely.
So you recently hosted the Artificial Intelligence Conference, and I went to some of the sessions. One of the topics that I know you’re working on as well is explainable AI, and I think that connects to what you’re talking about: making sure that clinicians can understand the tools that are being built. Maybe you could talk about explainable AI?
Dr. Raza Abidi (00:25.11)
So I’ll rewind a little bit over here. Before machine learning became so popular, decision support systems and clinical decision support systems were being developed using knowledge-driven technologies. These would be expert systems: they would have rules, or they would be using an ontology as a knowledge model. Now, these knowledge-based systems are intrinsically explainable, because they use natural language in the description of the logic and the rules.
So when an outcome is determined by such a decision support system, it’s pretty straightforward to ask it: okay, what was the reason for this particular outcome? And the knowledge-based system, through what we call justification trees or reasoning trees, would very simply say: rule number one was used, rule number five was used, and rule number ten was used. You can go and read rule number one, rule number five and rule number ten.
Oh yeah, that makes perfect sense; these are the rules that led to this particular outcome. So there was no issue around interpretation of the outcome. Now, the machine learning models we started using in the early days are basically black box models: you push the data in at the input and you get an output, which means something happened in between.
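A toy version of such a rule-based system, with the fired-rule trace that makes it explainable, might look like the sketch below. The rule IDs, thresholds, and conclusions are invented and carry no clinical meaning.

```python
# Toy knowledge-based sketch; rule IDs, thresholds and conclusions are
# invented and carry no clinical meaning.
RULES = [
    ("R1", lambda f: f["glucose"] > 11.0, "hyperglycemia"),
    ("R5", lambda f: f["systolic_bp"] > 140, "hypertension"),
    ("R10", lambda f: {"hyperglycemia", "hypertension"} <= f["findings"],
     "refer_for_review"),
]

def infer(facts):
    """Apply each rule in order; return the conclusions plus the fired-rule
    trace (the 'justification tree') that makes the outcome explainable."""
    facts = dict(facts, findings=set())
    trace = []
    for rule_id, condition, conclusion in RULES:
        if condition(facts):
            facts["findings"].add(conclusion)
            trace.append(rule_id)
    return facts["findings"], trace

findings, trace = infer({"glucose": 12.5, "systolic_bp": 150})
```

Here the trace ["R1", "R5", "R10"] is exactly the kind of answer a clinician can go and read rule by rule.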
What that something is, is a point of contention, especially in medicine, where decisions are being made which affect lives. Clinicians would always say: I think I can agree with the outcome, but I would really want to know the reasons behind this outcome. And machine learning could not give a suitable answer.
It would simply be said that the model learned from the data and it is giving you a certain outcome. So that is where explainable AI actually came in: you are basically trying to open this black box and decipher the logic that a model actually uses in order to derive a particular outcome. Now, there are a number of techniques being developed.
The most simplistic technique is what we call feature importance. What the explainable AI algorithm would do is that, for a certain outcome it has generated, it would turn around and tell the user: these were the most important attributes from your data. So if your data had maybe 50 different variables or attributes in it, it would say the most important one was X, the next one was Y.
So the physicians can actually get a sense of: okay, these were the critical attributes that were used in the outcome. And if those are the expected critical ones, they would have a degree of confidence. But if they see something funny in those attributes (why are you using this in the decision?), then they would have a suspicion about the output.
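One common way to compute this kind of feature importance is permutation importance: shuffle one feature's column and measure how much the model's accuracy drops. Below is a minimal sketch with a deliberately trivial stand-in model and made-up data; real pipelines would apply the same idea to a trained clinical model.

```python
import random

# Deliberately trivial stand-in "model": predicts 1 when feature 0 is high.
def model_predict(row):
    return 1 if row[0] > 0.5 else 0

def accuracy(rows, labels):
    return sum(model_predict(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(rows, labels, feature_idx, seed=0):
    """Drop in accuracy after shuffling one feature's column; a large drop
    means the model leaned on that feature for its outcomes."""
    rng = random.Random(seed)
    col = [r[feature_idx] for r in rows]
    rng.shuffle(col)
    shuffled = [r[:feature_idx] + [v] + r[feature_idx + 1:]
                for r, v in zip(rows, col)]
    return accuracy(rows, labels) - accuracy(shuffled, labels)

# Feature 0 drives the label; feature 1 is pure noise.
rows = [[0.9, 0.2], [0.8, 0.7], [0.1, 0.9], [0.2, 0.1], [0.7, 0.5], [0.3, 0.6]]
labels = [1, 1, 0, 0, 1, 0]
```

Shuffling the noise feature leaves accuracy untouched, which is exactly the signal that tells a physician the model was not leaning on it.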
So that is the most simplistic way of looking at explainable AI: feature sensitivity analysis, or feature importance. Now we are moving, and this is where our research comes in, beyond that. Because that approach basically tells us which features were used and which were supposedly important in the outcome, but it is not explaining the relationship between the features.
It’s also not explaining the underlying conceptual associations between those features and the outcome. Suppose the glucose level is high: why is it affecting the outcome? That "why" is not there. It is just saying that the glucose level was high; now you figure out yourself what it means if it is high.
But we are looking at explaining the why. In order to explain the why, you now need to have domain knowledge: how does this actually function as a biological system? So we are basically creating knowledge graphs, and these are specialized to domains. The one we are currently working on is largely in chronic kidney disease.
So we are developing a knowledge graph, a knowledge model, that encapsulates knowledge about what chronic kidney disease is: what the risk factors for it are, what the genetic information or genomics contributing to it is, what the comorbidities are, what treatments would be affected by it, what the reasons for certain conditions are.
So we capture all of that in the knowledge graph. And then, when we look at the model, we can explain the outcome using that knowledge in a very narrative form: because this value was very high, and this has an association with, or this concept causes or exacerbates, the condition, that is why you are seeing this particular outcome. We can now get a narrative text of why a particular outcome is being given, in a language that is related to the problem domain, which in this case is chronic kidney disease.
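A miniature version of that idea: store domain facts as subject-relation-object triples, then walk from a flagged feature to the target condition and render the path as narrative text. The triples below are illustrative simplifications, not real nephrology knowledge.

```python
# Illustrative triples only; simplifications, not real nephrology knowledge.
TRIPLES = [
    ("high_glucose", "indicates", "poor_glycemic_control"),
    ("poor_glycemic_control", "exacerbates", "chronic_kidney_disease"),
    ("high_creatinine", "indicates", "reduced_kidney_function"),
    ("reduced_kidney_function", "exacerbates", "chronic_kidney_disease"),
]

def explain(feature, target="chronic_kidney_disease"):
    """Walk indicates/exacerbates edges from a flagged feature to the target
    condition and render the path as a narrative sentence."""
    graph = {}
    for subject, relation, obj in TRIPLES:
        graph.setdefault(subject, []).append((relation, obj))
    path, node = [], feature
    while node != target:
        edges = graph.get(node)
        if not edges:
            return None  # no route from this feature to the condition
        relation, node = edges[0]
        path.append(relation.replace("_", " ") + " " + node.replace("_", " "))
    return feature.replace("_", " ") + " " + ", which ".join(path) + "."

text = explain("high_glucose")
```

The output reads as a sentence in the problem domain's own vocabulary, which is the whole point of pairing the model with a knowledge graph.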
And of course, this helps to build trust.
Dr. Raza Abidi (00:31.29)
Exactly, it is all about trust. If the clinicians see that our explanation is close enough to what they have been trained on and what they have experienced, then they would take it. Otherwise, they would always have a suspicious eye towards the outcome. That is the entire reason for explainable AI: to build trust.
And once we build that trust, the uptake of that decision model would actually be high. Otherwise it would be used on and off, and they would always go and get a second opinion, right?
Yes. Of course, if it’s spitting out a recommendation and the specialist is just ignoring it because they are thinking, well, last time it didn’t make any sense, so I’m just going to trust my gut because I’m the expert and that black box machine doesn’t know what it’s doing... then once that trust is lost, it’s not helping patients, not helping anything.
Dr. Raza Abidi (00:32.42)
And you see, once that trust is lost, it has an effect that we don’t really understand. Machine learning models are basically pattern recognition models: they recognize inherent patterns in the data and use that information to regress towards an outcome. Now, it is quite likely that the outcome they are predicting is for a rare condition, and they have picked it up.
The model has picked it up, but the explanation was not sufficiently rich or explicit enough to convince the physician that this is viable and should be treated as a legitimate outcome. Because it was rare, the physician also does not have enough exposure to it, so the physician might consider it an incorrect outcome and let it go.
And that is a missed opportunity to pick up something of a rare nature, for which the physician did not have enough exposure, but which the model picked up. If my explanation was a good one, the physician could put two and two together and say: oh, that’s an interesting finding, I didn’t think of it this way, but the explanation actually makes a lot of sense, right? So let me consider this line of reasoning as well.
And then the usefulness of the AI can depend on the clinician that’s using it as well. If the AI is giving a response and that lines up for a specialist that has many years of experience in their field, they can say: okay, yeah, that makes sense. If the AI is giving a recommendation to, let’s say, a fairly junior clinician in a remote place where they don’t have access to that specialist, then it could possibly have a much bigger impact in those types of situations.
Dr. Raza Abidi (00:34.59)
Exactly. And even if you don’t want to use the decision support system for clinical use, to take your example of working with junior clinicians, or clinicians who don’t have access to senior, more experienced colleagues, such a system (and we have also been asked about this and might do a project on it) can be used as an educational device.
You basically have trained the model using real data, but then we would use quite a number of simulated patient cases. Those simulated patient cases can actually be derived by experts, and then you use the decision support model to run on those simulated cases, which may cover a wide variety of patient scenarios. And the outcomes could actually be a learning point for the junior physicians: this was the clinical scenario...
...and this is the outcome. So now let us look at it: what are the reasons for this outcome? The explanation that we are providing then becomes educational content. There’s an opportunity for them to learn certain clinical scenarios that they may not necessarily have been exposed to, because they may not have been given the opportunity to comment when there were more senior people in that institution. So now they have the opportunity to use this as an educational device and see what would be a potential recommendation for a certain clinical condition.
Yeah. So as a learning tool, it can have a big impact as well.
Dr. Raza Abidi (00:36.57)
Exactly, it can be. So you don’t use it for clinical purposes, but as an educational tool, because the model has encapsulated the knowledge about the discipline, about that particular problem. So how do you actually explicate that knowledge? You start giving it more complex cases, unusual cases, rare cases, and see what the model actually comes out with, and you use that explanation as the learning content.
So one of the topics at the conference was the quality of the data as well. You know, the output of the AI is only as good as the data that is being put into it. Maybe you could talk a little bit about the process of taking a raw data source and preparing it for use with an algorithm.
Dr. Raza Abidi (00:37.53)
So data quality is definitely an issue. But prior to injecting data into the machine learning model, there is the whole process of data science and data preprocessing, which involves looking at missingness of the data, outliers in the data, and misinterpretations of the variables that are out there. So there is a process that can actually be incorporated prior to training the machine learning model.
Now, this does not account for data collection errors, if there are measurement errors in the device that took the measurement; say the machine that took your blood pressure has a problem. I was at a clinician’s office the other day, and the clinician was telling me that their blood pressure machine reads 15 points high on the systolic. So I said, why are you still using it? He said, yeah, I know it is 15 points high, so I do the subtraction and then I know what the real value is.
Now, if that device is being used for data collection, then obviously there is a problem. So measurement errors in the data coming from a device, or coming from an observation, are something that AI obviously cannot handle, nor can we overcome that particular problem.
The second thing is that sometimes we don’t realize that more data is not necessarily what you are looking for. For example, take a look at my ICU project: it has monitors connected to the patient, and they are giving real-time readings. Now, do I need real-time readings?
Probably not, right? Because the reading of my temperature doesn’t change in one second. So if I sample at one second, I am basically creating a lot of data which is not changing over time, and it’s simply adding more complexity to my data later on. A better sampling rate, maybe picking up that data every 10 minutes, would have been a better choice as opposed to collecting data at the one-second sampling rate.
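The sampling-rate point can be made concrete: collapsing a one-second stream into ten-minute averages keeps the information while shedding almost all of the volume. A minimal sketch, using an invented flat temperature signal rather than real monitor output:

```python
def downsample(samples, window):
    """Average consecutive windows of `window` samples into one value."""
    return [
        sum(samples[i:i + window]) / window
        for i in range(0, len(samples) - window + 1, window)
    ]

# 30 minutes of one-second temperature readings: 1800 points,
# barely changing, exactly the redundancy described above.
per_second = [36.8] * 1800

# One value per 10 minutes (600 seconds): 3 points instead of 1800.
per_ten_minutes = downsample(per_second, 600)
print(len(per_ten_minutes))  # 3
```

Averaging windows (rather than just keeping every 600th reading) also smooths out momentary sensor noise, which is often the better choice for slowly changing vitals.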
So sometimes we inadvertently misuse the data. Take, for example, the data coming from a survey. Now, is that a good data source? Probably not, because it is objectively weak; there are no measurement criteria over there. There’s a Likert scale: how are you feeling today? Well, if I woke up early, I would say I feel horrible. If I had a good night’s sleep, I would say I was feeling quite good.
So the objectivity is not there in those kinds of measurements. I think the problem largely is the measurements. Now, when we bring the data into our model, in the pre-processing stage we can look at the ranges of the data coming in, and if values are out of range they can be normalized to a range that can actually be assumed to be the right one. So there are correction mechanisms.
But we don’t want to do that. Secondly, clinicians don’t like imputation. In other areas we would impute the data: if there are missing values, we’ll take a look at the variables and their values, and we can impute that this could be a potential value.
Yeah, making some assumptions.
Dr. Raza Abidi (00:42.26)
Exactly right. And there are some good imputation algorithms that would do that, and they give you realistic values, but we have to report that this was not the real data, that it has imputed values in it. So the point is that I usually consider data collection as the weak link in the whole process: how was this data collected, as opposed to what the models or the algorithms are capable of doing.
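A sketch of the imputation-with-reporting idea described here: fill missing values with the variable’s mean, but keep a parallel flag list so the imputed entries can be disclosed. The data and function names are illustrative only, and real imputation algorithms are considerably more sophisticated:

```python
import statistics

def impute_mean(values):
    """Fill missing values (None) with the column mean, and return a
    parallel flag list so imputed entries can be reported downstream:
    users of the model must know these were not real measurements."""
    observed = [v for v in values if v is not None]
    fill = statistics.mean(observed)
    imputed = [v if v is not None else fill for v in values]
    flags = [v is None for v in values]
    return imputed, flags

heart_rate = [72.0, None, 80.0, 76.0, None]
filled, was_imputed = impute_mean(heart_rate)
print(filled)       # [72.0, 76.0, 80.0, 76.0, 76.0]
print(was_imputed)  # [False, True, False, False, True]
```

Carrying the flag list alongside the data is a simple way to honor the reporting obligation: any table, audit, or publication can show exactly which values were real and which were filled in.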
And this is where the bias comes in too.
Dr. Raza Abidi (00:43.04)
Exactly, all of those things come in over there. Now sometimes bias is there, right, but it is not introduced intentionally. I doubt any researcher or any clinician would introduce bias intentionally; there would be very few cases of that. But if the thought process is not there, then it would creep in, and if it creeps in, then you have to figure out whether it was intentional or whether it was inadvertently brought in.
And secondly, from a data standpoint, is this bias going to affect the performance of the model or not? That is a different question versus looking at it from an ethical standpoint, whether this bias is there in the data. So there are two separate things. From an ethical standpoint, if a certain population is not well represented, then there is a bias towards one population versus another.
But the question is: if I take that data and put it into a machine learning model, does the model feel that there is a bias over there, or does the model have the capacity to overcome the skewness in one population, such that it will still perform fine? So I think the bias comes at two different levels, which we need to understand: is it an ethical bias, or is it a bias that would affect the performance of the model?
Right, because with some data, let’s say there was a gender bias, you know, male versus female, and it was skewed towards one of them in the data. It doesn’t necessarily mean that the outcome of the algorithm is not going to apply to the other gender just because it was skewed; it might be just as good.
But we don’t know. It might also have a big effect depending on, you know, those types of things. So I think it’s a really good point that we need to filter for the impact of a biased dataset.
Dr. Raza Abidi (00:45.41)
Yeah, but you see, when data is being collected, if there is a population of 2000 patients and 40% of them is of one gender and 60% is of another gender, then there is a 20% gap between the two that cannot be overcome, because your population was 2000 patients. Now the question becomes: is this difference in the population size significant enough to influence the outcome?
And that’s a different question. But making it 50/50 sometimes is just not possible. We have a dataset in which there are certain conditions that are so rare that they would always be rare.
They would always have a very small proportion in the data. Right, now what can we do about it? That’s a completely different kind of modeling. But the point is that there was no intention of bias; it is just the nature of the problem we’re dealing with, that there are certain outcomes which are rarer than others.
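The rare-outcome imbalance described here is often handled at training time rather than at collection time. One common, deliberately naive, technique is random oversampling of the minority class; as the conversation notes, this addresses the model-performance side of imbalance, not any ethical bias in who was sampled. An illustrative sketch with invented data:

```python
import random

# Toy dataset of (feature, label) pairs; label 1 is the rare outcome.
data = [(i, 1) for i in range(5)] + [(i, 0) for i in range(95)]

def counts(rows):
    """Count examples per class label."""
    out = {}
    for _, y in rows:
        out[y] = out.get(y, 0) + 1
    return out

print(counts(data))  # {1: 5, 0: 95}

def oversample(rows, seed=0):
    """Duplicate random minority-class rows until classes are balanced."""
    by_count = counts(rows)
    minority = min(by_count, key=by_count.get)
    gap = max(by_count.values()) - by_count[minority]
    pool = [r for r in rows if r[1] == minority]
    rng = random.Random(seed)
    return rows + [rng.choice(pool) for _ in range(gap)]

balanced = oversample(data)
print(counts(balanced))  # {1: 95, 0: 95}
```

Duplicating rows is only one option; in practice people also downsample the majority class or use class weights, and for genuinely rare clinical outcomes none of these changes the fact that the underlying evidence base stays small.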
So you’ve worked on and implemented many real-life digital health solutions. Is there a mindset or kind of a philosophy or approach you take to these types of projects when you start? Like, is there a specific vision for what it’s going to be? Or do you start by saying, hey, here’s a problem, we’re going to solve it and see where it goes?
Dr. Raza Abidi (00:47.37)
I think there are two kinds of projects. One is problem solving, right? This is an interesting problem, you say, and if we can use our algorithms or our expertise to solve it, then it becomes a proof of concept; it basically is a demonstrator project. The other kind of project is: no, we need a real application.
This is an intervention that has to be implemented in the health care system. So I think the approach is completely different for both of them. Like the first one is a very exploratory approach in which we are simply saying this is a problem, let us explore different solutions and then we can compare what actually is the most efficient or the most optimal solution.
The second one is a much more directed approach, in which we would be taking really a software engineering methodology and saying that we need to develop an application. So these are the processes, these are the steps that we would have to undertake. Because this is an application, it has to have a very viable and trustworthy validation step.
Who is going to validate it? Is it a clinical trial that is going to validate it, or is it a consensus among experts? Then it follows an implementation science exercise: how would it be implemented within the clinical workflows of the institution, of that particular department, or of that set of physicians who are going to use it?
So I think it’s a very targeted, goal-oriented approach that is taken in those projects which are to be treated as a point-of-care or a point-of-need intervention.
And the goal is set upfront: this is what we are looking for, right? And then you basically set a methodology that would work to achieve that goal.
Okay. And what are some of the main barriers that you encounter with these projects? Do things ever just get stalled, or do you always find a way forward?
Dr. Raza Abidi (00:49.52)
Well, I think the initial onboarding is a barrier, where people do not understand what this particular methodology is, or the methods that are going to be used. And on our side, we are unable to understand what the physicians are trying to achieve by this particular project, or our understanding of the data is lacking.
So those are initial onboarding problems that we have. In the midst of the project, I think the main barriers are: do we have the right methodology for the evaluation of our outcomes? Because development is not the issue, but once we develop it, looking at its clinical efficacy, its clinical correctness, its clinical utility, its clinical usability: these are all evaluation points that need certain standards, certain metrics, and certain expertise in order for it to move forward towards a product that can be rolled out.
And I think the barrier is at that particular point. Do we have the experts that can validate it? Do we have the test data that can actually be used to validate it? Do we have the time and the resources to run it as a one-year pilot study or a six-month pilot study? How would we actually do it? Do we have the right metrics to evaluate it?
And even if we do all of that, can we establish enough reliability and trust in this application for it to be put into clinical use? So I think those questions towards the end start rearing their head, and how we navigate through them becomes the challenge.
And how about taking a solution that has been validated in one context, let’s say at one hospital or one clinic, and saying, okay, this works, we know it’s good, the scientific data is there. How do you roll that out and implement it at a larger scale?
Dr. Raza Abidi (00:52.16)
So that is a scalability question: whether the solution is scalable to another health environment or institution. At least your starting point is different over here, because the product, the decision support model, has already been developed. Now it is: how do we actually operationalize it into the clinical workflows of that institution?
So it becomes more a clinical implementation exercise, as opposed to model building or developing the project from scratch. And most of these applications, the ones that we develop, are agnostic to site and, I would say, clinical workflows. So they do have the ability that, if I’m moving to another clinical workflow, changing a set of parameters actually allows us to incorporate this application into that different clinical workflow.
So that is always a possibility. Now, in a different environment you’re working with different people; it requires a different set of training, a different set of change management, a different set of knowledge translation. But I think there is an ability, more or less, for moving your applications from one environment to another by developing applications that are agnostic to a specific ecosystem.
And I know that the research and work you’re doing at Dalhousie is very cutting edge. But are there other countries or institutions that you look to that you say, hey, they are really, you know, leading the technology here. They’re doing it right and we can learn from them.
Dr. Raza Abidi (00:54.20)
I think, and I’ll say this because I say this a lot in public as well, Europe is way ahead of us. Right? And when I say us, I mean North America, not just Canada. So in terms of digital health innovation and digital health adoption, Europe has always been ahead of us. Maybe it is because of the size of the countries or the resources that they were able to put into the healthcare system.
But there is a natural tendency for innovation and adoption too, and they have faced huge, complex problems first. So when we were busy over here in Canada figuring out what our electronic medical records should look like and how they would be connected with different departments within an institution, they had already sorted that out and were now building decision support systems on those electronic medical records; they were building risk assessment tools on them.
They were looking into complex problems around spatial-temporal analysis of the data; they were looking into comorbidities; they were looking into risk factor assessments. So I think they have always been ahead. One example is that this conference, which I just hosted, is traditionally a European conference, and I think this is the second time it was brought to North America: the first was in 2020, when it was brought to the US, and then I brought it to Canada.
So it clearly shows that there is a community in Europe that is solving hard problems in this area. And we do look at certain centers in Europe for very specialized digital health, or even AI and health, research.
And specifically, you know, are there universities that you’ve collaborated with on projects?
Dr. Raza Abidi (00:56.37)
Oh, yes, we collaborate with universities in Italy; we have collaborated with universities in Spain; there have been collaborations with universities in the UK. They are all doing good work. Each one of them has their own specialization, because each university has a group, just like I have a group, right? And we are focusing on certain areas. So yes, we do have these collaborations and sharing of information. Just today I was asked, can you help us solve a problem in knowledge graphs, because this is maybe stuck now. So we’ll look into it and then help them with that.
And where does it lead? I think we’ll have to figure it out. So I think there are collaborations across the different countries and between us and European partners, which leads to interesting solutions, because there are two sides of research: one is fundamental research and one is applied research. And it is the fundamental research that is supported by these collaborations, because that’s maybe where the hard questions are being asked.
Right. Okay. So, you know, we talked a little bit about the history of digital health and what you’re working on right now. Now, looking ahead into your crystal ball over the next couple of years, what are the innovations you’re excited about or hoping to see?
Dr. Raza Abidi (00:58.09)
So I don’t have that particular crystal ball; I’m just looking at where the trends are. I think one cannot discount the utility and applicability of artificial intelligence in health. It is coming from all sides, whether you’re looking at decision support models such as predictive analytics, or even looking at surgical robots. At the end of the day, behind it there is a strong component of artificial intelligence.
But what I really feel is that, the way things are moving, there is a shift towards applications that are in the hands of patients, giving them the resources and the ability to self-manage their condition. And this is largely in the realm of what we call e-therapeutics. Now, it doesn’t mean that all the data is going to them, but there are applications coming, and this is basically a continuation of the trend of moving health care from the tertiary center towards primary care, which was happening 20 years ago and is still ongoing.
But now it is moving from primary care even further down the road, to the principal party: either the patient or the individual themselves. So I think that is a strong trend that is coming up. It then brings into play an area that is growing very, very rapidly, and that is virtual care, remote monitoring of individuals. What we saw during the pandemic was regarded as virtual care, but it was basically telehealth.
Right, physicians were contacting their patients by phone or by Zoom and so forth. I’m not talking about that. I’m talking about using the lifestyle data that individuals may have, and doing remote monitoring of their functional status, remote monitoring of their cognitive abilities, looking at whether they are getting frail over a period of time or not.
Right. Frailty is another thing that is being looked into. So, lifetime monitoring of individuals to determine what health needs they may have in the future, what health education can be provided to them, how they can be supported to self-manage their condition while staying at home. And I think you have also seen this trend over here: rather than retirement homes, growing older at home is now becoming the trend.
Then you need some monitoring, some remote monitoring, that provides just-in-time information and just-in-time health therapeutics to help them. So I think that is a big area that is coming up. Genomics, too, and working towards precision medicine and drug discovery: I think that is now possible given the amount of data that we have collected and our ability to sift through it, and precision medicine is really coming on very, very strong.
And I think there are some very unique opportunities for mental health therapeutics now, and unique ways of figuring out whether a person has a need for mental health therapy or not. So it’s not just the didactic interviewing between the patient and a psychiatrist. It is not that anymore, but rather looking at how they are functioning, how their mood is, how they are interacting in the community, their social interactions and so forth.
It is now about developing a picture, a mental health picture of an individual, that can then be used as a precursor to whether the person needs mental health therapy or not, and what kind of therapy: whether the person is depressed, whether there’s a sense of anxiety in this individual. And I think those nontraditional ways of figuring it out are the most exciting element in terms of mental health.
So I think these are some of the things that are happening. I don’t see that there is a lot of innovation or energy going into traditional digital health, which was around electronic medical records or care coordination over fax and things like that. I think we have enough of that; we can move forward into services that are reaching out to individuals.
Yeah, at MindSea, we’re very excited about the potential of digital therapeutics as well, and building those tools that empower patients to take better care of themselves in between their visits with the doctor, because it’s about that kind of, you know, 24/7 care: all the actions they can take to better their health in between the consultations.
I really appreciate you taking the time. You have a lot of wisdom to share, and I’ve certainly learned a lot in this conversation. Thank you so much for joining us today, Dr. Abidi. And thanks to everyone listening as well. If you enjoyed this conversation, please subscribe to the MindSea newsletter to be notified of future episodes.