Topics Covered in Episode 41 of Moving Digital Health (Will Falk of Rotman School of Management):

  • Why AI adoption is outpacing healthcare policy (01:35)
  • Generative AI is following a new adoption curve in healthcare (02:36)
  • Consumer AI health tools are going direct to patients (04:47)
  • Guardrails as the safety layer for AI health searches (09:59)
  • General vs specialty AI tools in clinical decision support (12:36)
  • How AI in clinical decision support is changing clinical workflows (16:41)
  • What AI copilots reveal about the limits of EMR-centric integration (19:11)
  • How major EMRs are responding to rising AI adoption (24:52)
  • The missing link in consumer AI health tools: system-led guardrails (30:17)
  • Why policy paralysis is the biggest AI risk in Canadian healthcare (36:42)
  • How AI is reshaping interoperability in Canadian healthcare (38:57)
  • Educating the next generation of health leaders (43:11)

Read Transcript:

Reuben Hall (00:01)
Welcome to Moving Digital Health, a podcast series from MindSea Development. I’m your host, Reuben Hall, CEO of MindSea. Each episode, we sit down with leaders and innovators in healthcare to hear their personal stories and explore how they’re moving digital health forward. Today, I’m joined by Will Falk, one of the most respected voices in Canadian health policy, serving as an executive in residence at the Rotman School of Management and a senior fellow at the C.D. Howe Institute.

He bridges the gap between the boardroom, the classroom, and the hospital floor, helping us understand not just the technology, but the systems that govern it. Welcome to the show, Will.

Will Falk (00:39)
Hey, thanks Reuben. I’m delighted to be here.

Reuben Hall (00:42)
Would you start by telling us a bit about your background?

Will Falk (00:45)
Sure. Long story short, I’m a recovering management consultant. I spent about 20 years, started in New York, came home to Toronto. Most people in Canada know me as a digital health guy, but I actually started as an academic medical center strategist in the US. Came home and did the digital health journey. I retired back in 2017 and since then, I’ve done projects on a pickup basis as I’ve wanted to. During the pandemic, I got back involved with public policy, which was my first love before I did my MBA and have been experimenting with artificial intelligence for the last three years, a whole bunch, and I’m sure we’ll get into some.

Why AI adoption is outpacing healthcare policy

Reuben Hall (01:35)
Yeah, so you really sit at the intersection of policy and practice. What is the biggest disconnect you see between what policymakers think is happening in digital health and what’s actually happening on the ground?

Will Falk (01:48)
I actually think the policy folks, by which I mean both in Ottawa and at the provinces, are pretty clear-eyed about stuff. I mean, if you take it back, before the Carney government, I think Bill C-72 was a good piece of legislation. It would move the ball forward. It’d be great to see. I think the thing that’s challenging all of us is the pace with which generative AI has come into the digital health world. I mean, it’s pretty amazing, Reuben, when you think about the impact that it’s already had in only three years. And there’s some real regulatory challenges that governments need to understand and take on.

Generative AI is following a new adoption curve in healthcare

Reuben Hall (02:36)
So what are the specific impacts that you’re seeing?

Will Falk (02:42)
Well, the first thing I would say is that the adoption path for artificial intelligence, and specifically here I’m talking about generative AI, so large language models, really started in late 2022 with the public release of GPT-3.5, which was usable. And that of course has improved dramatically. I would say we’re at least two orders of magnitude better in capability than we were just three years ago. But that adoption hasn’t happened the way digital health tools used to be adopted. We think of adoption as being large projects, maybe provincial or Infoway-sponsored, large Epic and Oracle and Meditech installs, or at the practice level, large lifts. What’s happened in AI is that consumers and clinicians have directly adopted technologies as complements to their existing practice. So they’re not substituting directly for old systems, but they’re greatly augmenting capabilities. I guess the two clearest examples of these are ambient scribes and what I call second-screen clinical decision support. And both of those have gone to 25% adoption in under two years. I don’t have to tell you, Reuben, that doesn’t happen in digital health. We don’t see adoption curves like that. So that’s a real change in the way we think about things. And we’re all still trying to figure out what to do about that. But in the meantime, doctors are using it, physicians have brought it into practice, and citizens are using it every day to get health advice.

Consumer AI health tools are going direct to patients

Reuben Hall (04:47)
So just recently, OpenAI and Anthropic launched some consumer-facing health tools that connect to Apple Health and personal health records. What’s your reaction to those companies bypassing those traditional adoption methods and going directly to patients?

Will Falk (05:05)
Okay, so I’m going to unpack that, because their new products are a little unclear yet. What is clear, and interestingly, I pick on OpenAI because they did a public service: they dropped their usage numbers at the end of ’25, and they showed, and this is almost hard to believe, they showed that 10% of Americans use OpenAI for health every single day. And I think the overall once-a-week usage numbers are 120 to 140 million. They also showed, and this is fascinating, 70% of usage is outside of normal office hours, eight to six. And then they also did a deconstruction on rural and remote usage.

And again, they showed that there’s big equity gains, there’s big information gains. So the challenge for us as health system players, in air quotes, is how are we going to deal with patients who are getting this much information and using it? I will say personally, I use ChatGPT regularly. I’m an older guy, I’ve got a few things going on. I use ChatGPT regularly, not to replace my doctor, but to inform me. Now, I said I’d unpack, but maybe I’m talking too long. Do you want me to go forward into the new products? Because what I really talked about so far was before those products launched. These new products…

Okay, it’s really early days, so this comment may not live that well, but when you look at the chatter right now in groups that should know, the new products with the guardrails are not as good as plain vanilla GPT.

Will Falk (07:13)
And that’s really funny, right? Because what that says is that when you put a guardrail around, and I can unpack what a guardrail is if you want, you put a guardrail around a large language model, you may make it safer, but you may actually make it less good at the same time because you’re constraining what it can think about.

Reuben Hall (07:40)
Yeah, it’s that risk reward balance, right?

Will Falk (07:43)
For sure, for sure. You know, I mean, for better or worse, we haven’t accepted paternalism around how we use the internet. I mean, I think it’s safe to say people use the internet in stupid ways and they do things on the internet that they probably shouldn’t. And when you think about it from a citizen point of view, and again, I’m being careful not to advocate a position here, I’m just trying to represent the debate that’s going on, because it’s an important one. But if you think about it, ChatGPT and AI generally is really becoming an internet replacement technology. This isn’t just a healthcare thing. This is every sector. But in healthcare specifically, the volume of global health searches dropped by 30% between the middle of ’24 and the middle of ’25. So 30% fewer people searched on the internet, as reported by The Economist magazine. And that’s a big drop, right? Like if I told you that in a one-year period in the 1980s, newspaper volumes dropped by 30%, you’d be like, holy crap, that’s a big change. And that happened kind of without anyone really noticing, and it was before Google started showing that little Gemini box up in the corner, right? So, you know, a lot of us... I mean, I don’t actually have a Gemini license, although I’m gonna get one, because they’re doing terrific things. But, you know, when I do a Google search, I look at the Gemini thing, and often I go one shot to the Gemini thing now. I don’t know what you do personally, but that’s internet replacement, and we don’t regulate internet health advice. Or at least we haven’t historically, unless it violates other laws somewhere. So are we going to regulate AI healthcare advice? We haven’t.

Guardrails as the safety layer for AI health searches

Reuben Hall (09:59)
And so let’s talk about the difference there, right? So what’s the difference between a patient just Googling their symptoms and looking for advice that way, like, you know, what they would get in a web search versus, you know, what they’re getting from a generative AI model?

Will Falk (10:16)
Well, okay, so first off, it’s a caveat emptor world at the moment on large language models. There’s a bit of a trap in your question, because I don’t want to put myself in the position of arguing that people should be using large language models for medical advice, because I don’t think that. What I think people should be doing is using appropriate guardrails, either themselves, in how they prompt, or health systems should start providing guardrailed prompts out to citizens so that they know what they’re doing. So for example, and, you know, if people are interested, they could look at my social media feeds to see some examples of these prompts, but you can very easily say: please only look at academic medical centers. Please only look at, you know, the diabetes foundation website or the Heart and Stroke Foundation website. You can constrain choices. You can guardrail on the intake side, you can guardrail on the external side. That’s how the clinician-grade services work. So, switching topics on you here a little bit, the clinician services, by which I mean OpenEvidence and Doximity, AMBOSS, DynaMed, and soon the Abridge and Wolters Kluwer offering, but there’s two big ones and there’ll be more because it’s pretty easy to do. What they do is they guardrail into certain sites and they always provide citations behind the answers they give. And I think that’s a practical way to do it. So I don’t think we should be using plain vanilla LLMs for healthcare advice, but I think people should have choice about how they guardrail. I can give you clinical examples if you’re interested, but the same condition will need different guardrails for different cases and different people, is the shorthand. Guardrailing isn’t hard to do, but not many people are that good at it yet. And so I think it’s the health system’s job to help provide those guardrails, and it’s a job we’re not doing right now.
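The intake-side guardrailing described here, constraining a general-purpose model to named trusted sources and requiring citations, can be sketched in a few lines. The site list, wording, and function name below are illustrative assumptions, not any vendor’s actual guardrail:

```python
# A minimal sketch of an "intake-side" guardrail: wrap the patient's
# question in source constraints and a citation requirement before it
# ever reaches a general-purpose LLM. The sites listed are examples.

TRUSTED_SOURCES = [
    "heartandstroke.ca",    # Heart and Stroke Foundation
    "diabetes.ca",          # Diabetes Canada
    "hopkinsmedicine.org",  # an academic medical center
]

def guardrailed_prompt(question: str, sources=TRUSTED_SOURCES) -> str:
    """Constrain an LLM to a whitelist of sites and require citations."""
    site_list = "\n".join(f"- {s}" for s in sources)
    return (
        "Answer the question below using ONLY information from these sites:\n"
        f"{site_list}\n"
        "Cite the specific page for every claim. If these sources do not "
        "cover the answer, say so instead of guessing.\n\n"
        f"Question: {question}"
    )

print(guardrailed_prompt("What are early warning signs of a stroke?"))
```

A health system could publish a handful of prompts like this per condition; as Will notes, the same condition may need different guardrails for different cases and different people.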

General vs specialty AI tools in clinical decision support

Reuben Hall (12:36)
Mm-hmm. I think it’s a very good debate, and like you said, it will go on, because there’s no regulation on medical advice on the internet, so why is this any different? Jumping over to the clinician side for a moment. You talked about the divide between using a general-purpose LLM as a clinical tool versus specialty-specific AI tools for oncology or urology that have more defined, smaller, and specific data sets. What are the factors that go into those decisions?

Will Falk (13:23)
So I think there’ll be a whole menu of decision support tools. And there’s a lot of different choices that people will wanna make. The practicing subspecialist may want quite a different tool than someone in general practice or someone who’s covering a general internal medicine or general surgery ward. The two big tools: one of them is actually technology that started in Montreal, a company called Pathway that’s now owned by Doximity. So Pathway sold to Doximity; the Montreal team is still there. That tool and the other one, OpenEvidence, are both very widespread. We have US usage numbers, which say 50% of US clinicians are now using one or both of those tools.

We don’t have numbers in Canada, but it’s really widespread. I mean, there’s a little town north of Toronto near me, 2,000 people, called Arthur, which is where my family health team is. And I was in there and the nurse practitioner was on OpenEvidence. So it’s not just the academics who are doing this, but we don’t have clear numbers. If I had to guess, I’d say it’s somewhere between a third and a half of practicing clinicians.

Those tools are much better than unfiltered large language models. How are they much better? They’re much better in that they always provide citations, and they constrain how the questions are asked and answered in some safer ways. But interestingly, they’re still both premised on the idea that the doctor makes the decision or the nurse practitioner, the clinician makes the decision. You have to have a clinical license to get one of those tools, or at least you have to put a valid clinical practice number in, whether that means you have a clinical license or not, I don’t know. But that kind of tool exists for clinicians and that makes a lot of sense. People want to see the citation, they want to know where the drug dosing stuff comes from. They don’t wanna use information that, quote, comes from the internet. And let’s be clear, if you’re using an unrefined, I’ll use that word, an unrefined LLM, you are just asking the internet. Now the internet’s gonna be accurate most of the time, but not all of the time. And you can increase that accuracy level with these guardrails.

You can increase the transparency by making sure citations are available. I want to come back to digital health though, is that okay Reuben? Can I bridge this?

How AI in clinical decision support is changing clinical workflows

Reuben Hall (16:41)
Yes, definitely. Connect the dots for us.

Will Falk (16:42)
Because there’s a point that’s really gonna be interesting for some of your digital health viewers who are deep in the digital health world. Those tools are not integrated into EMRs, but they are integrated into workflow. The way that they’re integrated into workflow is fascinating. They sit on a second screen, they sit on a phone, and the clinicians put them in their pocket, and they pull them out, and they often use them between rooms to ask or confirm questions. They’re also non-PHI.

If a clinician wants to put PHI in, which the systems will advise them not to, they can, but that’s their clinical judgment. But there’s no integration back to the existing digital systems. They’re on a second screen. And people in the digital health world have a really hard time imagining that that’s a good thing or that clinicians want it, in spite of the fact that half of US physicians are using the tools in the second year of their existence. And that really kind of messes with us, right? Because we all think of CDS, clinical decision support, as being some embedded system that pops up an alert and says, you’re doing something wrong, stop it, right? And these systems aren’t doing that.

They’re pull systems instead of push or nudge systems. And so that’s taking some time for people to think about. There may end up being push systems as well. And many vendors have push systems being developed, but they’re all still in beta. So the pull systems, the second screen systems are in wide deployment and mature.

How mature? OpenEvidence now has a market cap of $12 billion. And for a two-and-a-half-year-old product, that’s pretty exceptional. I think it’ll go down, but yeah.

What AI copilots reveal about the limits of EMR-centric integration

Reuben Hall (19:10)
Yeah, so is this just like a transition period where, like you say, clinicians are using these tools on their phone kind of outside of the system?

Will Falk (19:22)
I don’t know, I mean, it’s interesting. So now I’m getting gossipy here, but many of your users will be Microsoft clients. Microsoft has Dragon. Dragon is integrated into Epic, right? Dragon is an ambient scribe that sits on top of Epic, non-native, but I personally call it semi-native, because Epic doesn’t have an announced scribe yet.

So there can’t be a native scribe. So whatever this is, it’s as native as it’s gonna get. There’s an API, it works, et cetera. But Microsoft Dragon now has an API with OpenEvidence and about 12 other companies as well. So now let me just really blow your mind a bit here, okay? So we had two product categories, neither tightly integrated.

And I’m just going to assume general familiarity with this audience, because this is a digital health audience. So the scribes: some of them had APIs, some of them have APIs in development, some of them are still copy-paste. That’s fine. The CDS is second screen, and I think second screen is good enough because of the workflow: you don’t actually want it on your computer. Having it on the second screen is probably good enough, because you’re interpreting it, discussing it, and then bringing it into the practice setting. But Doximity buys Pathway. So Doximity had a scribe, they buy Pathway, now they have the clinical side. OpenEvidence had the clinical side, now they’ve built a scribe. The two things are together. Okay, look at scribes. Scribes are interesting; they’re in-visit ambient support.

But when you look at the constellation of stuff that goes around a scribe, and some of my writings you’ll find online, I’ve described almost 30 functionalities. So there’s in-visit functionalities like CDS, like translation, like resource availability, social determinants availability. There’s pre-visit functionality: history and physical connection, assembling the lab tests, in Ontario we’d say from OLIS and ConnectingGTA, and pulling all that together. And then post-visit, we have what we call an e-referral. We have what we call an e-consult. You know all of these things. We have prescriptions. We have letters for the workers’ comp board. We have sick notes. All of those are just artifacts that a scribe can do.

Now you put scribe and CDS together, and you start adding 8 or 10 of those functions around them. Now I’ve got a front end that’s consumer-facing, in terms of the physician being the consumer. I’ve got a consumer-facing clinical copilot, and you’ll see the term clinical copilot; it started appearing in early ’25. Microsoft renamed Dragon the Dragon Copilot.

Watch the copilot world. Watch the Canadian companies. Tali’s got a copilot-like thing. There’s a bunch of these that have copilots.

How tight does the integration need to be between a copilot and the backend system of record? We call that backend system of record an EMR.

Reuben Hall (23:03)
Yep.

Will Falk (23:05)
or an EHR or an HIS. But we’ve got three EMRs in ambulatory care in Canada. We’ve got three, three and a half EHRs in hospitals. None of them are the same. It’s like a tiny little overlap somewhere outside of Ottawa where Epic covers both. I can’t get you to laugh. Okay.

But those three and three now both have a clinical copilot layer on top of them. The clinical copilot may or may not have an API back. Maybe it’s copy-paste, but maybe it does. In Ontario now, hospitals are allowing you, if you own an ambulatory clinical copilot, to walk into the hospital, use your copilot, and copy-paste into the hospital EMR. One copilot, two EMRs.

Reuben Hall (24:05)
Okay.

Will Falk (24:12)
What if the copilots could follow the doctor regardless of their site of practice? What if I can use the same copilot, and you’re in Nova Scotia, so whether I’m in my Oracle hospital or my Meditech hospital, because you guys still have those, or whether I’m over in my old Nightingale, and who owns Nightingale now? I guess Dallas. Anyway, like, if I could use it in all of those practices and it’s the same front end for me.

What’s workflow integration in that world? It’s different than EMR integration.

How major EMRs are responding to rising AI adoption

Reuben Hall (24:52)
Well, and this is the question that comes up: where are the big players, like the big EMRs, in terms of these products? Why doesn’t Epic have a native scribe? Are they just going to start buying and absorbing some of these other companies so they can bring them into their own ecosystem?

Will Falk (25:16)
It’s a great question.

So let’s just review the bidding on this. Let’s go back over the recent history. If you go back to July, way back to July of ’25, neither company had announced a strategy. I’m talking about Oracle and Epic right now; I can do Meditech separately, but it’s a bit different.

Neither Oracle nor Epic had an announced strategy. Both of them had partnerships. Both of them still have partnerships with Abridge, the big US scribe, market cap’s six billion, but a big US scribe, and important because Oracle still has that partnership, I believe, for their VA contract, which is 9% of the US business. So that’s not a small API, right? That’s a big API. And Epic had an API with Abridge and with Microsoft. In the fall timeframe, as OpenEvidence and Doximity and their scribes start gaining traction, they do announcements at their conferences. They announce that they’re gonna have a scribe product. They announce that they’re gonna have clinical decision support. As of this moment, I believe it to be the case that Oracle is in beta for their scribe product in two Canadian hospitals. I’m not going to name them. I know they offered it to at least five or six, and maybe there are more than two; I’m not certain on the number, but they’re in beta. They don’t, I believe, have a full product out as of January ’26, but they will in ’26. Epic is a little behind that and doesn’t have a full product out in either area.

What they did do, really importantly, is they announced that these products will be core to EMRs going forward. So they validated the use case and said that the use case was critical going forward, before they launched a product. As I’m sure you and many of your viewers know, that is not a strong position to be in.

You want to have a product before you announce it, if you at all can. Having to do a defensive announcement followed by a launch carries at least the risk of failing to deliver on launch. And if either of those guys failed to deliver on launch, that’s a problem. It also makes it very hard to claim that you can’t open up an API.

Since you already had an API open, and have had an API open for a year for your vendors, because you were pursuing a partnership strategy. And again, I’m assuming this level of sophistication with your viewers; I’m sure everyone is familiar with how these vendors monkey with their API strategies to try to sell new product areas and keep everything in house.

Reuben Hall (28:21)
Mm-hmm.

Will Falk (28:41)
It’s gonna be hard on this one to claim that there isn’t an open API. And if I was a Canadian vendor in this space, I would wanna see the specs of that API and have it open for all Canadian vendors. Sorry, that’s a sore point for me. I didn’t mean to get so hot.

Reuben Hall (29:00)
Yeah, you’re really steaming there. No, but it’s a really interesting conversation, because, as you mentioned, the big EMRs are not necessarily known for being open and collaborative when it comes to their APIs; they’re more walled garden than open platform.

Reuben Hall (29:29)
And the pace at which they move is extremely slow as well. So, okay, great, they’ve announced that these things are coming. How long? How far are they lagging behind all the smaller, nimbler products that are already out there in the field?

Will Falk (29:48)
For sure, and I’ll just add one thing and come back to a question you asked much earlier in our discussion. Everything I just described around copilots, that’s all before OpenAI and Anthropic make their early January announcements. And I have no idea what that means for the big EMR vendors. I’ll also point out that I primarily focused on the hospital side in my answer, but you can replay that whole answer on the primary care and ambulatory side as well. You just have to change the names to protect the innocent. So it becomes TELUS Health, WELL slash HEALWELL, and Accuro slash Shoppers. And the same kind of dynamic is going on there.

Each of them has a scribe partner that they’ve used, maybe two or three. Each of them is looking at acquiring scribes and building the technology. The simple truth is that scribes and second-screen CDS are so useful that they get adopted even if the IT department does nothing. And so the response from the IT department is now not a change management problem in the old sense of how do I get my doctors to use this. It’s how do I incorporate the fact that my doctors are using this into my decade-old products that are starting to look a little clunky compared to some of these modern AI-fronted applications.

The missing link in consumer AI health tools: system-led guardrails

Reuben Hall (31:44)
Going back to OpenAI and Anthropic’s consumer-facing health tools: they do make promises about handling PHI and integrating with EHR systems. But again, they’re large general-purpose models trained on those massive data sets. So how skeptical should healthcare organizations be about adopting those?

Will Falk (32:20)
It’s the question, right? Let’s back it up a bit. When does the care journey start? And when does the care journey end? And where does the health system want to be involved?

A lot of people are using AI on their care journey. Whether they tell their health system or not is a question. Should the health system influence which LLM they’re using and how guardrailed it is? I think yes. I think that they should be making sure it’s HIPAA compliant, or PHIPA compliant in Canada. I think that, as we’ve done with some of these other product areas, we need some safe harbors where we can say: this is private and secure, and this is appropriately guardrailed. I don’t think that that should be done as a ban. I think it should be done as a positive statement first, because I think most people understand that using plain vanilla LLMs, meaning unguardrailed LLMs, is probably not a great idea. Right? Just the same way most people understand that doing a Google search and landing on the Hopkins website is different than a Google search and landing on the Goop website. Right? So I think, you know, that guardrailing process has to gain some speed. It’s interesting, right? So I’ll link the two conversations. We could just give OpenEvidence and Doximity to patients. We could just say, hey, here’s what your doc’s using, you use it too. Why do we not do that?

Reuben Hall (34:16)
But you were saying before that those products assume a level of education and sophistication of the end user.

Will Falk (34:22)
Well, that’s the point, right?

Okay, so great, but what’s the patient equivalent? What’s the grade-12, in-your-own-language, understandable, guardrailed, cited version, right? Because a big part of what OE and Doximity do, and I’m talking about the two products but really about the category, a big part of what they do is return information with the source noted, so that you know that it’s from the Nova Scotia or Alberta website, right? We can do that, right? Like, honestly, you and I could do that right now in five minutes, right? Like, it’s really not hard. People can do that for themselves, but it would be better if the government of Alberta or the government of Nova Scotia, both of whom have really good intellectual property already up on their websites, used that intellectual property to RAG, to retrieval-augment, a chatbot that would then respond with the government-sponsored information. That’s not hard to do. That’s a class assignment for one of my master’s classes, okay? And we should be doing that routinely. Why do people not do it? Because even those guardrailed systems, and this, by the way, is gonna be a big problem, I think, for OpenAI and Anthropic, even guardrailed questions have a high error rate, right? I mean, look, the dirty little truth is that our existing system has a high error rate, right? And so it’s like self-driving cars.

It’s not good enough for self-driving cars just to be better than the average human driver. They have to be a lot better. And now they are. And so they’re getting adopted, but it’s the same kind of thing, right? And I think that that’s a healthy tension.
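The "class assignment" Will describes, retrieval-augmenting a chatbot with a government's own health content so answers come back with the source noted, can be sketched like this. The documents, scoring, and URLs are toy assumptions; a real build would use a vector index and an actual LLM call:

```python
# Toy retrieval-augmented generation (RAG) over government health content.
# Retrieval here is plain word overlap; production systems use embeddings.

DOCS = [
    {"source": "novascotia.ca/flu",
     "text": "Annual flu shots are free for all Nova Scotia residents."},
    {"source": "alberta.ca/diabetes",
     "text": "Adults with diabetes should have an A1C test at least twice a year."},
]

def retrieve(question: str, docs=DOCS) -> dict:
    """Return the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d["text"].lower().split())))

def answer(question: str) -> str:
    doc = retrieve(question)
    # In a real system the retrieved passage would ground an LLM's reply;
    # here we return it directly, with its source noted.
    return f'{doc["text"]} (Source: {doc["source"]})'

print(answer("How often should I get an A1C test for my diabetes?"))
```

The point of the pattern is the last line of `answer`: every response carries the government-sponsored source, which is the guardrailing-plus-citation behavior the clinician tools already provide.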

Why policy paralysis is the biggest AI risk in Canadian healthcare

Reuben Hall (36:42)
Yeah, and I think you shared with me a draft of a paper you’re working on that’s coming out later in February regarding AI in Canadian healthcare. And in that, you argue that the policy risk facing Canada isn’t reckless adoption of AI; the bigger risk is paralysis. And given how high the stakes are, why do you think that it’s a bigger risk to stand still and do nothing?

Will Falk (37:14)
Yeah, and I think that it’s clear, right? Like, you’ve got a system in which, you know, six million Canadians don’t have primary care. You can’t tell them that they can’t use modern tools to get primary care advice in the absence of that. They should be using good tools, absolutely. But the risk of harm through not using modern information, I believe, is higher than the risk of harm by the edge cases that get all of the attention. Now that paper, which is coming out from the Canadian Standards Association Public Policy Group in late February, will be widely available. And I’m gonna advance some of those arguments and go into the sectoral detail questions. It also addresses regulatory questions, issues of data sovereignty, and where we’re going in terms of things like interoperability. Because the whole interoperability game, and you can see this, by the way, in the Anthropic announcement in particular, the use of FHIR JSON, HL7, in the Anthropic models, is going to change interoperability forever. It’s just not going to mean... the word isn’t going to mean the same thing. My friends who are way more techie than I am are really excited about that. It solves things like semantic problems, nomenclature, translation problems, much more readily than what we’ve had before.
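For readers less familiar with the acronyms: FHIR (HL7's Fast Healthcare Interoperability Resources) represents clinical data as plain JSON with standardized field names, which is exactly the kind of structure language models consume well. A stripped-down, illustrative Patient resource, not a complete validated example:

```python
# A minimal FHIR-style Patient resource. Standardized keys like
# "resourceType", "name", and "birthDate" are what make the format
# machine-readable across systems; this example omits most real detail.
import json

fhir_patient = """
{
  "resourceType": "Patient",
  "id": "example",
  "name": [{"family": "Doe", "given": ["Jane"]}],
  "birthDate": "1970-01-01"
}
"""

patient = json.loads(fhir_patient)
# Any system (or model) that knows the FHIR spec knows that
# name[0].family is the surname: the semantic problem in miniature.
print(patient["resourceType"], patient["name"][0]["family"])
```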

How AI is reshaping interoperability in Canadian healthcare

Reuben Hall (38:57)
And is this a case where the AI is just bridging the interoperability gap and acting as a translator?

Will Falk (39:05)
There is some of that, and you can see it readily, and I talk about this in the paper, amongst some of our bigger Canadian players. Like, if you look at PointClickCare, AlayaCare, and League, you can see how they’re using AI to translate amongst different types. They’re creating multi-agent environments in which different agents can represent you or answer questions. I don’t want to get too next-gen on it, but it’s pretty exciting stuff to see what’s already installed by some of these big Canadian innovators. I think the hospital side, for a whole bunch of good reasons, is moving a little more slowly, but pretty deliberately. You know, agents are being used in complementary circumstances widely. Substitutive agents are coming in more slowly. They should be, because they replace clinician judgment.

Reuben Hall (40:16)
Yeah, so I just want to get you to elaborate on that a little bit more, because you do really draw the distinction between complementary AI and substitutive AI. Why is that so important?

Will Falk (40:29)
Well, I mean, in part because if you add AI to a workflow as a complement, meaning it fits within the scope of practice of an existing provider or an existing job description, then the supervision can be done by the human in the loop. And that means it becomes a local adoption decision. Where you have a system that replaces entirely, that substitutes,

Now you’re in a different place, right? If you’re making diagnostic or treatment decisions without a human in the loop, that’s a higher standard. And that standard gets into all kinds of regulatory stuff. So most of the early adoption has happened on the complement side. And you can take the complement thing a long way, right?

Simple examples: public health units writing letters that are checked and approved by humans, 100,000 vaccination letters over the course of a few days. Things like putting HR manuals into chatbots so that employees can talk to the HR manual, ask questions, get sourced answers, cafeteria menus, whatever you want. If you’ve got a knowledge object that you can use to inform and backstop a large language model, you take the basic capabilities of the large language model and you make it expert in an area. That can be finance, that can be communications. I know one CEO with 24,000 employees who puts every quarterly employee satisfaction survey in, and then he and his executives can discuss how they’re doing with the data, discuss it with the chatbot. So that’s a Workday application, I think. I’m not sure about that though. And so, back to your question, these use cases all complement existing workflows. They sit within a scope of practice. They don’t replace something.

Will Falk (42:56)
And so they don’t hit the substitutive bar that we’ve seen with things like some of the radiology systems, or when we have a new medical device that replaces old practice patterns.
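The "knowledge object into a chatbot" pattern Will describes (an HR manual, a survey, any document backstopping a language model) can be sketched minimally. This toy uses keyword overlap in place of the embedding-based retrieval real RAG systems use, and the manual text and question are invented examples, so it runs without any API:

```python
# Toy sketch of grounding a chatbot in a "knowledge object":
# retrieve the most relevant chunk of a document, then build a prompt
# that forces the model to answer only from that source.
# Keyword overlap stands in here for real embedding similarity.

def chunk(text, size=10):
    """Split a document into chunks of at most `size` words."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(question, chunks):
    """Return the chunk sharing the most words with the question."""
    q = set(question.lower().split())
    return max(chunks, key=lambda c: len(q & set(c.lower().split())))

def grounded_prompt(question, document):
    """Build a prompt that anchors the model's answer to the source."""
    context = retrieve(question, chunk(document))
    return (
        "Answer using ONLY the excerpt below, and cite it.\n"
        f"Excerpt: {context}\n"
        f"Question: {question}"
    )

# Invented stand-in for an HR manual as the knowledge object.
hr_manual = (
    "Vacation policy: full-time employees accrue 1.5 days per month. "
    "Cafeteria hours: the cafeteria is open 8am to 3pm on weekdays."
)
print(grounded_prompt("When is the cafeteria open?", hr_manual))
```

The key property is the one Will highlights: the model's answer is sourced from the document, so a human can check it against the original, keeping the tool on the complementary side of the line.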

Educating the next generation of health leaders

Reuben Hall (43:11)
Excellent. I think that’s a very good way to describe it. Now, Will, I know you’re also a teacher; you’re teaching the next generation of health leaders at Rotman. So when you look at the students coming through your classes today, what are their perspectives on things? What does the future of the healthcare system look like through the lens of the people who are just entering it through the education system?

Will Falk (43:41)
Well, I’ve seen, over the last three years, masters-level and doctoral-level students go from near-zero gen AI usage to almost 100% at this point, right? So as a practical matter, when you’re teaching a class, what I do personally is I always have this in my slides, and you can see my slide decks online and play with these if you’re interested.

I always put RAG prompts into my decks. And so I’ll encourage my students to take a PDF of my slides and create a channel that they can talk to and ask for more elaboration. Because any knowledge object can be turned into a chatbot, and they can have a discussion. Or you can use the Google tool that does podcasts. You can use that or other things.

You can also expand on lecture notes, because everything’s changing so quickly, right? So if I’ve given a lecture that describes things up till the end of 2025 and now we’re in 2026, I can take the lecture notes and write a prompt that says: please update this material, tell me what’s happened in the X months since this material was done, give me an update report based on this.

And that’s a single-shot prompt. It’s easy to do, right? So: prompting exercises, RAGging in teaching materials, updating materials. If you really wanna blow your mind, go and look at the work that they’re doing in Scarborough and at Yale University. Do Yale first.

Yale has ragged the entire curriculum into an LLM, I don’t know which one. And they can now ask questions about the curriculum: when do we teach about influenza? Find every time, and make sure we’re teaching it the same way across the curriculum. Scarborough, which will have a new med school in the next few years, has been looking at a RAG-based med school curriculum. Now what would that mean? It would mean that every piece of the courses would have a ragged element. And, I’m describing this now, so I’ll take credit for these words and not blame anyone else: you’d use synthetic patients, synthetic evaluators, and synthetic colleagues, and you’d replace one or more of the vertices in the triangle.

And you get different learning systems depending on which ones you replace. So human doctor, human patient, virtual evaluator: a virtual attending as a real-time coach for a young intern. Synthetic patient, human evaluator, human clinician: that’s a second-year med school case study, right?
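Will’s triangle has three vertices (patient, clinician, evaluator) and each can be human or synthetic, so the space of learning systems he sketches is the eight combinations. A few lines enumerate them; the role names follow his description, and the combinatorics is the only point being illustrated:

```python
# Enumerate every human/synthetic assignment of the three roles in
# the medical-education "triangle": 2 options per role, 3 roles,
# so 2**3 = 8 possible learning systems.
from itertools import product

ROLES = ("patient", "clinician", "evaluator")

def learning_systems():
    """Yield each configuration as a role -> 'human'/'synthetic' mapping."""
    for combo in product(("human", "synthetic"), repeat=len(ROLES)):
        yield dict(zip(ROLES, combo))

configs = list(learning_systems())
print(len(configs))  # 8 configurations

# Will's "virtual attending coaching a real intern with a real patient":
coach = {"patient": "human", "clinician": "human", "evaluator": "synthetic"}
print(coach in configs)  # True
```

Each configuration maps onto a different pedagogical setting, which is why replacing different vertices yields qualitatively different learning systems rather than one tool.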

Reuben Hall (47:05)
Right.

Will Falk (47:07)
And if you RAG-base everything and drop in synthetics in different places, I think you’re gonna blow medical education out of the water. So I’m really excited by that. I use the med school example, but I’ll point out very quickly that this is true of almost every education system. And I’ve been playing around, just totally hacking this for fun. In my province, Ontario puts all the English and all the math curriculum for grades one to eight online. All that intellectual property is just up there. And you can take your own AI and query it. You can say, okay, my kid’s in grade three English, tell me what they’re gonna learn this semester.

Reuben Hall (48:03)
Mm-hmm.

Will Falk (48:04)
And it’ll give you a 1,500-word essay on what your kid’s gonna learn in grade three English this semester. Now obviously it’s only as good as whatever curriculum is put up there. But anytime you think about a knowledge object... and we haven’t talked a lot about all those EMRs, but ultimately EMRs are just knowledge objects, right? They’re not very good ones, honestly. But look at the way League and AlayaCare, the Canadian unicorns, are using them. They’re taking data and using it to create, in the case of League, a health story, a MyLeague health story. And some of the data they’re using is, like, US administrative data, Reuben. I mean, that data is crap, right? It’s billing data. But you can rag billing data into an LLM.

Will Falk (49:03)
And turn that LLM into a pretty decent chatbot. And if I get it to tell me my health story and I can correct it, that health story then becomes an English-language representation of my health that powers, in the case of League, a concierge service that acts on my behalf. Now this is a US use case: prior authorization.

If I need to do a prior auth, it knows enough about my health to fill the forms out for me. How great is that? And none of that’s substitutive. It’s still gonna fill the form out, give it to me, and I’m gonna read the form and make sure it’s filled out correctly. But even in Canada, filling out insurance forms is craziness. And in the US, thank you OpenAI, we know this: 1.6 million people every day ask ChatGPT about their insurance approvals. It’s a big thing; just insurance approvals is 1.6 million people every day. And those are people who are struggling with bureaucratic nightmares. And ChatGPT is pretty good at bureaucratic nightmares. One final point on this, because it’s important.

Equity gets built in here, right? You can easily and natively translate anything you do in OpenAI into at least five languages. I’m gonna do it from memory, six languages. So French, so you’re always bilingual, Simplified Chinese, Hindi, Tagalog, maybe it’s Italian and Portuguese. I don’t remember what the five are that we did, but…

If you’ve got a parent or grandparent and you’re trying to explain their medical record to them, and you can drop that into ChatGPT and get a Portuguese explanation that they can read, how is that not a good thing? Right? I mean, you still gotta trust only the English-language version, yada, yada, yada, privacy, security, all that stuff. I can do the same speech that we can all do.

And all of that has to be built in. But being able to give your grandmother her medical record in her language and explain it to her matters, because most of us can’t translate influenza into Portuguese, right? Even if we speak Portuguese. Okay, my head exploded again. I’m sorry about that.
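The translation use Will describes is, at bottom, a prompt-construction step that any LLM could consume. A minimal sketch, with his caveat that the English original stays authoritative baked into every prompt; the language list and summary text are illustrative, not OpenAI’s actual supported set or a real record:

```python
# Build a translation prompt for a medical-record summary, always
# pairing the translation with a notice that the English original
# remains the authoritative version. The language tuple is an
# illustrative assumption, not a vendor-supported list.

LANGUAGES = ("French", "Simplified Chinese", "Hindi", "Tagalog", "Portuguese")

def translation_prompt(summary: str, language: str) -> str:
    """Return a plain-language translation prompt for one summary."""
    if language not in LANGUAGES:
        raise ValueError(f"unsupported language: {language}")
    return (
        f"Translate this medical summary into {language} at a "
        "plain-language reading level. Keep drug names and doses "
        "exactly as written.\n"
        f"Summary: {summary}\n"
        "Append this notice, translated: 'The English original is "
        "the authoritative version.'"
    )

# Invented example summary, per Will's grandparent scenario.
print(translation_prompt("Influenza vaccine given; follow up in 2 weeks.", "Portuguese"))
```

Keeping the authoritative-version notice inside the prompt itself, rather than as a separate UI disclaimer, means it travels with the text wherever the translation is pasted.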

Reuben Hall (51:46)
Well, it’s fascinating to hear you talk, Will. We could go on and on for hours, I’m sure.

Will Falk (51:58)
Well, certainly I can, yes. I hope people will read the paper. I really appreciate the chance to be on with you, Reuben.

Reuben Hall (52:07)
Yeah, it was great to have your perspective and share your wealth of knowledge about what’s happening behind the scenes with healthcare and the adoption of AI. It’s fascinating stuff. So thank you so much for joining me on the podcast today, Will.

Will Falk (52:25)
My pleasure. And anyone who’s connected with you who wants to connect on LinkedIn, please feel free. I’m happy to continue the discussion there. Thanks, Reuben. Enjoy.

Reuben Hall (52:35)
Certainly, and thanks to everyone for listening to the Moving Digital Health podcast. If you enjoyed this conversation, please go to movingdigitalhealth.com to subscribe to the MindSea newsletter and be notified about future episodes.

Authors

  • Reuben Hall is the CEO of MindSea, a mobile app development agency partnering with Health Tech and Wellness leaders to build digital products that empower people to lead healthier lives. With 17 years at MindSea and 6 years as CEO, he leads an experienced team creating mobile and web applications at the intersection of health, wellness, fitness, and technology.

    Starting his career at MindSea as a UX Designer, Reuben brings a user-centered approach to building products that make a positive impact. He believes strongly in the potential of digital health solutions to improve the efficiency of healthcare and enhance patient outcomes.

    Outside of work, he is passionate about giving back to the community—supporting charities through initiatives like the Ride for Cancer and volunteering as a youth basketball coach.

    Follow Reuben on LinkedIn

  • Will Falk is a Canadian healthcare policy and digital health leader focused on how innovation and AI are reshaping care delivery. He is an Executive-in-Residence at the Rotman School of Management (University of Toronto) and a Senior Fellow at the C.D. Howe Institute, where he works with policymakers, health systems, and innovators on responsible adoption and governance in Canadian healthcare.
