In this episode of Moving Digital Health, Reuben Hall speaks with Marco Smit, VP of Business Development at Gesund.AI, about the future of AI in healthcare and life sciences. Marco shares why so many organizations stall in pilot mode and explains why AI governance in healthcare organizations will be the key differentiator for those who successfully integrate AI into their research operations.
“AI governance is going to become a differentiator. Those who can crack that code versus those who cannot.” — Marco Smit on how simply reducing risk isn’t going to help teams move forward with AI adoption, and why execution and re-evaluation of strategy will become increasingly important.
Topics Covered in Episode 35 of Moving Digital Health (Marco Smit of Gesund.AI):
- Why 43% of AI pilots stall — and the difference between good and bad pilots (05:07)
- Organizational resistance to AI adoption and fears around job impact (09:56)
- The difference between using AI for existing processes vs. novel health discoveries (e.g. drug discovery) (13:10)
- Four ways healthcare orgs can reduce regulatory ambiguity in AI (21:10)
- AI startups to watch in the clinical trial and digital health space (26:24)
- How Gesund.AI is tackling the evidence generation bottleneck in clinical trials (32:20)
- Why AI governance in healthcare will define the future — and how leaders can rethink strategy and execution cycles (36:11)
Find Moving Digital Health on Apple Podcasts and Spotify, and go to movingdigitalhealth.com to subscribe to get notified of new episodes.
Read Transcript:
Introduction & Background on Marco Smit
Reuben Hall (00:01)
Welcome to the MindSea Podcast Series, Moving Digital Health. Our guest today is Marco Smit, Vice President of Business Development at gesund.ai. Thanks for joining us today, Marco.
Marco Smit (00:14)
Thank you for having me.
Reuben Hall (00:17)
Maybe you can start by telling us a bit about your background.
Marco Smit (00:20)
So I’m European, Dutch and French. I studied at ESSEC in France and at Erasmus University in Rotterdam. I learned early on, during my college days, that I’m kind of a leader and a contrarian, and I thought I would use those two traits to go into political leadership and really try to change society for the better. But I started working in management consulting in Europe and Asia, and I kept going east until I ended up west. I started working here in California with Genentech on Rituxan, which was a tiny drug. The company thought it was going to remain a tiny drug. And I led a team that basically brought this contrarian perspective. We said,
You’re looking from the science out, from the inside out: what does the science do, what does the FDA allow us to say? But let’s take the outside-in perspective. How big could this be? How could this help patients? And when we did that, we discovered that what they thought was the full market size, 100%, was really about 12%. So this drug could be way bigger. And the main point is not the financial aspect of it; of course there was a financial aspect.
But the main thing is that it also means there are so many more patients who can benefit from this. Just a few weeks ago, I was talking to a recruiter, and when I mentioned Rituxan, she said, oh my God, that saved my life. So that’s what you do it for. And that’s what I decided. In healthcare and life sciences, there are two problems, in the sense that the innovators are very heavy on the science of innovation, but not so much on the science of breaking through to the clinic.
Right, maximizing the impact of your innovation. And on the receiving end, the users, particularly when it comes to novel innovation like AI, get very confused. There’s a lot of signal-versus-noise confusion. So if you take both forces together, there’s a lot of wasted energy, a lot of opportunity cost. And my mission, my personal and professional mission, is to use my unique talent, my ability as a leader and contrarian,
and my networks to fix that, to basically say: how do I get this innovation that is stuck, that deserves much more impact in the clinic, unstuck? And I’ve worked this problem from both sides, first in digital health, then in mobile, when mobile was the next big thing, and then real-world data for 10 years, starting with the Obama administration as far back as then.
Then for the past five years, really in AI. I started out on the building-AI side with Caresyntax, where I headed up the AI and data teams. And we basically ran into this bottleneck: we built great AI in an R&D setting, but how do we get it to the clinic? That turned out to be a really big bottleneck. That’s what Gesund.AI has been addressing with a truly unique
technology platform as well as other partnerships. This evidence innovation is really important to get all this investment we made, all this AI, turned into ROI, financial ROI but also patient impact, because ultimately that’s my North Star, maximizing patient impact. And I’m now starting to also work with some other companies who address even larger get-to-clinic
Marco Smit (04:05)
impact bottlenecks. So that’s my story. I love everything we’re doing in healthcare and life sciences, but so much innovation does not get the patient impact that it deserves. And I’m not here to start doing more core innovation on the science side. My job and my mission is to use my unique skills to expand the impact of innovation that deserves a bigger impact.
Why 43% of AI pilots stall — and the difference between good and bad pilots
Reuben Hall (04:33)
Yeah, lots of good points there and you’re really connecting the dots from the science to the implementation. Healthcare and life sciences industries are pouring billions into AI, but most of those companies still struggle to achieve real results.
A recent McKinsey survey found only 1% of US companies have scaled their AI investments while 43% remain stuck in pilot projects. Why is there such a gap between ambition and the actual transformation?
Marco Smit (05:07)
Yeah, that’s a great question, and I think it’s a very important question of our time right now. Whenever there’s a new innovation, whether it’s mobile or real-world data or AI, there’s an initial period of awe with the technology and what it can do. But after a while that awe starts dying down, and people say, but how is it really changing my life? How is it really changing what I’m doing? So I think we’re going to face that in AI.
Now, let’s start with the point about the pilot projects and why 43% of these pilots don’t advance past the pilot phase. First, you need to understand why it is not progressing, right? There are good reasons and there are bad reasons. A good reason may be that, let’s say, a pharma company wasn’t really sure if they wanted to go down this path, and the pilot confirmed that they do not want to go down this path. For the pharma company, that’s a good result; for the technology innovator, of course, not a great result. But there are many bad reasons why pilots also fail.
I was at a conference at Stanford recently, and someone from Anthropic shared his personal slogan. I love this slogan: think big, start small, move fast. And I think what the McKinsey survey finds, and this is very typical, you find many other surveys like it, is that in healthcare we tend to just focus on starting small, and the other two parts we kind of forget about. So a couple of things. Number one, on the startup side, I think there’s just a lot of impatience. They want to get going. I was just talking to someone this morning who has a lot of experience in this world as well. Pharma cycles can be 18 to 24 months,
and you see how fast AI is moving. What does that really look like in 18 to 24 months? How can you even predict what it looks like in 18 to 24 months? So they’re impatient to get going, and they’re eager to do pilots that ultimately don’t advance the ball, don’t advance the mission to get AI adopted by the large organization. They should not do bad pilots, they should only do good pilots, but that’s easier said than done. So pilots can be powerful,
but you have to do the right pilots, and too many fail because they’re not the right pilots. For the large companies, AI is very hard to get right. It’s very difficult. It moves at lightning speed compared to what they’re used to with drug development and drug discovery, and they don’t naturally have the skills. And I even see it with a company we’ve been working with that is only 500 people. It’s not a pharma company, it’s a healthcare company. They built AI with only 500 people, and still they’re struggling with how to put the systems in place, how to get it all right. There’s a lot of one-off adjustment, a lot of band-aids, not much streamlining yet. So with smaller companies already struggling to streamline, larger companies have even more problems streamlining, particularly in healthcare and life sciences. And the final point is
Michael Porter, when I worked for Monitor Company, talked a lot about value systems. A value system means: what are the core value drivers, and how do you put all the enabling pieces into place? Pilots often don’t address all the value system components. They focus on getting to the benefits and temporarily suspend the connection to the supporting systems.
But if you then need the supporting systems in order to get the full benefits at scale, that can be a reason why you stop at the end of the pilot, even though you should be continuing on because it is creating value. The band-aids don’t scale, so you need to put in new pipes, new architecture and new systems, and that is sometimes a difficult and complicated discussion. So I think a lot of pilots fail for bad reasons, and some of them fail for good reasons.
Organizational resistance to AI adoption and fears around job impact
Reuben Hall (09:28)
What about the people within those organizations that are just resistant to that type of change?
It’s fine for the CEO to say, okay, we’re adopting AI, this pilot went well, and now we’re implementing across the board. But you have a lot of people who are pretty comfortable in their positions. They’re not worried about losing their job over AI, and they’re just not really excited about it at all.
Marco Smit (09:56)
Yeah, two comments to make. One is, you’re exactly right. I hosted a breakfast on navigating the AI future here in Silicon Valley just about a year ago, and I remember it was exactly what you just said: out of the 12 people who were there, about nine said, my CEO said I want to do something with Gen AI, I want to get into AI, and basically told me to do it, and I don’t
know what to do. I don’t want to start the wrong thing, but I also don’t want to start nothing. So what do I do? And these are all life sciences folks. So on the executive level, it’s clear that they will say we need to do something with AI, but how to make that work is not so obvious. And in life sciences in particular, to your point, maybe it’s not about losing your job, but if you do get AI wrong, you will get dinged on that.
So I think people are concerned about making mistakes with AI. That’s why I think there’s a hesitation. And the second part: you say people are not worried about losing their jobs to AI, but I think some of them are. Google recently launched an agent which can effectively do a lot of the work that you would normally do in lit search. So it basically can
Marco Smit (11:25)
collect all the evidence, all the publications, evaluate them, generate hypotheses, and propose projects. And that’s all done through their new agent. These are jobs that people have today, right? And that’s just today. Who knows where it will be 12 months from now, 24 months from now. So I think a lot of people are actually worried that it may not wipe out my entire job, but maybe half my job. And if there are three of us,
that means they may not need one of us. So I think there is at least some concern about AI impacting your career.
Reuben Hall (12:04)
That’s true. And you know, people are all working in teams, and they say, hey, this might not affect my job, but I love my team, I want to keep working with them, we don’t want to be scaled back to two people instead of seven. If I drag my heels on this pilot, maybe that will slow down the excitement and we’ll all get to stick around a little bit longer.
The difference between using AI for existing processes vs. novel health discoveries
Reuben Hall (12:33)
So we definitely see there is that fear and hesitancy around it. Every organization is at its own stage in the transformation journey. Some are really early on; they’re just starting the pilots and trying to figure it out. Some are further; they’ve done a pilot, and now they’re really rolling it out at scale and, like you say, moving very quickly.
Which of the large organizations that you’ve seen are doing AI well and are ahead of the curve so far?
Marco Smit (13:10)
Yeah. Let’s start off with thinking about what the AI transformation journey is, right? I think you can almost think about it as a two-by-two, where one axis is: what are you using AI for? Are you using AI for an existing process, like the example I just gave you, lit search and all that kind of stuff? That’s an existing process that you do today, and you use AI to now do it through an agent instead of a human being. So that is using AI for existing processes. Or
do you want to have entirely novel processes that take advantage of the strengths of AI? So first of all, what’s your objective, existing or novel processes? And the second axis is: your current supporting systems, your current data infrastructure, technology infrastructure, are they hurting or helping? Let’s take healthcare and life sciences. In healthcare, when you look at Abridge and Ambience and Suki and a lot of these companies,
they focus on existing processes for which the current systems are hurting, and they effectively say: your current systems are hurting, but with our AI, our AI platforms, we can actually make them help. We will take care of the hurt and turn the hurt into help. So you take existing processes, you put your AI in, and you don’t need to migrate;
we’ll do it all on the back end. That is where a lot of money is pouring in. I think that is a low entry point, a low floor, but also a low ceiling in terms of the benefits, unless you start connecting administrative steps to clinical steps. But I think we’re a little bit far away from that. On the life sciences side, I think there are two distinct categories.
When it comes to novel uses where your infrastructure is helping, I would say drug discovery and maybe commercialization are two areas where AI is doing fine. Drug discovery because it’s kind of a creative process; there are no limits, so the current infrastructure doesn’t really limit anything. It’s set up for experimentation, for broad divergence of hypotheses, et cetera. So I think drug discovery is
fine, with no big barriers there. On the commercialization side, it’s similar but different, in the sense that you have a lot of data, a lot of systems, which is great for the AI. So the AI can start finding missed pockets of opportunity. I think those are two areas where it’s going relatively smoothly. Of course, the biggest pain point is this: despite all this great drug discovery, 85% of drugs are still failing in clinical trials.
So that’s really where, ultimately... I remember when I was at Roche, Thomas Schoenacher, who later on became CEO of the whole company, talked about wanting to cut drug development time in half and double the number of drugs brought to market. That’s a really aggressive goal, and you can’t just use AI for existing processes and expect to hit that kind of goal. However, I’d say in general the industry is doing exactly that: they’re using it at best for existing processes. At Gesund.AI, we talked extensively with a pharma company that wanted to do annotation for endpoints.
That’s fully manual today in imaging and oncology, and they wanted to do it AI-assisted, but the FDA said AI-assisted equals black-box-assisted, so the Gesund.AI platform would open up the black box. But that’s an existing process; we’re simply making it a lot more efficient, about 90% more efficient. So it does matter, but you fit an existing framework, you fit within existing rules, you just need to get the FDA to accept it.
Marco Smit (17:28)
I would say that the supporting systems or enabling systems are maybe not hurtful, but they’re definitely not helpful, and there’s no easy fix there. So, drug development. And the last part on that comment: I think drug development has a lot of very conservative people, because ultimately drug discovery is about experimentation and commercialization is about optimization.
Marco Smit (17:55)
But drug development is where this potential success goes to either become a success in the market or die. And you do not want to be the guy who was experimenting with AI and made it die when it could have succeeded. So it’s an area rife with opportunity, but it’s also the most sensitive to get right. In terms of large organizations... yeah, go ahead.
Reuben Hall
Yeah, so just dialing in on one point there: AI is leading to drug discovery, but the success rate of those drugs in commercialization is no better than it was before?
Marco Smit (18:32)
Yeah. I think ultimately the process by which you’re trying to get to market and get to approval hasn’t changed. I mean, it’s changing around the margins, like what I said about AI-assisted. Or I remember speaking to someone, and I think I can say this publicly, who was at Genentech, and he said that before he came, it would take them eight weeks to get data from their own clinical trials, from their own sites. And he basically
automated and innovated, and it only took them a week. But that’s still the same process; it’s just simplified and streamlined. So my point is that you can change what you put into the pipeline, but if your pipeline is still managed the same way, you should not expect a structural change in results and success rates. And I think that is where change needs to come,
but it’s not there yet.
Reuben Hall (19:34)
So how can organizations address the bottlenecks, including the regulatory ambiguity, that are hindering progress?
Marco Smit (19:45)
Yes. And maybe just to finish up the point you mentioned last time, about which large companies, large organizations are doing well: I think Roche and AstraZeneca are investing more in this than most others. But it’s still early days, I think, even for them. And I don’t know if Insilico Medicine counts as a large organization or a small organization, but I think they’re doing a lot right. They are exactly at this point of:
they did a lot of discovery, and now they’re starting to get into clinical trials, phase one, phase two. I don’t think they’re in phase three yet. So I think that’s the answer to that question about which companies are doing well. And sorry, what were you asking? How can organizations
Four ways healthcare orgs can reduce regulatory ambiguity in AI
Reuben Hall (20:37)
How can the organizations address those bottlenecks you talked about?
Including, you know, getting to market, regulatory ambiguity. And sure, we’re discovering more drugs, but what needs to be addressed to increase the likelihood that they get through the trials, or to kill them before more money is invested in them, and just get more efficiency through the pipeline?
Marco Smit (21:10)
Yeah. So let me focus on the regulatory part first and then maybe the bigger picture. On the regulatory side, I think there is just a lot of ambiguity that’s going to continue. And as a result, right now the way we do AI development as an industry is very similar to drug discovery, or sorry, drug development. What I mean by that is,
we work on an AI model and discover if it’s working, right? We don’t care about FDA compliance, et cetera, and then we try to do clinical trials, and then we go to market. That is not going to fly in this regulatory ambiguity. So from the get-go, we need four things, basically. Number one, permeate FDA-grade track and trace across the entire life cycle, even when you start to experiment.
Put track and trace in there at the model level, data level, experiment level, and population level. Because ultimately you don’t know: the FDA has been announcing, I mean this in 2025, that they want to move from only evaluating the AI at the model level to also going more upstream and understanding what data you used, how you used it, et cetera. So you don’t know what’s going to happen
Reuben Hall (22:34)
Interesting.
Marco Smit (22:39)
over the next few years; this may change all the time. So what you should do is…
Reuben Hall (22:42)
But at least the FDA is adapting to the new landscape and trying to address how they’re going to handle this. But you’re right, it’s still ambiguous.
Marco Smit (22:52)
Correct.
Marco Smit (22:54)
So yes, the FDA is changing, and no one can predict. I think even people in the FDA cannot predict what’s going to happen a year from now, two years from now, three years from now. And if you want to build an AI business, particularly in healthcare and life sciences, you need to think on a multi-year timeline. So my first step is: permeate your FDA-grade track and trace across the life cycle so that no matter which direction it goes,
you will be ready for it. That’s number one. Number two, I think a lot of pharma companies, particularly the larger ones, avoid the FDA. They treat engaging with the FDA like a lawsuit, right? You don’t want to ask a question unless you know the answer, and that’s fair. But the FDA is trying to learn, and my experience, having been in multiple startups, is that the FDA does appreciate the unique expertise that smaller, innovative companies can bring to the table.
So I’d say engage the FDA, don’t avoid it. Of course you need to know how to do it; maybe you use partners for that and ask your questions through them. But engage, engage, engage, do not avoid. That’s number two. Number three, have an active and transparent evidence strategy. A lot of AI builders in healthcare and life sciences are focusing on
Marco Smit (24:16)
getting the best performance out of their AI model. But first, they’re not thinking about getting the widest population adoption for it, and they should think about that in terms of evidence. And second, they don’t really have an evidence strategy other than doing the minimum they need to get FDA approval. But FDA approval or FDA clearance is not enough. You need to get reimbursement, and even then, clinicians need to trust your AI and understand what your AI does.
I would say evidence is critical, not just for the FDA, not just for the payers, but, going back to patient impact, for actually making clinicians use your AI. So that’s the third part, evidence strategy. And the fourth part is: allocate real resources to dealing with this strategically. Don’t just be reactive. Don’t just see this as doing the absolute minimum to be compliant. Think about: how do I deal with regulatory risk? How do I influence and manage regulatory risk?
And allocate resources to that. So, four things. Number one, permeate FDA-grade track and trace across your life cycle to be ready for whatever direction the FDA goes. Second, engage the FDA, don’t avoid it. Third, have a more active and transparent evidence strategy. And fourth, allocate resources to dealing with this strategically; don’t just be reactive.
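[Editor’s note: As an illustrative aside, not something discussed in the episode and not Gesund.AI’s product, the sketch below shows one hypothetical way to record FDA-grade track-and-trace entries at the model, data, experiment, and population levels from the very first experiment onward. All class names and fields are made up for illustration.]

```python
# Illustrative sketch only: hypothetical track-and-trace records at the
# model, data, experiment, and population levels, kept in an append-only,
# hash-chained trail so later edits are detectable.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import hashlib
import json


@dataclass
class TraceRecord:
    level: str            # "model" | "data" | "experiment" | "population"
    entity_id: str        # e.g. model version, dataset name, cohort ID
    event: str            # what happened, e.g. "curated", "trained", "evaluated"
    details: dict = field(default_factory=dict)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


class AuditTrail:
    """Append-only log; each entry carries a hash chained to the previous one."""

    def __init__(self) -> None:
        self._records: list[dict] = []
        self._last_hash = "0" * 64

    def log(self, record: TraceRecord) -> str:
        entry = asdict(record)
        entry["prev_hash"] = self._last_hash
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = self._last_hash
        self._records.append(entry)
        return self._last_hash

    def export(self) -> str:
        """Serialize the full trail, e.g. for a submission or audit package."""
        return json.dumps(self._records, indent=2)


if __name__ == "__main__":
    trail = AuditTrail()
    trail.log(TraceRecord("data", "imaging-set-v3", "curated",
                          {"source_sites": 12, "modality": "CT"}))
    trail.log(TraceRecord("model", "lesion-detector-1.4.0", "trained",
                          {"training_data": "imaging-set-v3"}))
    trail.log(TraceRecord("experiment", "exp-0042", "evaluated",
                          {"metric": "AUC", "value": 0.91}))
    trail.log(TraceRecord("population", "cohort-us-oncology", "validated",
                          {"n": 1840, "subgroups": ["age>65", "stage III-IV"]}))
    print(trail.export())
```

The append-only, hash-chained design is just one way to make such a trail tamper-evident regardless of which direction regulatory expectations move; it is a sketch, not a statement of what any platform actually does.]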
AI startups to watch in the clinical trial and digital health space
Reuben Hall (25:38)
Excellent. All great points. Now, the AI landscape in healthcare and life sciences is exploding. There’s a new company every day, every week, filling all these specific niches, and there’s lots of excitement and promise around them. But for companies, it’s also just very confusing. Where do I start? How do I even evaluate
these different solutions? Are they even apples to apples, or are they covering different stages of the life cycle?
Which startups do you see as most promising for buyers of AI, and why?
Marco Smit (26:24)
Yeah. So let me point out four or five startups. And it is a very confusing landscape on all levels, right? Let’s acknowledge that first of all. I was recently talking to an investor who is working with a lot of companies that use the foundation models from the big companies, OpenAI and Anthropic, et cetera. And she was telling me it was astounding, even once they picked a foundation model, how the operating costs of these models were
fluctuating wildly on a month-by-month basis in a very unpredictable manner. So confusion is real, confusion is everywhere, and no one can avoid it, unfortunately. The first one is the company I already mentioned, Insilico Medicine, which is in the Cambridge, Massachusetts area. The reason I think they’re doing things right is that they are AI-native and very creative about the use of AI in drug discovery.
They are starting to get into the clinic. As we just mentioned earlier, the challenge is where do you go from here, right? So let’s see what they can do from here. They seem to be able to create a pipeline: find targets, build drugs, and then get them into phase one or maybe phase two. Let’s see where they can go from here and what they really are going to become at the end of the day. But Insilico does so much more right. Another
company in this industry that I like a lot is QuantHealth. QuantHealth basically does in silico trials, which means trial simulation. A lot of trials fail for reasons that are not inherent to the drug. So it’s not necessarily that the drug was not right, but, let’s say, the control arm performed too well, and you didn’t design your control arm carefully because it was kind of an afterthought; you were really focusing on your active arm.
And because of that, you kind of made life harder for yourself, right? In silico trials allow you to understand better how to optimize this conversion from drug discovery to getting to the market. So I think that is a good area to be in, and QuantHealth is a stellar team out of Tel Aviv.
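[Editor’s note: To make the control-arm point concrete, here is a toy Monte Carlo sketch, not QuantHealth’s method and with made-up numbers, of how simulating both arms before running a trial can expose design risk: if the control arm responds better than the design assumed, the chance of a clean readout drops even though the drug works.]

```python
# Toy illustration only: simulate two trial arms many times and count how often
# the active arm beats control by a chosen margin. All numbers are hypothetical.
import random


def simulate_trial(n_per_arm: int, p_control: float, p_active: float,
                   n_sims: int = 2000, seed: int = 7) -> float:
    """Fraction of simulated trials where the active arm beats control by
    at least 10 percentage points (a stand-in for a 'successful' readout)."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_sims):
        control = sum(rng.random() < p_control for _ in range(n_per_arm))
        active = sum(rng.random() < p_active for _ in range(n_per_arm))
        if (active - control) / n_per_arm >= 0.10:
            successes += 1
    return successes / n_sims


if __name__ == "__main__":
    # Design assumption: 30% response on control, 50% on the drug.
    print("Assumed control :", simulate_trial(100, p_control=0.30, p_active=0.50))
    # Reality check: the control arm performs better than expected (45%).
    print("Stronger control:", simulate_trial(100, p_control=0.45, p_active=0.50))
```
]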
Stepping a little bit away from life sciences, more into the in-between area, there’s a company called Prenuvo. They’re doing very interesting work, and I don’t know exactly what’s confidential, so I’ll stick to what I think is public information about them. They do organ-specific AI. Think about aging, right? I used to be in the IVF business, and one of the things
people love to do is figure out how old your eggs are: what’s the biological age of your eggs versus your overall biological age, right? If you think about that at the organ-specific level, your brain, various areas that can have cancer, they’re working on organ-specific AI, and that is very exciting, very powerful. That can ultimately refine the targets we’re going after and the clinical trial populations we’re going after. And with that, it should hopefully increase
your chance of success for drugs that you develop for specific organs, and most drugs, or many drugs, are specific to a specific organ malfunction. The other part, too, is that I believe in general, and you’ve seen it with Ezra, Function and others as well, this market segment of proactive health care instead of sick care is very important and has a lot of potential. People are sleeping on this,
because a lot of self-paid, consumer-oriented companies have gone bust, but that is because they were not doing meaningful interventions. Companies like Prenuvo do meaningful interventions; that’s why they have a hundred million dollars in revenue and are growing very fast. The fourth and fifth ones, and I’m going to say it like that, are Ketryx and maybe Norm Ai. Ketryx I know very well; they work on the intersection of regulatory and AI, and
Marco Smit (30:45)
they take so much out of this regulatory friction, using AI and building agents now, that it really helps companies focus on the innovation and the evidence and not on all the regulatory processes, which Ketryx basically takes care of for them, using technology, using AI. The reason I said four or five is that Norm Ai
is a company whose entire definition is: we will innovate at the intersection of regulatory and AI, but horizontally, not limited to healthcare. So I think that has a lot of potential too, but I don’t know enough about Norm Ai to have a meaningful assessment of them. Ketryx is healthcare-specific for now; maybe they go beyond healthcare in the future. But I think all of these companies I’m mentioning are addressing very specific bottlenecks that are holding back innovation, and they will allow innovation to come to market faster, better, more robustly, and with less friction and less cost than we have today. So, yeah.
How Gesund.AI is tackling the evidence generation bottleneck in clinical trials
Reuben Hall (32:02)
Excellent, so some ones to watch there. And what about Gesund? I know you’ve been very humble, not speaking about it too much, but I’m interested to know more. What problem does Gesund.AI solve for its customers?
Marco Smit (32:20)
Yeah. Yes, you’re right, it’s like the umpire can’t, you know, vote for his own team. But what Gesund.AI addresses, which is a critical bottleneck too, is, first of all, evidence generation. Evidence generation for clinical trials, FDA clinical trials for AI,
is still done in a very manual fashion. We’ve seen it even with pharma companies, like the example that I gave you earlier, where a pharma company was doing a large trial, this was around metastasis, and they had radiologists counting how many lesions, the shape, the size, all that kind of thing, manually. So the evidence generation is still very manual, very slow,
and very expensive. So Gesund.AI has built a fully transparent, very flexible, powerful platform with all the tracking and tracing at a very granular level, which allows you to do it faster and better and also improves your likelihood of success. One of the things that we see, to stick with the pharma company example, is that they basically put the CRO in charge of managing the trial.
So once a trial is done, they get the results, or maybe every six months they get an update. In our case, once the trial gets going, you are an active participant in terms of seeing what happens in the trial. You cannot influence what happens in the trial, and you cannot edit anything, obviously, but you can be an observer. And as an observer, you can spot when something is going wrong early, maybe when you’re 10%, 20% into the trial, and fix it before it’s too late. Maybe the instructions were unclear, maybe
the UI was a little bit fuzzy; whatever it happens to be, you can detect the problem and fix it, within the boundaries of what the FDA allows, before it’s too late. So you increase your likelihood of success. What Gesund.AI basically says is: evidence generation is very complicated, costly and high risk, and we will fix all of those things. We’ll streamline it, make it less risky, make it a lot faster. And that is very critical.
The first bottleneck for go-to-market is FDA clearance. The second bottleneck will be getting user adoption, and so they’re monitoring that as the AI expands. And then ultimately, I would say, you go upstream, as I said earlier, with this track and trace throughout the whole life cycle, which should start earlier. But oftentimes the challenge with the earlier part is that, let’s say,
there’s one customer who has, let’s say, 80 models at the earliest stage, of which maybe only 10 will go to FDA trials. So if I’m paying for 80 models to be fully tracked and traced, compared to only the 10 that I’m going to bring to market, my economics change, and my risk-reward changes. Right now, Gesund.AI addresses the evidence generation bottleneck for AI builders.
And then gradually it will move into other evidence areas that need to be addressed to build AI businesses and not just AI science projects.
Why AI governance in healthcare will define the future
Reuben Hall (35:48)
Okay, so zooming back out here, we’re talking to the leaders of healthcare and life sciences organizations.
How can they rethink their approach to AI and digital transformation within their organizations to really achieve scalable impact?
Marco Smit (36:11)
Yes. So I think there are a couple of important comments to make here, maybe three or four points. Number one is what people call AI governance, which is basically translating this shiny toy into real impact at scale. AI governance is going to become a differentiator: those who can crack that code versus those who cannot. I’ve seen pharma companies who said, well, we don’t want to take risks, so we just don’t do anything, or we just cut 90% of the opportunities. Well, not doing anything is not an option either. So AI governance, which is how you move forward without increasing risk, is going to be a differentiator. That’s number one. Number two, I think they need to recognize that transforming
your organization with AI is not about AI. It’s not about the technology; it’s about re-engineering how you create value and how you capture value. So think about: what am I trying to re-engineer? What bottlenecks were there in my value creation and my value capture that AI now removes, and how can I create and capture value in entirely new ways? That’s number two. And I think the challenging part is that this is a very dynamic
technology innovation wave. The part that is important, and this is difficult for large pharma companies in particular, is that we’re used to a model where we do strategy, we figure out all the constraints and all the objectives, we optimize our strategy, and then we optimize our execution. But that will no longer work. Strategy and execution is no longer the paradigm; it’s strategy, execution, re-evaluation. And the re-evaluation is: was our strategy right? Is our strategy still
right in the context of new technology developments, new AI developments, or do we need to update our strategy? Or the strategy is still fine, but the execution now needs to take advantage of new AI developments. So strategy, execution, re-evaluation instead of just strategy and execution. And along those lines, and tying back to the point about AI governance, think about your decision cycle times: are changes needed?
I gave you the example before: I was at CHAI, the Coalition for Health AI, and so many providers were talking about how they have all these safety committees and other committees. And when you look at the time it takes for these decisions to be made, it’s going to be, again, 18, 24 months. But what are you deciding on? Because the subject of what you’re deciding on, in terms of AI, is changing so drastically
Marco Smit (38:58)
every six months, right? So do you need to change your decision cycle times? You really need to think about that. And similarly, think about skills for this re-engineered tomorrow. If we’re going to re-engineer how you create value and capture value, what are the skills that you need in that future? What are the gaps? How do you approach filling those skill gaps? AI is inevitable.
It’s really a matter of how you co-develop and build co-intelligence with AI. It’s not about should I do AI or should I not; you’re going to be doing it. Do it in a controlled fashion, in a way that is deliberate and smart. And finally, the two points that connect all of this: stay flexible, don’t overcommit, but do get going.
AI is a contact sport. It’s not something you can plan out and put in a five-year cycle or a long-range planning cycle. Just like software development became more agile, now strategy and execution need to become more agile. And that links back to the very first point: AI governance will become the differentiator. That’s why I’m getting more and more involved with companies that are focusing on AI governance.
Reuben Hall (39:51)
Thank you.
Excellent. Well, I think that’s a great place to bring it full circle, with some really good points on the need for flexibility there. Thank you so much for joining me on the podcast today, Marco. Really appreciate it. And thanks to everyone else for listening to the Moving Digital Health podcast. If you enjoyed this conversation, please go to movingdigitalhealth.com to subscribe to the MindSea newsletter and be notified about future episodes.
Marco Smit (40:49)
Thank you. Really enjoyed the conversation.



