Rae Woods (00:02): From Advisory Board, we are bringing you Radio Advisory, your weekly download on how to untangle healthcare's most pressing challenges. My name is Rachel Woods. You can call me Rae. This week, we're going to be doing something a little bit different. I'm actually going to be handing over the mic to one of my longtime colleagues and friends, Eric Larsen, President Emeritus of Advisory Board, and also President of TowerBrook Advisors. (00:27): Eric, welcome. Eric Larsen (00:28): Rae, nice to see you. Thanks for having me. Rae Woods (00:31): You've been having these conversations with health leaders, executives, CEOs for a decade? Is it all right if I admit that to our listeners? Eric Larsen (00:40): Oh, my gosh, longer, longer. I've been at Advisory Board for something like 20-plus years. Rae Woods (00:47): Eric, what's the topic that you want to cover today? What's the big question you want to grapple with, with your guest and with our listeners? Eric Larsen (00:55): It won't surprise you. I want to talk about generative AI and understand it in all of its multidimensionality. Have John, who's sort of encyclopedic on this topic, demystify for us how we should think about gen AI across healthcare, across administrative simplification and care augmentation and drug development and discovery and consumer empowerment. What are the different categories of impact? I want to talk with him about how he's harnessing Mayo's huge repository of structured and unstructured data. I want to talk with John about emerging regulatory efforts that he's not just contributing to, but in some cases, leading. And so if we have a successful conversation, and knowing John, it's going to be fascinating, I'm hoping we can touch across the waterfront on gen AI and healthcare. Rae Woods (01:47): Well, Eric, I'm going to officially pass the mic to you and your guest and friend, Dr. John Halamka, President of the Mayo Clinic Platform. Eric Larsen (01:58): John, good to see you. How are you? I presume you're up at the farm. Dr. John Halamka (02:02): All is good. So we're in that sort of tweener period between winter and spring, which means that we're sometimes mud and sometimes ice. It's all fine. Eric Larsen (02:13): I love it. Well, look, you're one of my favorite people to talk to on a whole array of topics, but I'm particularly excited about this one. There's really an infinity of topics I want to ask you about. And what I like about you, John, is you've got this fascinating heterogeneous background, but it gives you, I think, this very interesting, almost ambidexterity. You're at the biggest incumbent or one of the biggest incumbents in the industry at the Mayo Clinic, but with your ambidexterity, you work with a lot of startups, and if I'm not misinformed, you've spoken in something on the order of 40 countries, demystifying AI, projecting how this is going to play out in the industry. (02:52): And so I think for a topic of this magnitude and complexity, I think it's fitting to start super high level at almost a societal, or I might even say civilizational level. Since November 30th, 2022, when ChatGPT was launched and had the fastest adoption in the history of technology, I think 100 million users within eight weeks or some astronomical number, it's really spawned almost a societal-level conversation about how consequential this technology is.
There are those who assert, and I subscribe to this viewpoint, that I think we're going to look back at this as one of the most impactful technologies in the history of human invention, perhaps one of the top 20 to 25 civilization-altering technologies on the scale of agriculture or written language or the wheel or the semiconductor. Let me ask you, is this hyperbolic? When you think about gen AI, how significant a breakthrough, or a series of complementary innovation breakthroughs, is this? Dr. John Halamka (03:59): You phrased the question well by saying this is a series of breakthroughs. I would sometimes describe gen AI as an overnight revolution, 50 years in the making. So think about it. I've been doing AI since the early '90s when I was a Lisp programmer, and back then, things were lists of knowledge or probabilities and such. Well, today in effect, think of generative AI as just a pinnacle of all these previous achievements glued together, where we're predicting the next word in a sentence based on billions of sentences, and we're doing it in a probabilistic way. So it has very realistic, human-sounding communication ability. I would maybe answer your question directly by saying this last 50 years has created a set of tools which will be transformational over the next decade. Eric Larsen (04:56): John, you said something I want to focus in on for a second. You said this particular breakthrough, or this particular culmination of 50 years of innovation, is predicting the next word. It's an autocomplete or an autoregressive sort of model. And there are those that are calling this a stochastic parrot or a glorified autocomplete. It may be a simulacrum of human reasoning or thinking, but it's really just a glorified prediction. It's probabilistic, not deterministic. So are you in the camp that this really is just a stochastic parrot, just predict the next word, predict the next phrase, and when you train it on trillions of tokens, you suddenly get some interesting answers? Or does this really mimic human reasoning, or is it approximating human reasoning and logic? Dr. John Halamka (05:45): Wow. See, this gets into a heady philosophical area, doesn't it? So ask yourself this question, what is human intelligence? What is human reasoning? So I would argue, I do what I do because I have 60-plus years of experience. I kind of know what works and what doesn't work. I know community standards. I know state of the art. Well, if one takes all the exemplars of human knowledge and human speech and human action, and puts them into a probabilistic model, isn't that kind of reasoning? So how about this? It's hard for a human mind to comprehend a model of 70 billion or a trillion parameters. So I guess I would answer by saying yes, it is an extraordinary mathematical achievement, but it's going to approximate reasoning because human reasoning is itself a kind of math. Eric Larsen (06:48): It's really interesting. I mean, this has been such a moment that's just captured the imagination. And unfortunately, there's a little bit of reductive behavior, everybody kind of going into their own armed intellectual encampments, some basically saying, ah, this is just an auto-predictor, and others saying this is actually going to be a superintelligence that may obsolete a lot of what humans do today. But I want to ask you just about your general optimism level. I mean, I love Silicon Valley for a lot of reasons, but one of the things is they come up with amazing summaries of the camps.
So there's the e/accs, the effective accelerationists, and the decels, the decelerationists, but my favorite new one that's emerged over the last couple of months is p(doom). You've been following this one? So P and then, parenthetically, doom. So it's basically the prediction of, is AI going to hit the singularity and destroy humanity? And what's your score, 0 to 100, that this is going to actually be a harbinger of humanity's destruction? (07:52): How optimistic are you about gen AI? Is it going to take us over and turn us into pets, or is this really going to democratize medicine and knowledge and education? Dr. John Halamka (08:01): So think about it, in my lifetime, right? Born in 1962, my automation in my home was a Smith Corona typewriter with no autocorrect. And so I spent a huge amount of my early career doing redundant work that actually had little value and took a vast amount of burden and overhead, and not creativity. So in my lifetime, this notion that I'm not going to have to use a manual typewriter, that I can dictate, and dictation will be transcribed and turned into understanding, wow, that's going to make me insanely more productive. I can do the things that are, what I call, practicing at the top of my license. And so I think enough of us are working on the guidelines and guardrails to believe it's all up to us, and the likelihood that this is going to be Terminator part 17 is very low. Eric Larsen (09:06): Yeah, look, I share your view. I'm constitutionally an optimist, and I think this is going to be an augmentation of human capability, not an elimination. But just to be a little provocative, all past automations have generally mechanized brawn, human brawn, or mechanized low-level cognitive work. If you kind of extrapolate forward, not GPT-4, but GPT-6, 7, 8, one could see that a lot of what constitutes knowledge work today, a lot of the cognitive, non-routine, analytic activities could be first augmented, but maybe even replaced by future iterations. Once we figure out hallucinations, and let's talk a little bit about that. But I'm not Pollyanna-ish that there's not going to be some dislocation, maybe even dis-employment. (10:03): And I recently saw a study that looked at something on the order of 750 occupations in modern, industrialized western economies, and then it disaggregated those 750 occupations into 19,225 tasks. And it estimated that between a huge number, 30 to 80%, you could drive a truck through that, but directionally, it's interesting, between 30 and 80% of those activities were automatable or eliminatable by GPT-4 and what we're projecting will be the capabilities in GPT-5. Aren't you the slightest bit worried about the dislocation that will happen to knowledge workers if the current direction of travel for the technology continues at its current pace? Dr. John Halamka (10:50): In the short to medium term, work will change. And I think that's okay. Do we really need humans to index the literature? I mean, it's a wonderful thing that they have for the last 100 years, but probably it's actually going to be more efficient to have that done automatically. So it's okay. It means those folks will be doing other, more creative things. In the long term, you're right, it's likely that a number of jobs, a number of positions, a number of workers will no longer be needed, but that will coincide with the demographic shift we're seeing in society, where birth rates are less than 1.5 in the industrialized world, and yet people are aging and needing services.
The economy will grow because people will do more creative and productive things. And when there are fewer people, we will get the services that are necessary because of the automation we've put in place. So generally, all good. Eric Larsen (11:54): It's all good. I agree. As long as we have enough time to adjust and adapt, and the productivity augmentation happens in a way that we can assimilate those changes. I mean, in the year 1900, 70% of the US population was employed in agriculture, and then by 1950 it was 4%, but you had two, three generations to assimilate that change. And so my nervousness is if this happens in a really accelerated way, will it be destabilizing? And it's encouraging to hear that the general sort of thought or consensus or emerging consensus around this is that we're going to be able to adjust. (13:24): I might ask you, just for our listeners, to demystify how you see generative AI applications playing out across categories in healthcare. And just to be a little declarative here, I think about it in four ways. I think about it in administrative simplification, I think about it in care augmentation, I think about it in drug development and discovery, and then consumer activation. And those are super broad, reductive categories. But as somebody who's been really a demystifier of gen AI to the healthcare community, how do you categorize the probable areas of gen AI's impact? And then if I could wedge in one adjunct question, the rough timing or sequencing: how do you see it playing out across whatever categories you want to put forth? Dr. John Halamka (14:15): So of course, at a high level, we do have to look at AI and divide it into probabilistic, the kind of things that we did with predictive AI over the last decade plus, versus generative AI. So remember, we're still going to see a lot of clinical decision support that is more of this predictive, I'm going to look at a test result or a combination of images, telemetry and history and predict disease or suggest a care journey; that won't be generative, that'll be predictive. And I feel pretty good about the math because you can verify the math against ground truth, was it right or wrong? And therefore, oh, for a given person with a given set of characteristics, is this likely to be a helpful algorithm? (14:59): With generative AI, as you suggest, I mean, it has more than just word prediction, that's a large language model. But in general, the idea of creating something new, whether that's a picture or a sound or a sentence, these capabilities based on looking at data of the past and predicting what a human would write in the future are very good, but not completely accurate, as you point out. And that is because there isn't sentience there. I mean, it's just, again, sort of patterns of the past. So if at 1:20 in the afternoon it writes something brilliant, and at 1:22 it writes something fictional, are you going to trust it to manage your healthcare for your family? So I've asked this question of all the big tech leaders, and they said, boy, there are some amazing uses for it, as you say, and we'll go through some administrative and discovery and consumer empowerment, but for now, probably medical decision making wouldn't be the one they would be comfortable with because of this issue of quality and accuracy. (16:11): So administrative, well, administrative tasks include summarizing conversations. Again, it's not transcription, it's saying we talked for two hours, what did we talk about? What did we agree to do?
What are the next steps? Well, a human could take a lot of time to come up with that. Generative AI does a remarkable job at creating a summary like that and even identifying next steps, or helping me write an email or helping me fill out a camp form. Well, don't you think you could train a generative AI to fill out camp forms? Yes, for sure. So summarization, email drafting, camp forms, revenue cycle, supply chain, all those are reasonable things because inaccuracy isn't going to create much harm. Eric Larsen (16:56): And by the way, even though these are really accessible, intuitive examples, we're still talking about an estimated $1 trillion in administrative spend in this country alone. So the consequence of just the application of gen AI to that administrative simplification, I think, is going to be pretty consequential. But please continue. Dr. John Halamka (17:15): Oh, well, no, exactly right. I mean, if we're looking at the US, which spends 18 to 20% of GDP on healthcare, and 1/3 of that is what I'll call waste, administrative overhead, it makes great sense. (17:27): Another area, as you point out, is that if you're looking for a new protein structure... Eric Larsen (17:34): Oh, absolutely. Dr. John Halamka (17:34): ... if you're looking for a new drug target, if you're looking for a potential off-label use, yeah, it's pretty good, because you can look at vast amounts of data and be able to sort through that data at a scale that a human would have a hard time with. Potentially problematic, but an exciting idea. (17:52): Suppose you live in a rural part of the United States, suppose you live in Sub-Saharan Africa or Northern India, and you have no opportunity to access a clinician at all. Eric Larsen (18:04): Yes. Dr. John Halamka (18:05): Let's imagine that... And of course I'm making this number up because no one's ever measured it. But let's just say doctors around the world are 60% accurate. Eric Larsen (18:14): Sure. Dr. John Halamka (18:15): They're humans, and they have variable amounts of training. What if you had a generative AI for consumers that was 70% accurate? Eric Larsen (18:24): That's right. Dr. John Halamka (18:24): You'd say, oh, my God, it's going to create a whole lot of mistakes. Well, true, but it's going to be better than the alternative that a consumer has in their area, so why not? Eric Larsen (18:35): Yes. Dr. John Halamka (18:36): And so as you say, this notion of consumer empowerment, at least helping you directionally to solve a problem, is absolutely a great category. Eric Larsen (18:45): I want to dive into the clinical augmentation category. How is this going to play out from a clinical decision support and almost an AI diagnostic assistant standpoint going forward? And do we run the risk in coming generations of maybe encroaching on physician judgment? How do you see this playing out? Dr. John Halamka (19:10): You raised the point earlier that change is hard, but if you have enough time to change, you'll recognize the positive evolution rather than the replacement of what you do. It's making you do what you would like to do, better or faster. And let me give you an example. I know this is going to sound like an absurd example, but it's a true story. So I'm a toxicologist, and I was consulted a year or two ago about a case in California. A young woman was running through a Walmart parking lot at 3:00 in the morning unclothed, and an emergency physician evaluated this patient and said, oh, I better run a toxicology screen. And the toxicology screen showed cannabis.
And he said, oh, well, that explains everything. Now, again, every one of us has different human experience, but let's just say in my 40 years as a clinician, I have not seen cannabis cause that kind of behavior. But this clinician discharged the patient thinking it explained everything. (20:12): I was called in to consult on the case. The patient had, in a Boston area hospital, a CT scan and a lumbar puncture that showed she had a horrific case of meningitis, and that caused, of course, all this altered mental status. Well, when she was better, I asked her, I said, "What's the cannabis story?" And she said, "Well, you know, I had photophobia and a stiff neck and a headache. And I asked my roommate if there were any analgesics in the dorm," and the roommate handed her some gummies. The reason I tell you that story is if you look at a million young women with altered mental status, cannabis wouldn't rise to one of the top 100 explanations for that kind of behavior, but meningitis would probably be number one or number two. And so what if an AI said, oh, wow, the patient in front of you, their phenotype, their genotype, and their exposome, I've looked at a million people like that, here are the things to consider? Wouldn't a doctor say, wow, I almost missed that? Eric Larsen (21:14): Absolutely. Dr. John Halamka (21:15): And so in effect, you see that augmentation is going to be a kind of safety net as well as reducing burden. And you'd think over time, most people would say that's a good thing. Eric Larsen (21:28): Well, I absolutely think it's a good thing. (21:31): The nervousness I have, and you've written widely on this, is that AI is not going to replace doctors or nurses, but doctors and nurses who use AI may replace those who do not. My question is how is this going to alter what right now is the most respected profession in the United States? Doctors are annually number one, nurses number two, military veterans number three. It's also the highest paid occupation in the United States. Nine of the top 10 highest compensated occupations in the US are medical, 25 of the top 30. And so you've got this revered, appropriately so, profession, and now you've got this emerging technology that will augment but may encroach on some of the tasks that the profession does. Right now, the average primary care panel is about 1 to 2,000. Are we entering a period where we may see 1 to 10,000, 1 to 15,000? (22:32): Right now we spend 5 to 7% of the US healthcare dollar on primary care. If we're enabling primary care physicians to be better diagnosticians and manage care longitudinally and close gaps in care, won't we see some territorial encroachment onto medical specialists? And what are the implications of that? If you play it through, again, I keep returning to this, not GPT-4, but GPT-6, 7, 8, how is this going to impact the hierarchies, the practices, the strict segmentation among and between specialties? Few are better positioned to comment than you, especially being at the Mayo Clinic. Dr. John Halamka (23:12): So do you know that a lot of what I do as a human, not as president of Platform, but as a human, is help those in society navigate their healthcare? I mean, isn't this a funny thing? You've probably been asked by friends, family members, who do I see? Where do I go? What are the possibilities? It's a very challenging thing to do. And of course, I have to take my 40 years of experience and make a best guess.
What ends up happening at Mayo Clinic, which has 7,000 specialists, is that matching the right patient with the right specialist at the right time in the right setting for the right care is just really tough, because you can assume that going forward, especially with the demographic shift I mentioned, there are going to be fewer and fewer specialists to go around. So the key is, AI will be kind of like Waze for healthcare. (24:04): A million people drove from point A to point B and got there safely at low cost and in good time. Well, what if I want to go from sickness to wellness? What's the route? And so I don't think that humans, given the limited resources that we're going to have, to your point, not enough primary care, not enough specialists, are going to be able to do it, unless we are augmented by a Waze for healthcare telling you what stops along the journey are most helpful to you. If you woke up with a headache this morning, maybe your first intervention is to take some ibuprofen, not call a neurosurgeon, but at the moment, you're going to end up calling a friend, and maybe the friend will be an AI doc in your pocket going forward. Eric Larsen (24:49): There's an adjunct here I want to ask you about, talk about the healthcare data conundrum. Most of this data has been inaccessible; now, with natural language processing, presumably much more of it is accessible. But there are questions of data sovereignty and safety. Just give us a little bit of a primer on data. Dr. John Halamka (25:09): If you look at the history of the electronic health record, a lot of the data was structured, things like diagnoses used for billing. So let's just say that I have a little red dot on my finger. Well, it turns out that's a turkey bite. Eric Larsen (25:26): Literally, on your farm. Dr. John Halamka (25:28): Yes. There is an ICD-10 code for turkey bite, believe it or not. No physician would ever use it. The physician would write abrasion. Well, I mean, again, your listeners probably don't know this, but there are a lot of odd bacteria that live in the mouth of a turkey. So it's actually relevant that I was bitten by a turkey and not simply scratched by a barbed wire fence. Different kinds of differential diagnoses. So the problem is that we have a data integrity problem: the full story isn't recorded in the structured data. We have data inaccuracy. The Social Security number is typically wrong about 10% of the time in the medical record because data is transposed, or it's the middle of the night and the triage clerk is trying to get an answer out of you while you're in abdominal pain with a burst appendix. Can you remember your Social Security number? (26:29): I tell you that so you'll assume a lot of the data is wrong, a lot of the data is incomplete, a lot of the data isn't coded. And so the question is how do you turn that inherently flawed data into wisdom? And part of what you do is grab the narrative, because often in the text it would say, oh, patient comes to us today after being bitten by a turkey. And so hence, the narrative, I would argue, is 80% of the value of the data for training. But there are multiple problems with taking a narrative, which has got a lot of run-on sentences and a lot of ambiguity, because doctors don't necessarily write their sentences with the most clarity, but it also has some privacy issues. So what if there's a note that is totally de-identified? It has no name, no address, no phone number, it just starts off, "This former president of the Advisory Board..."
(27:30): Well, I would guess there is probably more than one former president, but probably there aren't 10. And so the problem with using this data is privacy protection. So what Mayo has done, to your point, is we've used machine learning and natural language processing to read billions of notes and to do what we call hiding in plain sight. Instead of saying this former president of the Advisory Board, it might say this former healthcare leader or something like that, that is, something with a bin size that's so large that no one's ever going to be able to re-identify it. And if you do that on large enough amounts of data and capture all the richness of the text, well, we did it on 10 million patients at Mayo and 15 million patients at Mercy. And when you look at UHN in Canada, Albert Einstein in Brazil, Sheba, Singapore, Korea, Denmark, it will be more than 100 million lifetime experiences de-identified and privacy protected for training models, which is going to start to get us, we hope, models that are fair and appropriate and valid. Eric Larsen (28:40): That resonates with me. And again, there are few more authoritative voices on this than the Mayo Clinic. And your database, I would surmise, is one of the richest and probably most sanitized, normalized ones out there. (28:55): But my question goes to, if you have this huge, valuable semi-structured, unstructured, structured data repository, suddenly you've got a new technology through natural language processing that can take a lot of that unused data. And by some estimates, 80% of healthcare data goes unused. Suddenly, the surface area of available usable data just increases exponentially. That's enormously valuable. It's going to make the factuality higher on the foundation models. It's going to make the foundation model itself that much more valuable. (29:29): Does Mayo believe it needs to own that foundation model, to pre-train and run inference on your own foundation model? Or can you partner with either a big tech purveyor of a model or an open AI model and fine-tune it? And I guess my question is appropriability: who benefits from that? Does Mayo benefit from it? Does it accrue asymmetrically to the owner of the foundation model? Do you need to own your own foundation model? This is a question I can only ask of probably one or two health systems in the world because it's going to be inaccessible to everybody else. How would you answer that? Dr. John Halamka (30:06): Mayo believes that we have to worry about such things as quality, accuracy, privacy, and so how can we, in 2024, and it's a 2024 problem, begin to explore that so that what we produce could be generally beneficial to patients globally, that sort of AI doc in your pocket, or specialty-trained LLMs that will help a cardiologist or radiologist? So we have a whole series of LLM projects we're working on. So as you suggest, with Cerebras, we have their hardware in-house. We'll be able to generate a new 7 billion parameter model every week based on Mayo Clinic de-identified text data. And might that model have a much better interpretation of fact and less hallucination? We don't know. We're going to try. And we are working with each of the big tech companies on various kinds of multimodal large language models, asking how do they work? What are the risks? So we are going to, in 2024, do all these things. And I will tell you, by the end of the year, I'll have a better answer for you as to who needs to own what, who benefits from what, and what are the risks.
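A minimal sketch of the "hiding in plain sight" generalization Halamka describes, assuming a simple rule-based substitution: rare, potentially identifying phrases are swapped for broad categories with large bin sizes. The phrase table, function name, and example note below are hypothetical illustrations; Mayo Clinic Platform's actual pipeline uses machine learning and natural language processing at far larger scale.

```python
import re

# Hypothetical generalization table: low-bin-size phrases that could point
# to one individual are mapped to broad categories shared by many people.
GENERALIZATIONS = {
    r"former president of the Advisory Board": "former healthcare leader",
    r"bitten by a turkey on (his|her|their) farm": "injured by an animal",
    r"professor of informatics at [A-Z][\w\s]+": "academic researcher",
}

def hide_in_plain_sight(note: str) -> str:
    """Replace identifying phrases with high-bin-size generalizations."""
    for pattern, replacement in GENERALIZATIONS.items():
        note = re.sub(pattern, replacement, note, flags=re.IGNORECASE)
    return note

if __name__ == "__main__":
    note = ("This former president of the Advisory Board presents today "
            "after being bitten by a turkey on his farm.")
    print(hide_in_plain_sight(note))
    # -> "This former healthcare leader presents today after being
    #    injured by an animal."
```

The design point is bin size: every substituted phrase should describe a group large enough that the note no longer points to any one person, while the clinically relevant narrative stays intact.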
Eric Larsen (31:27): I think you answered it rightly. There aren't any real definitive answers today. It's very speculative. But one would think that with the volume and the quality and the heterogeneity of the data that Mayo Clinic has, you guys could build a pretty accurate foundation model, a multimodal model that would generate a very valuable output. Not that this is what's animating you, but there's a monetization opportunity in that for someone. (31:58): The other question is, if you do partner with a big tech player, how confident are you that you're not going to see data exfiltration? Certainly not intentional. We both know these players. They're honorable men and women, and they're trying to do the right thing. But the sort of irreducible problem with foundation models is we can't explain them. We can explain what they do, but there's this sort of mechanistic interpretability problem, so we don't know if Mayo Clinic data is going to stay secure. Can you talk about data behind glass? Because that's a concept I know you've been popularizing, and I thought it was an interesting construct. Dr. John Halamka (32:33): The data behind glass concept is as follows. I trained in informatics in the early '90s. One of my close cohorts, Latanya Sweeney, was a privacy expert at MIT in the early '90s who was able to re-identify the de-identified state healthcare data by simply merging data sets together. Eric Larsen (32:55): Yes. Dr. John Halamka (32:56): So Mayo has said, well, de-identified means we've replaced the job roles, geographies, familial relationships, and of course removed all the usual HIPAA identifiers, but then we prevent linkage with other data sets. We prevent exfiltration. We do not allow these various attempts at re-identification because we host the data in a cloud container. We audit, we control, and you sign data use agreements before you try to develop any products using the de-identified (DID) data. So that's the notion. Now with LLMs, we're just not totally sure, even if it's de-identified and it is kept in a cloud container and somebody brings a model and then the model goes out to the world, will the model potentially be able to re-identify a person? And that's why, for the moment, we are hosting all of the LLM models that we are creating internally, so we can control the inputs and the outputs until we know more. Eric Larsen (33:59): I think that's wise. I mean, until quite recently, we thought Bitcoin and blockchain were also unbreakable or non-traceable. And I think we're going to find that there is a lot of ability to re-identify de-identified data, and I think we're going to see data leakage, data exfiltration. I applaud Mayo Clinic for the vigilance and just how thoughtfully you're proceeding, and I'm excited to see how this plays out. (34:22): John, can we talk a little bit about regulation for a minute? And again, we're truncating what could be a much longer conversation, but you have been not only a commentator, but really an organizer and an advocate for thoughtful regulation, and technology of this significance becomes a geopolitical issue. And we're seeing the EU take the lead on regulation. I think there's a lot of noise here in America. I worry that this may be a pattern of America innovates, Europe regulates, China appropriates, but we can have a geopolitical conversation down the road.
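A toy sketch of the "data behind glass" controls Halamka outlines above: de-identified records stay inside a governed container, access requires a signed data use agreement, every query attempt is audited, and only aggregate results leave. The class and method names here are assumptions made for illustration, not an actual Mayo Clinic Platform API.

```python
from datetime import datetime, timezone

class GlassContainer:
    """Toy model of 'data behind glass': records never leave the container,
    access is gated by a signed data use agreement, and queries are audited."""

    def __init__(self, records):
        self._records = records      # de-identified records, never exported
        self._signed_duas = set()    # parties with a signed data use agreement
        self.audit_log = []          # who asked, and when

    def sign_dua(self, party: str):
        self._signed_duas.add(party)

    def query(self, party: str, predicate):
        """Run an approved computation inside the container; only the
        aggregate count leaves, never the raw records."""
        self.audit_log.append((datetime.now(timezone.utc), party))
        if party not in self._signed_duas:
            raise PermissionError(f"{party} has no signed data use agreement")
        return sum(1 for record in self._records if predicate(record))

if __name__ == "__main__":
    container = GlassContainer([{"dx": "meningitis"}, {"dx": "abrasion"}])
    container.sign_dua("model_developer")
    print(container.query("model_developer", lambda r: r["dx"] == "meningitis"))  # 1
```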
But give us your take on what we, as a nation, should do when it comes to responsible AI, without over-engineering this or allowing regulatory capture or somehow impeding the pace of technological advancement. Dr. John Halamka (35:13): The regulation that we're looking at, or subregulatory guidance as it's called, is to say, could you ensure that every algorithm used for patient care not only goes through FDA when appropriate, but has what's called a data card and a model card? What data was used to create it? What performance characteristics can you expect? Who's it going to work for and not? Because that's going to build transparency, and that will lead to a set of what I'll call confidence or credibility. And so hence, you're right that we have to be careful not to overregulate and quash innovation, but at the same time, we have to ensure credibility. And so every conversation I have in Washington is about creating assurance laboratories to create these data cards and model cards, to build transparency and a sense of repeatability and consistency, to build credibility. Eric Larsen (36:09): That makes a lot of sense. And obviously, we're going to be super vigilant on this. Who knows? We could have a watershed sort of announcement here in just the next couple of weeks. And of course, the Advisory Board's going to be reporting on this and hoping to demystify it. (36:23): I started off my comments earlier today by saying that one of the things I really respect and admire about you is your ambidexterity between working with incumbents like the Mayo Clinic and insurgents like startups. And I think you've been very prominent in Silicon Valley as a voice for entrepreneurs and technologists and engineers. And Mayo has been very active in backing startups, either taking warrants or putting direct capital in or incubating startups. And I understand you even have a program, I think it's called Accelerate, that gives emerging companies access to the data to refine their algorithms and get educated on FDA clearances and the rest of it. How are you thinking about what an incumbent like Mayo Clinic should do versus what you would ask startups to do? Dr. John Halamka (37:13): Right. So ask yourself: an academic medical center with 80,000 employees is really good at clinical care and clinical expertise. And as you suggest, it might have very rich data sets, but is an academic medical center the best place to create a breakthrough software innovation, manage a product lifecycle, take a product through an FDA process? So this really has to be a partnership. And exactly as you say, what we look for is the energy and enthusiasm of entrepreneurs who aren't constrained by the status quo, who are willing to push the envelope, but need the clinical expertise and need de-identified data access to create a product or service. We all know that 90% of these will fail, but some of them are doing remarkable things and growing extraordinarily fast. And I'm careful, because I don't endorse any product or service or company, but we have certainly seen in our Accelerate cohort a number of companies get to Series B funding in excess of $50 million and FDA approval on products, purely because they had the opportunity to validate them in a clinical setting with clinical data to ensure they were effective. Eric Larsen (38:37): So there's certain things Mayo Clinic needs to do, and there's certain things that enterprising startups need to do, and it's a symbiotic relationship.
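A minimal sketch of the data card and model card idea Halamka raises earlier in this exchange: a structured record of what data built an algorithm and what performance, for whom, a user can expect. The fields and values below are illustrative assumptions rather than a mandated schema; in his framing, assurance laboratories would populate and verify cards like these.

```python
from dataclasses import dataclass, field

@dataclass
class DataCard:
    """What data was used to create the model."""
    sources: list               # e.g., de-identified notes, imaging, telemetry
    patient_count: int
    date_range: str
    known_gaps: list = field(default_factory=list)  # under-represented populations

@dataclass
class ModelCard:
    """What performance characteristics a user can expect, and for whom."""
    intended_use: str
    data_card: DataCard
    metrics: dict               # illustrative placeholders, not real results
    validated_populations: list
    limitations: list = field(default_factory=list)

example = ModelCard(
    intended_use="Flag possible meningitis in patients with altered mental status",
    data_card=DataCard(
        sources=["de-identified clinical notes"],
        patient_count=10_000_000,
        date_range="2010-2023",
    ),
    metrics={"sensitivity": 0.91, "specificity": 0.87},  # made-up numbers for illustration
    validated_populations=["adults 18-65"],
    limitations=["not validated for pediatric patients"],
)
print(example.intended_use, example.metrics)
```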
And I'll close by saying I don't think a lot of health systems have figured out the right equilibrium here. The last data I saw suggested that it's 23 months for a hospital from signing a contract with a digital health innovator to actually deploying the solution. So 23 months post signature on the contract. I think Mayo has been a real pioneer here, not just in AI, I like what you did with Medically Home and others, but for a future discussion, John, we should disentangle that, because I think there are some useful lessons for others. Dr. John Halamka (39:14): Well, how about this? We've taken it from 23 months to 23 hours, and that was purposeful, and the end result is an agility to innovate. And if there's anything that I hope in my career, when I'm 30 or 40 years older than I am now, that I can look back on, it's that I made a difference. I am always happy to speak with you and this audience because the next 20 years of my life will be about mentoring those who will replace me. Eric Larsen (39:45): I'm grateful to you, grateful for our friendship and collaboration, and I know our listeners are going to enjoy this dialogue. So thanks for today. Dr. John Halamka (39:52): Thank you. Rae Woods (39:58): Eric, it was so fun for me to listen to your conversation with John. I know that you are also spending a lot of your time thinking about the rise of generative AI, so I want to put you in the hot seat before I let you go. What's the one kind of takeaway or insight that you want to leave our listeners with? Eric Larsen (40:21): I would say, I do think this is one of the most consequential technologies in the history of human invention, and I think that there are those who are deprecating some of its capabilities now who I think are going to have a reckoning in the coming years. And I love John's optimism. I am quite concerned. Rae Woods (40:44): Welcome to the dark side. This is where I live. Eric Larsen (40:48): But I do think that the transformation we're talking about is going to happen with much more acceleration than I think we would want. It's not going to take generations for this to unfold. And so I think as a society and as a group of healthcare leaders, it's incumbent on us to really start becoming facile in understanding the technology and then becoming a little bit of futurists, or just speculating on how this could play out. I mean, John talked about probabilistic versus deterministic, and that's an important distinction in generative AI. I think we've got to really think probabilistically going forward. If this thing accelerates and achieves proficiency, we figure out the hallucination issue, we get the factuality down. How are we as knowledge workers, and I'm talking about consultants and strategists and financial planners and doctors, how are we going to be thoughtful about this? I think this is going to be a deep societal, not just evolution, but even revolution. And I think John helped illuminate some of the turns as to how this might play out. Rae Woods (41:54): Well, Eric, thank you for bringing lessons from the C-suite to Radio Advisory. Eric Larsen (42:00): Rae, thanks for having me. Rae Woods (42:23): If you like Radio Advisory, please share it with your network, subscribe wherever you get your podcasts, and leave a rating and a review. Radio Advisory is a production of Advisory Board. This episode was produced by me, Rae Woods, as well as Abby Burns, Kristin Myers, and Atticus Raasch.
The episode was edited by Katy Anderson with technical support provided by Dan Tayag, Chris Phelps and Joe Shrum. Additional support was provided by Carson Sisk, Leanne Elston and Erin Collins. Thanks for listening.