Brian Ennis: Fear kills adoption faster than regulation does, right? We're in a reactive paradigm in current validation methodologies where we say, prove it's safe before I use it. Whereas you turn that around and you say, well, design it safely from the start. Narrator: You are listening to Augmented Ops, where manufacturing meets innovation. We highlight the transformative ideas and technologies shaping the front lines of operations, helping you stay ahead of the curve in the rapidly evolving world of industrial tech. Michelle Vuolo: Welcome to the Augmented Ops podcast. Today we are gonna talk about AI at the crossroads of regulated industry, a popular conversation these days. I'm really excited to get to talk to a couple of industry experts, Brian Ennis and Martin Hyman. I'm gonna let you guys say hello and introduce yourselves. Brian Ennis: Awesome. Thanks, Michelle. Appreciate it. It's nice to be here. So let's see. Brian Ennis, Chief Quality Officer and co-founder here at Sware. We're a validation platform, a cloud-based solution, and we're on the AI journey. I've been a technologist, an enterprise architect, my entire career, 25 years now, working within regulated industries. I had the great opportunity of being on the very early team that launched the Veeva Vault product. I've had a lot of fun working cross-industry, so I think at this point I've probably worked with a little over 200 life sciences companies on IT and quality strategy. AI was a surprise for me, because when we started Sware, we went out to really digitize and automate the validation process, right? So equipment, computerized systems. We built a platform called Res_Q for that. When I originally started, we thought, okay, we're gonna have a lot of data, and we're gonna have a lot of automated testing and things. But AI has moved just super fast, right? So I made the conscious decision a couple of years ago, going back about two and a half years now, to move into a role really focused on innovation. I've spent those two and a half years working across the industry on how we're gonna adopt AI in safe and responsible ways, and we're building the tool set to help manage that. I think, really, as an industry we have to learn how to ask the right questions. We have to learn how to make sure AI can be trusted and used in responsible ways. And, just like the transition to the cloud, the questions are really paramount right now. So for me, I'm a heavy AI user and heavy on the exploration of AI in our industry, and I've really spent the last two years of my career just focused on this, because it's been such an amazing transformation and it's so important. Michelle Vuolo: Awesome. What about you, Martin? I know you've been on the more technical side and have some deep data knowledge. How did you get to being an AI expert? Martin Hyman: Yeah, thank you very much for asking, and I really appreciate being here. Thank you very much for the invitation. You asked how we ended up working with AI, and for me it was really how I started, working with AI. I am actually a business mathematician by education, focusing on probability theory, on statistics, on optimization, which makes you quite prone to then do this professionally. I've actually done this for all of my career; however, I started in another industry, in financial services, where I served for five or six years and where I saw things not going quite right, where things needed to be fixed.
And it was like five or six years ago when I then made the transition to the life sciences, where I saw, maybe we can do this better here in this industry, learning from the past, learning from other industries' failures and mistakes perhaps, because here it's ever more important, as what we do is for the patients in the end. So I joined the ISPE community, the GAMP community in particular, which was really my first starting point and touch point with the life sciences industry, and made my way from article to article to Good Practice Guide, and am now happily co-lead of the ISPE GAMP AI Guide, all in just a couple of years, which is one of the amazing things in AI: things are going really so quickly. And now I'm doing this for the life sciences, supporting clients in compliance strategies, validation frameworks, wherever advisory on AI is needed. And definitely a lot of that is needed now, to bring what we have crafted together as a community in the life sciences really into practice. Maybe this podcast gives a little bit of inspiration on what's possible, and maybe where some challenges are still sitting. Michelle Vuolo: Yeah. That's great. So Martin, it's just so funny, because I think you've really highlighted the crux of what we're talking about, right? It's the crossroads of this, what we're calling amazing, technology with compliance and regulated industry. It's an inflection point for where I am too, having spent most of my career in life sciences being more of a conservative thinker, but now being in technology and really wanting to not just jump on the AI bandwagon, but to understand how we can do this compliantly. So I find this is a really interesting place to be. Without wanting to just jump on the bandwagon, like I said, or hop on the hype curve or whatever, let's really try to understand where real value is created. So why don't we ground ourselves first? Where are you both seeing really great benefits from this technology? Because, let's face it, let's not just use this AI because it's cool, hot, sexy, whatever, but really because it's creating value for patients and for the manufacturers who create drugs and products for patients. Martin Hyman: So for me, really, one important message is that AI is not just one thing. It's really a powerful set of tools, where we have this range from traditional analytical AI, which of course still plays a very important role: think about visual inspection, think about pattern recognition, finding irregularities, and so on. But then this got a whole new game and nuance as we are now in this age of generative AI, even agentic AI, which of course became the hot topic of 2025. That actually helps us in getting more of the fuzziness better sorted, all of the fuzziness, all of the loads of information that we are dealing with, now that we have these very capable models available. It starts with these "chat with your SOP" type use cases that are now coming up all over the place, which are sometimes not actually that easy to implement and get right, but then there's also the more complex stuff: how do we use this for root cause analysis? How do we get the right ideas for investigations, for CAPAs? And how do we control it in a way that we do not repeat our mistakes from the past, when we haven't chosen the right CAPAs in the first place?
So these are now some areas where I see this fusion of heterogeneous and multimodal data, with text and process parameters possibly also coming together, with very powerful models that we still need to harness in a way that actually brings the benefits. But it's good that people are exploring this, one way or the other. Brian Ennis: I love the point that AI is not just one thing, right? It is a collection of tools and technologies that can be applied. So just to throw some terms out there: we have generative AI, which is just creating content, whether it be the heavy use of generative AI for email summaries, PowerPoint presentations, and other productivity tools. Retrieval-augmented generation, which is fancy search, right? Being able to take data and create some context out of it. You've got agentic AI, which has a certain level of autonomy; I tend to think of it as a virtual worker or a virtual person in some capacities. And then you've got surrounding and underlying layers of machine learning and deep learning, which might be part of another solution, but might also be used discretely in something. I think it's important that we're talking about regulated use cases here, right? Every company's using this in the non-regulated, entrepreneurial sense right now, around meetings and general productivity. But for where we are, just to be glass-half-full: where we're seeing success is narrow, well-defined use cases, because underneath the success of AI, all the stories, is good-quality data. That is the fundamental thing that helps AI run really well. We can talk more about the rise of agentic AI and how that's gonna help with that later. But for what I see now: predictive maintenance in GMP manufacturing facilities, where you've got historical sensor data. You've got supply chain on the opposite side, right? Demand forecasting, supply planning, especially around biologics; companies like that are using it well. You might see some automated deviation triage or quality signal detection within production manufacturers. But if you think about it, underneath are two major things. One is that all those examples have structured data which is well contained and can be well understood. The other is that the use of AI right now is around augmenting decisions, not making final decisions, right? For GMP it really is very much a productivity tool set, but the key thing here is that these are narrow use cases where AI is being applied. It's not a broader, holistic LLM that you're throwing at things to see what sticks, like it's a computer in the sky. It is really narrow right now where we see success. That said, 70-plus percent of companies within our industry are evaluating this in a serious way. But I think we're so early that everyone's trying different things, right? There's no one use case that everyone's looking at and learning from. We're seeing companies experiment in many different ways, and that's exciting, but it also means there aren't a lot of data points to compare how certain methodologies or providers might be driving the right outcomes.
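To make the "fancy search" idea concrete, here is a minimal, illustrative sketch of the retrieval-augmented generation pattern Brian describes. The document list, the keyword-overlap scoring, and the `call_llm` stub mentioned in the comments are assumptions for illustration only; a production system would use embeddings, a vector store, and a real model behind this same shape.

```python
# A minimal, illustrative sketch of retrieval-augmented generation:
# retrieve the most relevant SOP passages for a question, then hand
# them to a model as grounded context. All data here is hypothetical.

def tokenize(text: str) -> set[str]:
    return {w.strip(".,?").lower() for w in text.split()}

def retrieve(question: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the question."""
    q = tokenize(question)
    ranked = sorted(documents, key=lambda d: len(q & tokenize(d)), reverse=True)
    return ranked[:k]

def build_prompt(question: str, documents: list[str]) -> str:
    """Assemble retrieved passages into grounded context for the model."""
    context = "\n".join(f"- {d}" for d in retrieve(question, documents))
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        f"context, say so.\n\nContext:\n{context}\n\nQuestion: {question}"
    )

sops = [
    "SOP-014: Deviations must be logged in the QMS within 24 hours.",
    "SOP-022: Cleaning validation is required after every product changeover.",
    "SOP-031: Annual review of supplier quality agreements.",
]
print(build_prompt("How quickly must a deviation be logged?", sops))
# A real system would then send the prompt to an LLM: call_llm(prompt)
```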
Martin Hyman: But really, what binds us all, as we are in the regulated industry, is the necessity of qualification, of validation, of doing the paperwork right, whether it's on paper or paperless, whatever you choose. And here I do see quite a substantial use case, one that of course causes lots of effort. With that, I also see a great chance in applying large language models and generative AI to streamline, not only to speed up, but really to streamline, how we actually approach validation activities themselves in a structured way. How do we get from a validation plan to a validation report in an efficient, streamlined way that actually makes good use of all the principles we have been advocating for years in the GAMP world: risk-based approaches, scalable lifecycle activities, critical thinking? Which again brings in what you said, this human-in-the-loop, this human control aspect, so that it's more of an augmented, supportive way of working. On the other hand, it also changes somewhat the approach that we need to take as, say, a validation person in this instance: how do we actually manage, how do we actually design this process? There's always this shift of, what's my perspective now in using this technology? What's my role? What are my responsibilities? What do I need to know to actually be able to spot hallucinations, or to recognize when we hit some boundaries or limitations, or when information is not present for the large language model in a RAG approach, for instance? So I think this use case is very interesting, as it shows tremendous capabilities, but also some of the challenges we have in terms of cultural change and change in ways of working, to actually get to a point where we achieve better efficiency, streamlining, and effectiveness at scale.
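A minimal sketch of the human-in-the-loop control Martin describes for AI-assisted validation drafting: an AI-generated draft can never become effective without a named human reviewer editing and signing it off. The statuses and field names are illustrative assumptions, not any particular platform's data model.

```python
# Illustrative human-in-the-loop gate for AI-drafted validation documents.
# Statuses and fields are hypothetical, not a real platform's API.
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    AI_DRAFT = "ai_draft"      # generated, untrusted
    IN_REVIEW = "in_review"    # assigned to a human reviewer
    APPROVED = "approved"      # accountable human sign-off recorded

@dataclass
class ValidationDoc:
    title: str
    body: str
    status: Status = Status.AI_DRAFT
    audit_trail: list[str] = field(default_factory=list)

    def submit_for_review(self, reviewer: str) -> None:
        self.audit_trail.append(f"submitted to {reviewer}")
        self.status = Status.IN_REVIEW

    def approve(self, reviewer: str, edited_body: str) -> None:
        if self.status is not Status.IN_REVIEW:
            raise ValueError("AI draft cannot be approved without review")
        self.body = edited_body  # the human-edited text becomes the record
        self.audit_trail.append(f"approved by {reviewer}")
        self.status = Status.APPROVED

doc = ValidationDoc("IQ protocol draft", body="<LLM-generated text>")
doc.submit_for_review("quality.reviewer")
doc.approve("quality.reviewer", edited_body="<human-corrected text>")
print(doc.status, doc.audit_trail)
```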
Michelle Vuolo: Yeah. There's a lot to consider here, right? We're talking about all the other controls that we need to put in place when we're thinking about utilizing this in the regulated space. What do you think are the success factors, so that we don't get caught up in, I think one of the things people are talking about is the POC graveyards, the things that were, try it here, try it there, and then never ended up scaling to full value? Martin Hyman: This is also a nice thing that we have here in Europe, which is true. Here, I would say, we need to build robust foundations first, whether it's data, whether it's our governance processes, whether it's the AI literacy that we need to build to acquaint people with what we're actually talking about, or having a good strategy in place to actually select meaningful use cases that bring the value we're hoping for, but that also match the maturity we have in these organizations. Building these foundations, I think, is very important, to have fertile ground to actually make it work. However, these foundations in themselves cannot live just in an ivory tower. They need to be grounded, they need to be guided by tangible use cases, so that they're practical, so that the validation framework, say, that gets built is actually helpful for the people then working in this process. And the challenge really is how to balance the investment into these foundations with bringing in some of the early-adopter use cases, knowing that we put such teams under some kind of risk as to whether it all works out nicely. So it requires this pioneering spirit, and the management support to do this in this pioneering environment. Michelle Vuolo: Pioneering spirits. I like it. Brian Ennis: I do. I feel like we're headed out into the West. This is great. Coming back to the data topic: that's probably the one single thing that you might point to at a macro level to say why things are not working right: they don't have the right foundation in the data, or something like that. But Martin's right that it's actually quite a bit more than that. Part of it is the governance approach, the critical thinking that goes into designing it, and part of it is the maturity of both what we're trying to design and the tools themselves. For all the tools that we take for granted, ranging from an internet browser to a quality management system to a manufacturing execution system, there are 30 years of attempts, of organizations iterating and evolving and superseding. We're seeing that on a much more rapid timescale, but it's still the same thing, right? This is a pioneering spirit. We're trying things out, and you find areas that are blockers, or you find technology that isn't mature enough yet. It requires having that engaged team with the right level of skills, the right incentives, and the right support to be able to push through, to fail fast, and to try something new, right? It's interesting to me also because, at least across the arc of life sciences, and I've been here for 25 years, we built a lot of software for a long period of time. Then we moved to the cloud, or even before that, software vendors started getting more mature, and so we stopped building software and just started buying software instead. Collectively we all said: we're life sciences companies, we do not build software. Maybe we build integrations, but we're not building our own tools anymore. And now we're back to building our own tools again. We're building novel applications to do different things with the tool sets that we have, while in parallel, software vendors are doing the same thing and trying to nail that. So I think it's understandable that a lot of things fail right now, because it's just early. But that doesn't mean we don't try, right? If you don't jump into it, you're never gonna get there, or you're gonna get there way behind your competition. I think it's just a normal part of a technological adoption curve. Michelle Vuolo: So Brian, it's funny, that's the second time you've used the cloud as an example, and it's ironic or funny, I'm not really sure which, but I lived through that transition as well. I remember going to a conference and talking about cloud, and people just couldn't understand what it was; it was very nebulous. But there was a transition at a point where people started trusting the cloud, though not for critical operations. And now people are starting to use it for critical operations, in MES-type situations and things like that. So I think it's very similar, right? It's about building trust in these technologies. And this is what I would consider a relatively conservative industry. What are the best ways to convince the holdouts?
Brian Ennis: Like the cloud and that transition, we're learning how to ask the right questions right now. It's all about: how's my data protected? Where's my data at rest? Where's my data in transit? How can you give me evidence of whether it hallucinated or not? There are lots of questions we're learning how to ask, and providers, the LLM vendors and the Amazons and Oracles of the world, are learning how to answer them. It was the same thing with the cloud. Eventually we realized the cloud was just somebody else managing our computer, and we said, okay, I can figure that out, and they learned how to answer the questions. But there is something different about AI: the cloud was us taking infrastructure and moving it someplace, whereas AI is really a business process transformation. It's less about the technology; it's more about the fact that it's exposing that, in many cases, our business processes weren't laid out effectively in the first place. Now we're applying technology to them and finding out there are manual hops where people used to be, or there were process inefficiencies, and so we have to start thinking about our processes and redesigning them. As for getting to the holdouts, which group are we talking about? Business users want technologies that make their lives easier, always, right? IT wants cool new toys, because we're technologists and we like fun things. So we want AI, of course we want AI. But I'm a quality person, right? I've got a long list of demands that you would need to satisfy before I'm gonna let you put this into a GxP use case. So I think working with those teams right now is about education; it's about understanding what the really important questions are, what they need to do, and getting answers to those questions. That's the way to get those holdouts to convert right now. And by holdouts I mean individuals who are more conservative or more risk-averse within an organization, who are gonna want to see more maturity or more accountability before they move to a new technology like this. Martin, do you agree? Martin Hyman: Yeah, I was really thinking about this: what's the business purpose? You said cool new toys, gadgets; everyone wants to play around with that, but in the end, the bottom line matters. So either it's a quality advantage that we are seeking, or a business advantage, or perhaps both, if we're lucky enough to get this sorted out. And here, even though I know I'm on Augmented Ops, I really like to think about this use case in pharmacovigilance, in patient safety, that many are currently exploring, where everybody feels this pain of: do we really need hundreds of people sitting there screening clinical trial reports, just to get the data out of some unstructured text report into our safety database, structuring it despite all the heterogeneity we have in this data? It was not possible before. Now we have this innovation that is very much fitted to approach this complexity in data and streamline it, with the right guardrails applied of course, with the right process, with the right AI literacy, as we touched upon before. So this is very attractive, and it's also attractive because the burden keeps rising: ever more of these safety reports, these case reports, are arriving. So the natural question arises: how do we deal with them? Just throw more people at it? Or is it maybe time to think about the process, the business process transformation, and how we can use technology wisely to support us in that? And I think in these use cases, where we can actually unite the business objectives and the quality objectives for a more streamlined and much more efficient process, we have this ignition of mutual understanding: yes, let's go for that, and maybe even, let's go for that together with suppliers, as in the end it needs to fit together, from the compliance strategy on one side and what is provided in terms of technology and documentation on the other. And this is what I find very beautiful about the use of AI for these use cases: there's another chance to build connections among various parties trying to solve a very pressing problem together.
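To ground Martin's pharmacovigilance example, here is a toy sketch of the extraction step: turning an unstructured case narrative into structured fields a safety database could hold, and routing anything incomplete to a human. The schema, the hypothetical product list, and the regex rules are illustrative stand-ins; a real system would use an LLM with human review rather than pattern matching.

```python
# Toy sketch: pull structured safety-case fields out of an unstructured
# narrative. A real pharmacovigilance system would use an LLM plus human
# review, not regexes; the schema below is purely illustrative.
import re
from dataclasses import dataclass

@dataclass
class CaseReport:
    patient_age: int | None
    drug: str | None
    adverse_event: str | None
    needs_human_review: bool

KNOWN_DRUGS = {"drugalin", "examplomab"}  # hypothetical product list

def extract(narrative: str) -> CaseReport:
    text = narrative.lower()
    age = re.search(r"(\d{1,3})[- ]year[- ]old", text)
    drug = next((d for d in KNOWN_DRUGS if d in text), None)
    event = re.search(r"experienced ([a-z ]+?)(?:\.|,|$)", text)
    report = CaseReport(
        patient_age=int(age.group(1)) if age else None,
        drug=drug,
        adverse_event=event.group(1).strip() if event else None,
        needs_human_review=False,
    )
    # Route anything incomplete to a human rather than guessing.
    report.needs_human_review = None in (
        report.patient_age, report.drug, report.adverse_event
    )
    return report

print(extract("A 54-year-old patient on Examplomab experienced severe rash."))
```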
Michelle Vuolo: Yeah, that's a great example, right? It seems almost obvious that AI could probably do that better, not just more efficiently. I wanna jump over to regulatory. We're at the crossroads of regulatory but haven't explicitly talked about it. For anyone out there that's tracking Annex 22, the draft EU GMP annex on AI: the first version came out, and it was very conservative, in my opinion. But I do have to say, I was at the ISPE Pharma 4.0 conference last week, and it was really interesting. One of the authors of that first draft was there and was, I don't know, maybe defending what happened when that draft went out. Lots of comments, lots of comments; I think he said on the order of 5,000 comments came back, and it's quite a short document, for those that don't know. But they are considering opening up what they were talking about, and also maybe considering the fact that there are downstream controls, which we keep talking about. It was very refreshing to hear that gentleman speak and say, that was the first draft, and, oh, by the way, the first draft was written six months ago, and a lot has happened in the last six months. I'm just wondering where you think this is going and how it may or may not impact regulated industry. Martin Hyman: So what I found very interesting about this annex is that it is a reflection of where, A, the regulator is in terms of understanding, in terms of maturity, on this topic, but B, also of where the regulator sees industry being mature, or maybe not so mature yet. And with all of these discussions: can we do dynamic systems in a critical use case? What does critical even mean? We do not absolutely know for sure yet. Can we do generative AI, can we do large language models, or can't we? I think a very constructive discussion evolved around that, and there was a rise in understanding: yes, we know that these things can hallucinate, but yes, we are also creative as to how we can handle this in a proper manner. Sometimes a concept like guardrails, or the RAG that Brian already mentioned, can help us quite a bit, as we can then show the effectiveness of our approach, going back to the roots of validation: providing evidence for the decisions and the design choices that we made. And so, slowly, we're growing our understanding of how we should do this the right way. Not just going for a large language model, not just going for generative AI, but being mindful about the controls that are absolutely necessary to, in the end, protect patient safety. Amongst these discussions of Annex 22, I think it was a shocking moment when it first came out, but the way it was picked up has opened a chance for better understanding, for constructive discussions. I don't know whether it was meant to be a little bit provocative so that we could come back as an industry, but this is definitely what I see happening, and I think it's a good tendency that we now see a convergence of thinking on either side.
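A minimal sketch of the guardrail idea Martin refers to: a post-generation check that blocks an answer unless it is supported by the retrieved sources, so the control's effectiveness can be measured and cited as validation evidence. The word-overlap heuristic is an assumption standing in for real grounding checks such as entailment models or citation verification.

```python
# Minimal guardrail sketch: refuse an LLM answer unless it is grounded in
# the retrieved source text. The word-overlap heuristic is a placeholder;
# real systems use entailment models or citation checks.
def grounded(answer: str, sources: list[str], threshold: float = 0.6) -> bool:
    answer_words = {w.lower().strip(".,") for w in answer.split()}
    source_words = {w.lower().strip(".,") for w in " ".join(sources).split()}
    overlap = len(answer_words & source_words) / max(len(answer_words), 1)
    return overlap >= threshold

def guarded_answer(answer: str, sources: list[str]) -> str:
    if grounded(answer, sources):
        return answer
    # Failing closed is the control: escalate instead of guessing.
    return "Unable to verify against approved sources; escalating to a human."

srcs = ["Deviations must be logged in the QMS within 24 hours."]
print(guarded_answer("Deviations must be logged within 24 hours.", srcs))
print(guarded_answer("Deviations can wait until the monthly review.", srcs))
```

Measuring how often such a check correctly passes grounded answers and blocks ungrounded ones is one way to produce the "evidence for design choices" Martin describes.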
Brian Ennis: There's a long history of guidance in our industry establishing a position and then telling us to go figure it out, even all the way back to 21 CFR Part 11 when it first emerged. I think this kind of discourse and this kind of collaboration is really important, because certainly the overall global political dynamic around AI is somewhat unpredictable at times, and regulators, I feel, are trying to do their best to provide the right guardrails, to not block innovation, but at the same time to make sure we're asking the right questions. Something I don't necessarily want to say I have firm answers on from the regulator's position, but that I am curious about, is this: we've got a lot of really great frameworks out there already that we're applying. We've got all the great work at ISPE and what's been done with GAMP, FDA's CSA, ISO 13485 in medical device, ISO 42001. There are a lot of really great ways for us to think about how we contextualize and build controls. What we actually don't have enough of yet is solid tools for developing evidence. I think this is still an area where the AI platforms and providers are evolving, and where I see companies potentially under-investing, because you might see a roadblock coming up: okay, what happens if the model does drift and the data needs to be corrected? What happens if there are hallucinations in a process? Where is human oversight possible? Where can I use something that's really adaptive, versus something that has to be more static and configurable? There are a lot of design decisions that quality needs to be part of really early. And I think regulators will help set some of the right boundaries around, this is where we feel you need to be concerned and where you need to have mature processes. I think we can get there probably pretty quickly, but we need the evidence to be able to back up those controls and show that they're actually functioning correctly. I still think we're early, in that there aren't really good default tool sets yet. To bring it back to the cloud example: you can't be a cloud vendor in a GxP context right now without a pretty robust and mature audit trail, great security features, and 21 CFR Part 11 compliance. That's just full stop, right? That's it. That's the table... Michelle Vuolo: Stakes. Brian Ennis: That's the table stakes to enter the market. And then you get the whole quality system around it to tell that story. AI is just so big, and the vendors are serving so many industries, so we've gotta figure out what good looks like, and then they will adapt to that and support us in it over time. But it's gonna take a little bit.
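One hedged illustration of the "what happens if the model drifts" evidence Brian is asking about: a simple population stability index (PSI) check comparing live input data against the distribution the model was validated on. The 0.1 and 0.25 thresholds are common industry rules of thumb, not regulatory values, and the Gaussian data is simulated purely for the example.

```python
# Sketch of a model-drift monitor: population stability index (PSI)
# between the validated baseline data and live production data.
import math
import random

def psi(baseline: list[float], live: list[float], bins: int = 10) -> float:
    lo, hi = min(baseline), max(baseline)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]

    def frac(data: list[float], i: int) -> float:
        inside = sum(edges[i] <= x < edges[i + 1] for x in data)
        return max(inside / len(data), 1e-6)  # avoid log(0)

    return sum(
        (frac(live, i) - frac(baseline, i))
        * math.log(frac(live, i) / frac(baseline, i))
        for i in range(bins)
    )

random.seed(0)
baseline = [random.gauss(50.0, 5.0) for _ in range(1000)]  # validation data
live = [random.gauss(53.0, 6.0) for _ in range(1000)]      # shifted sensor
score = psi(baseline, live)
status = ("stable" if score < 0.1
          else "watch" if score < 0.25
          else "drift: investigate, retrain, revalidate")
print(f"PSI={score:.3f} -> {status}")
```

Logging a metric like this on a schedule is one concrete form the "evidence that controls are functioning" could take.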
Martin Hyman: I'm really reflecting on this evidence. What does evidence really mean? If we have traditional systems, we have a workflow, right? We have some user interactions, we have some automated processing, some data flowing around. All okay; that's deterministic. But now, in this statistical or even probabilistic setting, what evidence really is changes, because it's much more dependent on what the FDA calls the context of use. What does your specific business process look like? What does your specific data look like? What are your typical kinds of deviations? What are your typical case reports in pharmacovigilance, to touch upon these two examples? You need a much better understanding of what is actually going on in your business process to then assess: is this evidence provided by the supplier actually fitting what I'm dealing with? Or am I somewhat special in the way I have set up my business or my processes, such that I need to add further layers of testing to demonstrate that the system also works for those cases where I'm special? This is hard, because we need to get into the nitty-gritty details of what our business process actually looks like, how it is reflected in data, and possibly also what deficiencies we have in the data that is available. That would be the basis for creating this evidence, and thus it takes much more consideration than in traditional settings as to what reliable evidence actually means and what data we can trust. Michelle Vuolo: Yeah. It's funny, I can't believe it's taken us this far into this chat to get to probabilistic and deterministic, especially in the context of regulated industry. With the question I was gonna ask, I don't wanna give you the answer you just gave me, but we have AI doing more human work every day, right? Yet we're quick to distrust it, or restrict it, or second-guess it, every day. Would we treat humans the same way if they made the same mistakes? I don't know; I'm being a little more provocative now, and I think part of the reason is deterministic versus probabilistic. But what do you think? What's the difference? Brian Ennis: First, context of use, right? And coming back around: where we're succeeding right now is narrow, focused use cases, because you can tell the story and contextualize it, demonstrate the context, and answer the questions around the data and the humans in the loop more easily. But your point is really pressing when you talk about agentic AI and its emergence. I think of it this way: take the same concept of how we manage people today. We've been managing people for 150 years. We've got job descriptions around what role they're gonna play. They come in with resumes; they've got the right training and the skills and the experience. We've got SOPs they follow. We've got performance evaluations that we do periodically. We've got performance improvement plans for when they screw up. And we don't expect people to be perfect. If you take that same context and apply it to an AI agent, very quickly it's pretty easy to see how to answer the right questions: do you clearly understand what job it's doing? Does it have the right skills? Does it have the right data to do that job successfully? Do I have a monitoring and performance management framework? And you can actually tell that story in a pretty elegant way that quality leaders can pick up and understand. The key is having the right tools to grab that data and course-correct if it's going in a direction it's not supposed to. But there is an expectation, to your point, that AI right now is like other software: either it works or it doesn't. If you've got the right controls, the right context of use, the right humans in the loop, you can use it as a productivity tool, and we're already doing that right now. I can go into ChatGPT, or whatever I use, and say, make me an SOP for whatever, and it's gonna give me an SOP in about five seconds. Is it right? I go in and edit it, and I change it, and I modify it, and I make it better.
I'm still accountable, but I'm not expecting it to be perfect. So why do we expect it to be perfect in all use cases, right? Then again, there will be some where it does need to be, but that's different. I think we can apply it in a lot of places where it can greatly augment the human experience but not be the actual decider. Michelle Vuolo: Humans are fallible. Humans, I don't know, hallucinate from time to time. Absolutely. How is it different? Is it just that machines are supposed to be predictable, versus the way humans are unpredictable? Martin Hyman: I think it's really a shift in how we approach technology, and a shift in the role. Many haven't had the luxury of five years of studies in data science, so I'm really speaking from my very personal perspective, which sometimes also gets me into trouble with people. No, not you. Not you. Okay. Now, the point. We all have these projects of: our supplier now has this AI module, and we need to go for it because we want to have our first AI GxP use case implemented. We wanna be first, right? This is all over the place. And of course, a natural reaction is resistance, at least for some, as there are new obligations, new responsibilities put upon my shoulders that I'm maybe not so comfortable with. What is this human oversight that I should actually apply? How can I spot the hallucinations that I'm now suddenly supposed to catch, so that the system stays under control? What happens if I miss such a hallucination and we have a horrible implication for, in the end, possibly a patient? These are all natural questions, very close to the heart of what we do here in the life sciences, where I think it's actually healthy to have this pushback at first, so that we think very carefully: okay, if this is the setting we are now working with, and if we say human in the loop is what we want to use to control this, how can we design systems so that there's actually an effective human-AI team, an old term that was brought up by the FDA, like five years ago, in medical devices? How can we support those people best? How can we take them along this journey, and then step by step reduce the skepticism, using it in a constructive way to improve our systems so that they are actually acceptable, and then adopted? So I think this is really the nature of innovation: there need to be some enthusiasts, but there also need to be some skeptics, and we need to bring these together to come to effective and safe systems in the end. Brian Ennis: From the beginning. Because fear kills adoption faster than regulation does, that's for sure. We see that. But we're in a reactive paradigm in current validation methodologies, where we say, prove it's safe before I use it. Whereas if you turn that around, you say, design it safely from the start, and then tell that story, and then you can grow with it and change it and monitor it and work with it. Because I think, at the basic level, the message I'm getting from regulators is not that they're anti-AI; it's that they're anti-black-box AI. Right? Michelle Vuolo: Yeah. Brian Ennis: Because transparency, historically, in all cases, builds confidence in our industry. I think there is a bigger change here that's gonna make us a lot more invested on the quality side, really early, in understanding what we're trying to achieve, what controls we might be able to put in place, and what evidence we can get.
And if we can't get that evidence, maybe we're not ready for that use case yet. That's not because, as an organization, we don't want it or don't see the value; it's because the providers of these tools are not mature enough yet. There are a thousand startups coming up right now trying to help explain AI or create agents in a responsible way, and they're building tool sets such that in a couple of years we'll look back and be like, oh, remember the days when we were adopting AI? It'll just be like the cloud again. Michelle Vuolo: Yeah, exactly. That's what I was thinking. I just wanna get your forward-thinking view real quick. So, each of you, let's wrap with, I don't know, your one-to-five-year horizon prediction for AI in regulated industry. Brian Ennis: Here's my prediction: it's all about MCPs and agents, for probably the next 24 months at least. Why? Because we have a lot of legacy systems, but they're browser-based, right? And agents are gonna have the ability to pull data out of something and hand it off to another agent, which can then put it into another system. MCP is just the transport language for agents to talk to each other. That is a job that people do today, for sure. It's a transactional job; it's not necessarily the highest-value job. I think we're gonna see a lot of those point scenarios emerge where we're using those agents and using MCPs in a much, much bigger way, because it's gonna be easier to put agents in to move something from one place to another. So we're gonna live through this probably five-year period where we've got legacy systems, which are still trying to figure out how they're gonna use AI or be disrupted, and this kind of sea of AI interconnectivity around them, with which we'll automate individual parts of use cases and processes across pharma, and then we'll see where that takes us. We'll see if that becomes our working norm, or if it ends up being disrupted by tools that say, you don't need that agent anymore because I've now built it in-house. So I definitely see that as one of the big areas of focus. People aren't gonna be writing prompts anymore in a year or two. They're gonna talk to agents that are effectively writing prompts for them behind the scenes.
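A schematic sketch of the hand-off pattern Brian predicts. This is not the actual Model Context Protocol wire format; it only illustrates the shape of the job: one agent reads a record out of a legacy system, passes a structured message, and a second agent writes it onward. All system and tool names are hypothetical.

```python
# Schematic sketch of an agent-to-agent hand-off. NOT the real MCP wire
# protocol, just the pattern: agent A extracts from a legacy system,
# agent B loads into a target system, with a structured message between.
import json
from typing import Callable

LEGACY_LIMS = {"batch-42": {"assay": "pH", "result": 6.8}}  # hypothetical
TARGET_QMS: dict[str, dict] = {}                            # hypothetical

def read_tool(record_id: str) -> str:
    return json.dumps(LEGACY_LIMS[record_id])   # agent A: extract

def write_tool(record_id: str, payload: str) -> str:
    TARGET_QMS[record_id] = json.loads(payload)  # agent B: load
    return "ok"

def handoff(record_id: str,
            reader: Callable[[str], str],
            writer: Callable[[str, str], str]) -> None:
    """The transactional 'move it from one place to another' job."""
    message = reader(record_id)       # structured message between agents
    status = writer(record_id, message)
    print(f"{record_id}: {status}, QMS now holds {TARGET_QMS[record_id]}")

handoff("batch-42", read_tool, write_tool)
```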
Martin Hyman: That's a great perspective from a technology point of view. My point of view on it is this: let's take a look five years back. Nobody was talking about GPTs, right? There were some very early adopters playing around with these things, more in the scientific arena, but they weren't really a theme yet. They hadn't really made it into GAMP 5 Second Edition, when AI/ML was first established in the GAMP ecosystem in a lifecycle approach. So the world will be full of surprises in AI in the next five years. For me, it's not so much about how we can now adopt this specific technology; it's more about a transformation journey. How can we establish the capabilities to go with the flow of innovation, at the speed we've been observing over the last couple of years, which may possibly even accelerate? And this, I think, is the major challenge: how do we achieve this cultural change? How do we mobilize the management efforts to transform organizations in a way that they can actually go with this flow of innovation? I think there will be those who build the foundations that we are talking about, data, validation frameworks, governance, strategy, having all of this in place, growing the organization with these foundational elements, and also growing with the use cases that are implemented on top of those foundations. And for those that are still somewhere in the pilot-only, couple-of-use-cases adopter zone, the gap will keep increasing. So therefore, yeah, I'm asking everybody to support this journey that we have as an industry, to actually lift this transformational effort that we are just going into over the next couple of years. Michelle Vuolo: I love it. In my opinion, that sounds like it's all about education, usage and education, right? It's been amazing. Thank you both so much; truly the right people for the job here. I really enjoyed it. I feel like we could have done a two-to-three-hour session here. Brian Ennis: That'd be wonderful. Michelle Vuolo: Thanks so much. Brian Ennis: Yeah. Thank you. Narrator: Thank you for listening to the Augmented Ops podcast from Tulip Interfaces. We hope you found this week's episode informative and inspiring. You can find the show on LinkedIn and YouTube, or at tulip.co/podcast. If you enjoyed this episode, please leave us a rating or review on iTunes or wherever you listen to your podcasts. Until next time.