Patrick Short 0:03 Welcome, everyone, to The Genetics Podcast. I'm really excited to be here today with Heidi Rehm, who's someone I've been wanting to have on the podcast for a very long time. As we were talking about before the show, I did my PhD with Matt Hurles at the Sanger Institute, working on rare disease and developmental disorders in particular, and we always read all of Heidi's work, because she really is one of the top people in the field of genome interpretation and medical genetics. So that's going to be the topic of today. Just as a little bit of background: Heidi is a human geneticist and genomic medicine researcher, the co-director of the Program in Medical and Population Genetics, and an institute member at the Broad Institute. If you're not familiar with the Broad Institute, it's a world-leading genomics institute in the Boston area. She's also the chief genomics officer in the Department of Medicine at Mass General Hospital, working there to integrate genomics into medical practice with standardised approaches. And if that's not enough, she's also a professor of pathology at Harvard Medical School and a faculty member at the Center for Genomic Medicine. So Heidi, thank you so much for taking the time to come on the podcast. Heidi Rehm 1:02 It's a true pleasure to join you today. I look forward to our conversation. Patrick Short 1:06 I'd love to start by having you give the audience a quick background on the breadth of your work, ClinGen, GA4GH, Matchmaker Exchange, and really the thread that ties it all together of integrating genomics into clinical care. Heidi Rehm 1:19 Sure, I'd be happy to. You know, I started out early on doing a PhD in human genetics and was fascinated by the field, but I really appreciated the human impact, the clinical impact. And so after a short postdoc, I had the opportunity to start building a new clinical genetic diagnostic lab.
And I took that on and spent 16 years directing the Mass General Brigham Laboratory for Molecular Medicine. But throughout that time period, there was increasing focus from the NIH on funding genomic medicine programmes and exploring their use in medical practice, as well as my own recognition of huge challenges in genetic testing: the need for scalable approaches, the need to share data, and the need to build resources to support the work that we were all doing in isolation and redundantly, without the sufficient evidence we needed. So over time I got more involved in NIH-funded programmes, both in demonstrating the utility of genomic medicine and in building genomic resources. That led me to be part of starting ClinGen, and now gnomAD, as key resources, as well as recognising that we didn't know all of the genes involved in the rare diseases we were trying to diagnose in genetic testing, and so building a rare disease gene discovery programme with Daniel MacArthur, seven years ago now. And then recognising that in order for us to share data, we needed better standards, and getting involved in the Global Alliance for Genomics and Health to build those standards so that we could share data across the globe. So all of this really has been a journey towards the end goal of improving our ability to accurately and cost-effectively diagnose patients and allow them to get the care and treatment that they need. Patrick Short 3:08 What's been the biggest surprise to you over the last 10 years? If you were transported back 10 years, what's really different now that you would have expected to turn out differently?
Heidi Rehm 3:19 Yeah, you know, early on we were trying to figure out a way for clinical laboratories, which included a lot of commercial for-profit laboratories, to engage in data sharing in a way that would really ease the pain we were all feeling as sequencing became more robust and we were generating a lot of data. At least a small number of us started discussing the ability to really collectively contribute to a knowledge base. And at the time, I thought it was a pipe dream. I was willing to dedicate time and energy to it because I really felt it was the only path forward, but I must admit I was sceptical that we'd be able to convince everybody to join in the fun, as I call it, though it's not always fun for everyone. But now, as I look back, we've come so far, and every day I'm amazed at how well the community does, in fact, work together as a collective community. I was editing an article for a journalist the other day who was characterising laboratories and their competitors, and I said, that's the wrong word to use. We're not competitors. We're colleagues, we're collaborators; we really have to do this as a collective community, because we won't do it right if we don't. So that is, I think, both the most satisfying and, unfortunately, the most surprising thing: that we'd actually get as far as we have. Patrick Short 4:49 Why do you think that ended up differently? Because I remember there was a potential future world where some groups, and I won't name names, but groups like Myriad Genetics, were going to build a really big database, and it was going to be proprietary. But for some reason that was cracked, and we got resources like ClinVar.
And actually the collective data-sharing force was stronger than any one database. Why do you think the force to share was greater than the force to build proprietary databases, rather than the reverse? Heidi Rehm 5:19 Right. I think many of us realised over time that no single entity has access to the primary evidence needed to generate the most powerful database, and they really couldn't do it without the collective work of the community. There were efforts to do it that way. I remember one laboratory, and it wasn't actually Myriad, it was another commercial laboratory, that somehow felt the database needed to be licensable and more regulated in order to be used in a clinical context. They actually ended up paying money to a group that had a nascent database, thinking that was the route they were going to go, and after a couple of years they abandoned it, and it became clear that that was not the solution. There were approaches early on, before we had ClinVar, where companies approached me with elaborate models for how this whole ecosystem could work, where every contributor would get paid small amounts for every piece of data based on who used it, and so on. And I said, you know, it sounds great, but honestly that is just way too complex to actually work. You set up a culture where everyone expects to get paid for something, and it sets up a barrier. We just can't operate that way; we really have to define a pre-competitive data-sharing space, and then allow the laboratories to compete based on services and the added value: turnaround time,
the informativeness of the reports, whether they're useful, the services they provide, and many other things. There's plenty of opportunity to develop commercial models to satisfy the bottom line of these businesses, but we could carve out this pre-competitive data-sharing space, and that was really the only way this was going to work. Ultimately I think that is true, and that's what's happened. But early on, that wasn't everybody's view. Patrick Short 7:17 Right. And maybe the realisation was also, at some point, that a rising tide lifts all boats. If we can't actually generate the evidence to diagnose people effectively at scale, then none of this matters anyway: if we've got a big proprietary database but a 5% diagnostic rate, no one's going to pay for it. But if we share, and we can diagnose at 20 to 30%, then actually we've got a market rather than nothing. I'd love to pick up on that thread. In the last decade, I think it's fair to say that in almost every major rare disease category, we've come a really long way, from low diagnostic rates to significant double digits. But there's still a ways to go. What do you see as the bottleneck? In the world I spent many years in, developmental disorders, a large cohort would be lucky to get into the 40 to 50% range of diagnostic yield. How do we get to 70, 80, 90, 100%? Heidi Rehm 8:13 Yeah, it's a great question. It's a question I ask both myself and my colleagues every day. In fact, I was just running a panel discussion at the Broad yesterday, and I asked this exact question to my colleague Anne O'Donnell-Luria on the panel. And the question is, how much of this missing diagnostic yield is due to new genes we have not yet discovered that are implicated in disease?
There's fairly good evidence that we are probably only halfway there in discovering monogenic causes of rare disease. But I think there's another component to this, which is: even for the genes that we already know are implicated in rare disease, what are we missing in terms of pathogenic variation in those genes or around those genes? There's probably a huge element of our lack of understanding of the regulatory contribution to gene expression, and of other controls over those genes and their splicing, the regulatory side of it, that we don't have the sophisticated tools to understand. And I think the way we're going to have to tackle this is a step beyond where we are now. Right now, to find these very rare genetic contributions to disease and the genes involved, a lot of us are using Matchmaker Exchange, where one group puts in a gene candidate, then another group puts in that same gene candidate, and then you explore how good the phenotype match is. If it's a match, you start building your evidence and your cohort of patients. But it's all on the premise that you've identified a strong candidate to start with. In many cases we look across the genome of a patient with a suspected genetic disorder and we can't find a candidate. I think what we're going to have to do is do a better job of sharing the primary data in large collections, and then, for patients with similar phenotypes, use more scalable genomic approaches to look for similarities that we can't see in a manual way like we do today, where we focus in on coding regions and de novo variants that are highly suspicious. You don't have that same ability to zero in manually when you're looking at the non-coding space.
So I think by bringing that data together, we can use machine learning approaches, burden analyses, and other tools that go beyond our cognitive, manual focus, and bring out these candidates in new ways, both for new genes and for new variation in the genes we know. Patrick Short 10:47 So do you see that as an extension of the kind of model that gnomAD, and ExAC before it, pioneered, where it's primary data called under the same framework? Tell me more about that. What do you see as the 10-year horizon for that kind of vision? Heidi Rehm 11:04 Yeah, that's exactly right. You know, we still aren't calling genetic data with the same analytical pipelines, so that when we do our comparison between cases and controls, we can ensure we've controlled for artefacts that come from the raw data and the algorithms we use to call variation. One of the advantages of the approach we take when generating the gnomAD database is that we do joint calling and QC in the same way across the entire dataset. If your cases and your controls are in that dataset, it allows you to remove some of the artefactual elements that can confound your analyses and your discoveries. In gnomAD, we largely remove the cohorts that were recruited for the purpose of having rare disease or severe disease, but in some cases we will include them in the joint call set and then remove them later,
so that those datasets can actually be used for discovery, comparing the affected individuals to the gnomAD dataset to control for those artefacts. That said, we still have challenges, because we are individually assessing each individual to decide whether they get included in gnomAD or not. We also have lots of biobanks that recruit from the general population, from healthcare systems, and there are individuals affected with rare diseases getting into those databases. And we aren't actively collecting the cohorts that have rare disease, because for the most part they don't go into gnomAD. So we really need to pay more attention to these datasets being collected around the world, include them, and develop common standards for how you process that data. Efforts are underway right now, for example through the All of Us Research Program and the UK Biobank, to agree on common variant calling algorithms and other things. As we generate these large, well-phenotyped cohorts, we can start to do these experiments better, controlling for the genetic data but also having the phenotype there to drive it. But the problem is that rare diseases are so rare that getting enough of them into these biobanks is hard. So you also need deliberate recruitment strategies, specifically finding these rare disease patients, and then making sure they get into these call sets and can be incorporated, so that when you're doing your case-control analysis you have enough cases of these very rare diseases. So there are challenges all around, but I think we're gradually heading in that direction.
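The case-control idea described here, comparing rare-variant carrier counts in affected individuals against a jointly called reference set, is often run as a gene burden test. Here is a minimal sketch using a one-sided Fisher's exact test; the counts are invented for illustration, not real gnomAD numbers, and a real pipeline would first apply the joint-calling and QC filters discussed above.

```python
from math import comb

def fisher_exact_greater(a, b, c, d):
    """One-sided Fisher's exact test for enrichment on the 2x2 table
    [[a, b], [c, d]]: P(X >= a) under the hypergeometric null."""
    row1, col1, n = a + b, a + c, a + b + c + d
    p = 0.0
    for x in range(a, min(row1, col1) + 1):
        p += comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    return p

def gene_burden(carriers_cases, n_cases, carriers_refs, n_refs):
    """Test whether rare-variant carriers in a gene are enriched in
    cases relative to a jointly called reference cohort."""
    return fisher_exact_greater(
        carriers_cases, n_cases - carriers_cases,
        carriers_refs, n_refs - carriers_refs,
    )

# Illustrative numbers only: 6 of 100 cases carry a rare loss-of-function
# variant in the gene, vs 40 of 10,000 reference individuals.
p = gene_burden(6, 100, 40, 10_000)
print(f"one-sided p = {p:.2e}")
```

In practice the same joint call set is scanned gene by gene with multiple-testing correction, which is exactly why uniform variant calling across cases and controls matters: a calling artefact that inflates carrier counts in only one cohort produces a spurious burden signal.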
Patrick Short 13:46 And I think you made a really good point there. I'm thinking of this like an iceberg, where the part above the water is what's actually shared at a fundamental level in a resource like gnomAD, and the iceberg below the surface is biobanks, direct-to-consumer genetics, all sorts of programmes that are just not in any way part of these shared resources. Do you have a sense of the scale of what's above versus below the water? Because I was going to ask you about increasing the scale of people we're sequencing, but actually it sounds like what you're saying is we've sequenced enough people, and maybe the lowest-hanging fruit is just getting all of that together, so it's not in 100 different places. Heidi Rehm 14:28 Yeah. And I'm seeing at least more of the commercial clinical laboratories thinking strategically about how their data can be used, and in fact trying to consent for data sharing in more deliberate ways that go beyond what is normally put on a requisition form and collected from the patient, really enabling that data to be used more broadly than the one analysis for the primary indication for testing from just that one lab. For instance, Invitae is participating in the GREGoR Consortium through a collaborative grant with Children's National, so they're actively consenting individuals to enter GREGoR, which is an NIH-funded rare disease study. That's one avenue. There's also been engagement with GeneDx around ways to get primary data from several healthcare providers that are ordering tests clinically from GeneDx, where the healthcare providers are actually doing the consenting, including Columbia and Geisinger, and GeneDx is willing to share that data when it's been consented by the healthcare provider.
They're developing ways to put that data on a cloud platform and make it easily transferable, so they're really building infrastructure to make it easier to transfer this data to the owners, though I wouldn't call them owners, but rather those who have permission to use the data based on consent. So you're starting to see models to allow this data to be used, whether facilitated by the clinical lab or by the healthcare provider, and other things that will have to be in place for us to make use of the enormous amount of data being generated in the clinical arena. Patrick Short 16:14 Yeah, that makes sense, absolutely. I was curious whether you're pushing for whole genomes, because most of the groups you mentioned, GeneDx and Invitae, I suspect are still doing exomes, because at the scale they're going there's still a bit of a price difference. But as the whole genome sequencing price continues to drop and evidence builds for non-coding causes, are you pushing the institutes you're involved with to make the switch? Or do you feel that we've still got a couple of good years of exome sequencing? Heidi Rehm 16:44 Yeah, it's a great question. You know, on the clinical side, where more of the cost is wrapped up in the interpretive side and the regulatory side, you end up in a situation where the raw data generation piece of the puzzle is a small enough fraction, and the stakes of making a diagnosis in an individual in a clinical context are high enough, that I am definitely personally at the point of moving to genomes, which are better for a lot of different variation types. But everybody's got to develop their pipeline and figure out how to store this data; it's more costly to store genomes compared to exomes.
So there are pieces of the puzzle that people are gradually moving on. I will say, to date, in our rare disease research programme, where more of the cost is due to the raw data generation, we are still using exomes as our primary step, and then for negatives moving on to genomes. That is truly just a cost equation for a larger cohort study in rare disease. But with, for instance, Illumina's announcement yesterday of the NovaSeq X, bringing a genome to $200, the research decision-making will probably evolve quickly as well, and we will probably move everything over to genomes even in our research environment. So I think we're all headed that way. Many clinical labs are certainly already there, and I think the rest of the community will be coming on board. Patrick Short 18:17 Yeah, that makes sense. In my experience, a lot of the decision-making comes down to exactly what you said: what fraction of the budget for a particular programme is the sequencing? Is it a small piece of the pie? And what are the other uses? If someone's really narrowly focused on diagnosing as many patients as possible, rather than on maximising the diagnostic rate, then maybe the exome makes sense. I wanted to get your thoughts on newborn sequencing. I know you've started to get involved with some of the major programmes; there was a big inaugural meeting in Boston a week or so ago. And it presents a different set of challenges from diagnosing patients who present with significant symptoms that could point to a rare disease. How does your mental model for newborn screening differ from the one you've developed over the last decades looking at rare disease cases, which are a little bit different? Heidi Rehm 19:06 Yeah, it's a great question. I think there are several things to think about in this realm.
So, we're moving towards an era where we don't think about genomic sequencing as a one-time test to answer one question, but as a resource to use over an individual's lifetime. You may be evaluating some symptoms in an individual at one point in time, but you also want their pharmacogenomic information, so that the moment a physician needs to dose or order a drug, that data is there to decide the right dose, or whether an adverse reaction might be predicted; that data needs to be generated in advance. If you're thinking about having a family and you want your carrier status for risk of recessive disorders, that's another use that many individuals will encounter at some point in their lifetime. So at some point you think about all the different things you want that genome for, and you say, well, then we should just generate it up front and use it throughout the individual's lifetime. That's one of the arguments, I think, for newborn sequencing. I think the practical challenges are a couple-fold. One: if you actually want to use that genome in the context of newborn screening, where the idea is to detect a disorder that needs immediate treatment and management, as is done with newborn screening today, you've got an incredibly small window to generate that data, analyse it, and act on it quickly. There are definitely examples from Stephen Kingsmore's lab and Euan Ashley's lab that have clearly shown how rapidly you can do this; it can be done. But to do that, you have to put all your resources and everybody's attention on a very small number of cases. It's very difficult to have that kind of rapid turnaround in a large-scale setting that delivers sufficient interpretation for all findings at speed.
So you could argue that maybe instead of newborn sequencing, you sequence in the prenatal period, where you have a nine-month window. It's a little more stressful, because when you find uncertainty, what decision-making is done with that? So it raises some more ethical concerns. But it actually is a longer window in which to have the information and be prepared for how you're going to address that patient before they are born. And it is really stress-reducing for a family to be warned in advance that they are going to have a child with a rare disease; they can have the physicians on service and ready to go with whatever treatment. That's a much better way to deal with a newborn crisis than the moment that baby comes out with something going on, and you don't know what it is, and you're waiting for the results. So I think there are some logistical challenges as we think about what the best time point is to generate this data, and it could be argued that it's not at the newborn stage, it's actually before it. But there's also the question of: are you really going to use that same genome you generate today in 25-plus years, when that individual is ready to consider a family? Is carrier status really useful? Well, it may not be for that individual, but it may be for that family, who are going to have a second child, a sibling to that newborn, and they need the carrier status. But then why not sequence the parents' genomes and get the information directly from them? So you start to have a whole series of questions and struggles as we think about what to do at this point in time: how do we manage the parents' genomes, the baby, the timeliness? Will this data be relevant, or will we have the $10 genome in five years, and it's better, and we just generate it again? There's a whole set of challenges. Patrick Short 23:00 Yes.
Yeah, that's the first time I think I've heard the point you made, that sequencing earlier gives you time to make some of those decisions, and that definitely resonates with me. I was also speaking to someone at this conference yesterday, Parker Moss from Genomics England, who made the point about storage: 99 and a half percent of newborns aren't going to have anything actionable, so you may just want to get rid of the data, because the storage cost of hanging on to it for 50 years, or however long until it becomes relevant again, is going to exceed the cost of sequencing again. It's a very big paradigm shift, I think, because we've all been living in a world where sequencing is so expensive, relatively speaking, that you wouldn't dream of getting rid of the data, because storage costs are trivial by comparison. But it seems like we're heading towards that world pretty soon. The other interesting thing that struck me about newborn screening is how we can effectively consent families. There was an interesting example that David Bick, who's the principal clinician on the Genomics England newborn screening programme, shared: if you screen a child and they're homozygous for a BRCA2 variant, they need a bone marrow transplant, as they likely have Fanconi anaemia, but it also means that both parents are very likely carriers of a breast cancer risk variant. So all of a sudden you've got a consent form that says most families find nothing, but there are a number of things we could find, so you need to be prepared for it. And all of a sudden you go from "most families are going to be fine" to a family that's got not zero but three potential issues to contend with. How do you think about informed consent in a really complex set of scenarios like that?
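The inheritance logic behind that BRCA2/Fanconi anaemia example can be sketched as a tiny Mendelian inference: a child homozygous for a recessive variant implies each parent carries at least one copy. This is an illustrative toy under simple autosomal assumptions (no de novo events, no uniparental disomy), not a clinical tool.

```python
def parental_implications(child_genotype):
    """Given a child's genotype at an autosomal locus ('ref/ref',
    'ref/alt', or 'alt/alt'), return the minimum number of alt
    alleles each parent must carry, barring de novo events."""
    alt_count = child_genotype.count("alt")
    if alt_count == 2:
        # Biallelic: one alt allele was inherited from each parent,
        # so both parents are at least heterozygous carriers.
        return {"mother_min_alt": 1, "father_min_alt": 1}
    if alt_count == 1:
        # One parent contributed the alt allele; we can't tell which.
        return {"mother_min_alt": 0, "father_min_alt": 0,
                "one_parent_carrier": True}
    return {"mother_min_alt": 0, "father_min_alt": 0}

# A child homozygous for a pathogenic BRCA2 variant implies both
# parents are heterozygous BRCA2 carriers, each with their own
# elevated cancer risk: one screening result, three family findings.
print(parental_implications("alt/alt"))
```

This is why the consent question compounds: a single newborn result can simultaneously generate actionable findings for people who were never themselves tested.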
Heidi Rehm 24:42 Yeah. I mean, in our world of genetics, it is true that most of the information we find is going to be relevant for family members, so you just can't get around the fact that you need to consent for, and think about, the downstream implications. And it's interesting, because there have been arguments, for certain adult-onset disorders like breast cancer, that you should allow the child to wait until they're of age to make an informed decision about whether they want to know their risk, things like that. The challenge is, those policies and principles were developed in an era when it was based on knowing your family history and your risk, being already aware that you were at 50% risk for, let's say, Huntington's disease. In that context, you know you're at risk, but maybe you want to wait to determine whether you actually have the variant or not. In a lot of the cases we are now encountering, these individuals aren't actually aware of their risk, because the family history isn't as clear, or is unappreciated. So you're not in a situation where this child can just wait until their 18th birthday and then make a decision; this may be the only opportunity to let them know that they or their parent is at risk. So it's a very different paradigm we live in now in thinking about autonomy to make decisions, where this may be your only shot at getting really medically important information to some individual in that family, be it the proband or their family members. So we have to really think about this differently and be cognizant of what information should be shared. At the same time, we struggle: physicians are sometimes not even ordering genetic testing because of the complexity of the consent process and the question of whether they want secondary findings.
So there's also an element of our community, myself included, that says: sometimes, as in most of medical practice, we let the physician make the decision in the best interest of the patient as to what information to give back and what is clinically meaningful. And when does it go too far, when we ask too many questions? Would you like this? Would you like that? Well, I don't know, I don't understand that. Oh, let me give you a 10-hour lecture on what a secondary finding is and what all these diseases are, so you can make an informed decision. That is really, really hard to educate even the physicians about, let alone the patients. So when do we say: look, we've been trained to do this well. We understand these diseases, we understand their variants, we understand our patients and what's important to them, generally. Let me make this decision; let me help them make this decision better. And that's a tough call when you remove autonomy and try to make decisions for other people. But there's a practical side of this that we have to think about as well. Patrick Short 27:43 Yes, and I think the human expertise problem is going to hit us very quickly. I mean, it already has. I was in the room listening to a discussion for one of the big UK programmes, Our Future Health, where one of the potential routes was to return APOE ε4 status, for example, to participants. And really it was an exploration of: should we do this? Could we do this? What are the barriers? One of the things that very quickly came up is genetic counselling: if you want to sequence 5 million people and do pre-test and post-test counselling, if you just run the numbers, there aren't enough genetic counsellors in the world. But there are some interesting technological approaches, and there's a group out in Canada that recently tested a kind of group, webinar-based genetic counselling.
So you get everybody in the room for an informational session, and then you've got the opportunity for more one-to-one sessions. I am optimistic that we can find ways to scale it. But there are also going to have to be some compromises made, right? I don't think we have enough genetic counsellors in the world to do pre- and post-test counselling for everyone who goes through a newborn screening programme at the scale we want. So the question becomes: is that a must-have, meaning we rate-limit the screening? Or is there flexibility? I don't know if you have any thoughts on that, but it's one I don't have a good answer to. Heidi Rehm 29:07 Yeah, it's a great question in terms of the genetic counselling needs of our community. I do think we're going to have to continue to evolve some of these more scalable approaches, whether they be group counselling sessions or, you know, there's now the use of chatbots, where an individual can go through a series of explanatory information. Some of them are going to say, "Yeah, I just want it, don't make me sit here for an hour," and others really want to dive in, have their questions answered, go through branching logic, and may end up with, "Okay, now I definitely need to talk to someone, because not all of my questions have been answered." But if we can get a fairly large fraction of individuals who really know what they want and are willing to just dive in, that reduces the resources needed, and we can focus them on the individuals who are borderline and aren't sure they want to go down this path, and apply the resources appropriately.
So I think these more scalable approaches let us target the discussions where they're needed, and not waste the time of either the providers or the patients and individuals wanting testing, if they don't need it. Patrick Short 30:22 How is polygenic risk scoring moving into your world? I think it's probably fair to say that five years ago, in the rare disease space, we were really focused on the monogenic; occasionally there was a multi-gene quirk where you needed two hits, but for the most part it was monogenic. But there's some really interesting work that's been done recently looking at how polygenic scores can influence penetrance and complicate the picture. Where's that creeping into your world? I imagine it's complicating what you do with ClinGen and other resources, where it's complicated enough to classify variants as pathogenic or benign without having this background polygenic score to worry about. Heidi Rehm 31:05 Yeah, it's a great question. And of course there are lots of different opinions about the utility of polygenic risk scores today and their use in medicine. I do think they are improving and increasing in their utility and the potential to use them, but more so in stratifying the patient population to direct preventive care; they're certainly not that useful, I don't think, in a diagnostic realm. Once you have the complex disorder, whether it's obesity, diabetes, or coronary artery disease, you're going to focus on treating that individual. There are occasions when you might take slightly different approaches to manage their disorder based on understanding that the basis for it is more likely genetic as opposed to diet and lifestyle, so there may be some utility even after the individual develops symptoms, but it's largely about stratifying for risk.
And, as you pointed out, some elements of predicting penetrance for monogenic disorders, because we certainly know that penetrance varies for many of these genes in adult-onset conditions, like hereditary breast cancer and other disorders. If we can give someone a better sense of how high their risk is, it might cause them to make decisions: do I wait to have my breasts removed until after I've breastfed? Things like that are real, important decisions for a woman to make, in that context and others. So you can think about scenarios where getting a better read on your true risk can be helpful. But I do think we need to evolve to the point where we really have a clear decision matrix, based on the risk being reported, to guide a physician in what they should do and offer to the patient. Because if you just say, "Here's your score, it's high or it's not high," and a physician doesn't know what to do with that information, you haven't really helped the system at all. Right now we've actually just started reporting clinical results of multiple polygenic risk scores for the NIH-funded eMERGE consortium, with thousands of individuals being given polygenic risk scores returned in multiple different healthcare systems across the US. We really hope to learn some useful things about the practical implementation of PRS in the healthcare setting that will hopefully guide us in how this information might or might not be useful. But in parallel, we still need to do the studies to look at outcomes after you've stratified and taken different strategies. Some of that data is emerging from some studies, but others are still awaiting those outcome studies to be implemented. Patrick Short 33:53 Yeah, that's great. I've been following the eMERGE consortium; I think it will be really interesting to see what you learn there.
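[Editor's note: for listeners unfamiliar with how a polygenic risk score is computed, it is conventionally a weighted sum of risk-allele dosages across many variants. The sketch below is purely illustrative; the variant IDs, effect sizes, and genotypes are hypothetical, not drawn from any real GWAS or from the eMERGE scores discussed here.]

```python
# Illustrative polygenic risk score (PRS): a weighted sum of an
# individual's risk-allele dosages, weighted by per-variant effect
# sizes. All values below are hypothetical.

# Effect sizes (e.g. log odds ratios) from a hypothetical GWAS
betas = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.30}

# One individual's genotypes, coded as risk-allele dosage (0, 1, or 2)
dosages = {"rs0001": 2, "rs0002": 1, "rs0003": 0}

def polygenic_score(betas, dosages):
    """Sum of effect size x allele dosage over variants present in both."""
    return sum(betas[v] * dosages[v] for v in betas if v in dosages)

score = polygenic_score(betas, dosages)
print(round(score, 2))  # 2*0.12 + 1*(-0.05) + 0*0.30 = 0.19
```

In practice the raw score is then placed on a reference distribution (e.g. reported as a percentile against an ancestry-matched population), which is the stratification step discussed above.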
I have one more question, since we're running up against time. It's a bit of a blue-skies question I wanted to get your thoughts on, and you can only choose one. You can 10x the size of the major resources we have today, like UK Biobank and gnomAD, but the data we're collecting is fundamentally the same. Or you can 10x the variety of datasets that we have within those same cohorts: adding RNA sequencing, proteomics, creating stem cell banks, you get to choose what you're doing there. What do you choose in that world? Do we need greater scale of what we already have, or greater variety, to better understand the biology? Heidi Rehm 34:43 It's a great question. Of course, I want to say we need both, but given that you've posed the question as a choice, I think I'm going to say more variety of data. I really feel we are missing fundamental components of how we look at information, typically outside of the strict coding region, and we're going to need other omics approaches to be able to understand where to look and how to look in those regions. I think we will continue to discover genes based on low-hanging fruit, because there are de novo variants in a constrained gene in the coding sequence, and that cycle continues today. But I do think we need to invest some time, energy, and resources into more innovative ways to look in the places we just haven't been good at looking today. And we're going to need those other data types, proteomic, transcriptomic, and other modalities, to really shine our flashlight in the right spot. So if I had to pick, I would probably lean towards that. Patrick Short 35:50 Great, I appreciate you playing that game, and I completely agree we want to do both, but I like your perspective there. Well, I wanted to just say thank you.
As I mentioned at the top of the show, I've been looking forward to doing this for a really long time, and it didn't disappoint. This is probably one of my favourite conversations I've ever had, so thank you. Obviously everybody who's listening will continue to follow your work, and I know you and your team and the broader community have made a lot of progress in the last decade, so I'm looking forward to what's to come next. Heidi Rehm 36:20 Well, terrific. I really enjoyed the conversation; they were great questions, and the key things that we need to think about as a community. So thank you for having me on this podcast. Patrick Short 36:29 My pleasure, and thanks everyone for listening. If you enjoyed this episode or any others, we'd really appreciate it if you could share the link with a friend and leave us a review on your favourite podcast player. Thanks so much, and we'll see you next time. Transcribed by https://otter.ai