59 Kilian Seeber

Welcome to Troublesome Terps, the podcast that keeps interpreters up at night, and, as we've also heard from our listeners, the podcast that's great to listen to when you're in the gym, although I wouldn't suggest being in the gym for the entire podcast. With me in our slimmed-down, cut-down virtual studio today, it's great to welcome him back yet again: Alexander Gansmeier.

It's a pleasure to be here. And the whole "slimmed down, cut down" reminds me of what I need to do, because lockdown has not been kind, but the gyms are all closed, so I can't even listen to our podcast in one. So, you know, I've been running five or six Ks and doing physiotherapy. I swear, physios are like medieval torturers, only paid better.

So we have a wonderful guest with us this evening, or this morning, whenever you're listening to this. We have with us Kilian Seeber. Kilian heads up the Interpreting Department at the University of Geneva's FTI, has formal training in conference interpreting and psycholinguistics, and he's probably best known, I would say, as one of the world's experts on cognitive processes in interpreting, including cognitive load and multimodal integration. And if you've crossed paths with Kilian at conferences, and I must say that I have, you probably know him as the guy who likes to ask uncomfortable questions. The first time I met Kilian was at an IPCITI, probably in Manchester, I think, and you could rely on Kilian to ask the questions that made the presenters go, "Oh, I'll have to think about that." So, Kilian, welcome to the studio.

Thanks for having me. I did not remember that. I'm pretty sure it was Manchester where you and I met, though I thought it was Vienna we [00:02:00] met first.

But that was much later, because I remember you being one of the plenary speakers one year, and my colleague Marwa Shamie, admitting that she was trying to understand the cognitive stuff, actually said, "This guy just totally confused me, so let's get rid of this." Those were her exact words. Do you remember that?

Yup. That's happened before.

But the thing is, and we're going to get into this in a moment, cognition was a core part of interpreting studies for a long time in its history. So what is your history with the cognitive side of interpreting? What got you into interpreting and interpreting research, and especially cognition, I guess?

Well, it was there from the beginning, wasn't it? But I'm going to be that guy who draws your attention to the fact that you just gave me a double-barreled question, the answers to which are very different. How did it all start with interpreting? I gave that some thought when I saw that this might be something you could be interested in, and I thought back to how it all started. It started out of nowhere. I had to pick a major and had no idea what I was going to pick. I looked at the course catalog and it said biochemistry, and I was very fond of the chemistry kit and the idea of working in a lab and mixing all sorts of potions and making things explode. So I thought, that's what I'm going to major in. And then I saw something else in the catalog, and it was conference interpreting. I had no clue, and some argue I still don't. And I thought, I'm going to do a double major, only to find out that was not possible logistically, because courses and classes would overlap too much.
And so I went for the obvious choice, which was to study the thing I had no idea [00:04:00] about, right? And who should be one of my trainers if not Ingrid Kurz, the person who pioneered the work on cognition and interpreting with lateralization, hemispheric lateralization: what happens in the left and the right part of the brain. And there was even a young assistant at the time by the name of Franz, and it turned out to be Franz Pöchhacker.

So you were trained by the best, and yet it didn't rub off, it seems. What was it like to be trained by people who have that presence, and almost an aura around them? Were you aware of it at the time?

No, no. I was blissfully ignorant. I was oblivious to any and all of that. They were just fun to be around: very approachable, very normal people.

I was going to say, unlike others...

No, just like everybody else, very approachable. And yeah, they basically planted the seed. But that had nothing to do really with the research, with me getting into research, which started relatively later, not much later, once I came to Geneva. And that was in 1998, as a student. Then I soon started as a project assistant. It was basically just a matter of getting a paycheck, and if you've been to Geneva, which you have, both of you, you know what I'm talking about. As somebody I know once put it: money talks, but here it mainly says goodbye. So I wasn't half bad at this experimental research thing, and I suppose it was a mix [00:06:00] of just wanting to know how stuff works and being a bit of a control freak. The two mixed well, and that was it.

I was going to say: was your research always heading down the cognition track, or were you ever attracted by the context, or wider field research in interpreting?

No, it was always the cognitive stuff. It was always the stuff that goes on in the interpreter's brain, provided you can spot it. And it was with the Ascona workshops that it all began. You might be familiar with the Ascona workshop series, back in the day when conferences still lasted for a full week, and it was a full week dedicated to research in a beautiful resort overlooking the lake at Ascona, in the Italian-speaking part of the country. It was organized by FTI. And I was a total greenhorn; I had only just started my, what's it called, you'd call it an MA by research, basically: a DEA, which they used to have in French-speaking environments, and maybe they still do in France. It's a prep degree before you can do a PhD, and you learn methods and all of that stuff. And I was there to do Xerox copies, to help people fill in their expense claim forms and, you know, serve the coffee: the important stuff. Which, it turned out, was relatively important. It's thanks to serving coffee that I got to know the late Miriam Shlesinger, who would go on to become the co-editor of Interpreting with Franz; Jorma Tommola from Turku, who I think by now is long retired; Nelson Cowan; Charles Hulme from Oxford. And I was in awe of the work of [00:08:00] Jorma, who really pioneered this pupillometry thing, among other things, in interpreting research. And I was fascinated by the idea of measuring things psychophysiologically. So not just asking people, you know, what did you do, how did you do it, but: I'm going to measure something.
And then I went on to do my PhD in Geneva, and my postdoc in York. And there, too, serendipity, I don't know, episode five of it. It really was a confluence of circumstances; the stars aligned. I was working under and with Gerry Altmann, who at the time was editor-in-chief of Cognition, and down the hall from my office was Alan Baddeley's office, the co-author of the working memory model. So lab talks and meetings ended up being really interesting for me.

And this is the thing about research in interpreting: it's still fairly small as a field, and you often go to an interpreting university and meet people without recognizing how great they are at the time. But then you go away, and you go: oh, that's who that is. Now, you've mentioned people, and you've mentioned measuring. The elephant in the room, I guess, is: what is cognition? We can give a general definition, but what is your working definition of cognition in interpreting, if you break it down for laypeople like me?

Well, it is obviously broad, and it's much broader than anything I could possibly do in a lifetime or two. But the part of cognition that I, and that we in general, are interestedted in at the lab really has to do with process research. So, perhaps, the mechanics of the task. This is probably underselling it, but, for lack of a better definition: the [00:10:00] way in which signals are processed in the brain, and this whole idea of you receiving stimuli through your senses, all of them, which then somehow come together in your brain. Now, we're interested in the mind much more than the brain. We're not currently doing much neuroscientific research here, so we really look at mind stuff rather than brain stuff. Having said that, I recently had my assistant come up and ask me if we can purchase additional equipment to do just that. And I'm like, yeah, that's not the kind of stuff you buy with the coins you fish out from the couch because you've got some change lying around. Right?

So that's the kind of cognition work we've been doing, using both the old-fashioned, perhaps, or traditional behaviorist approach, the black-box approach (you have the stimulus, you don't really know what's going on in the brain, and then you have the output), but particularly also measuring psychophysiologically. Do we have other indicators that we can measure? In our case, a lot of ocular responses: where do they look, how do they look, how fast do they look, how often do they look? Which is perhaps counterintuitive for a task that people think is only about listening and talking. But also heart rate, skin conductivity. So, things that you can't really control as a participant, unlike a response you give me. If you ask me: well, how was that interpretation, how difficult was it, how tired are you? I control my response, and so I filter it, without knowing it, without wanting to. Whereas if you measure something, if you measure an indicator, [00:12:00] well, then you get a measurement. Now we just need to make sure, and this is not a trivial task, that you're actually measuring what you think you're measuring. And that's the crux.

And that's the thing: traditionally, in the cognitive approaches to interpreting, everyone was trying to isolate variables. So, you know, we'll play it at this speed, and then that speed, and then that speed.
You've really pioneered an approach that tries to make the cognitive experiments more realistic, by trying to understand how interpreters process different sources of information. What made you move from the isolate-the-variables approach to let's-see-what-this-looks-like when you've got a PowerPoint presentation with numbers on it?

I'm not sure that they're mutually exclusive. We still isolate lots of variables. I mean, experimental research is about variables, and it's about controlling variables. So you need to isolate things. You need to make sure, as I said before, that you're measuring what you think you're measuring, and for you to be able to do that, you need to make sure there are as few confounds as possible, as little noise as possible. Hence the white-room setup, which is often criticized, particularly in this field, which is so applied and pragmatic. I have yet to go to a conference where I'm not told: yeah, that's all fine and well, but this is not a speech I would interpret. And I have a certain amount of patience, but it is not infinite; it is rather finite these days. So I usually just give the example of going to your GP for a checkup because you want to run a marathon. He's not going to hook you up to sensors and have you run a marathon to know whether you can run a marathon. If it's an old-fashioned GP like mine, [00:14:00] he'll have a step stool and he'll say: OK, Kilian, you've got 60 seconds. Now step up and down as fast and as long as you can; I'll cut you off at 60 seconds. And he'll measure my pulse and say: yes, you're good to run, or: no, you'd better stay home, because you're not going to make it through the race. Somehow, the stepping up and down the steps relates to running a marathon. You know, it is a stylized environment in which you try to establish a cause-and-effect relationship. And we have to acknowledge that it's true that we've branched out as well, and we've attempted to use some naturalistic stimuli. But any time you do that, you need to up the controls elsewhere.

And this kind of moves on to the next question. When you're setting up an experiment, what does a typical experiment look like for you, and what do you take into account when you're trying to design it to work?

Do we design it to work? We design it to have fun! No, obviously, we try to design it to work. That's one of the prime objectives: you want to get some of this great data. The worst thing is a null result. Even though it's not the worst thing; null results are underrated. It's a pity that we don't get to publish many null results. Other people would be spared a lot of time and effort if null results were published: you know, I don't have to do that, because it's not going to work. Whereas now everybody seems to be left to make the same mistake. A typical experiment? There is no such thing as a typical experiment, but there are typical steps in any and all experiments, and I can really only talk about the kind of stuff we do. It really starts with the research question, which is often motivated by [00:16:00] a bit of a gut reaction, or a strong reaction, even an emotional reaction, to something you've heard, to an assumption, to a claim, and you're thinking: baloney, let's test that. Or: that's a lot of B.S. You know, just general assumptions. Then you hammer out your RQ, your research question, which is not trivial.
You look for the method that is best suited to get you data that can answer those questions, and then come the materials. And I can't stress it enough: you don't just go and pick a random speech that was given somewhere and say, OK, those are going to be my materials. My students, and perhaps it's me who's a slave driver, but my students on average spend some six months developing stimuli, and a lot of work goes into those, because you need to control for things. You're reducing it to such a micro level that any failure to control things will introduce so many confounds and so much noise that you're not going to get anything out of there. And then come the participants, of course.

I was going to say, this has been the ten-million-dollar question on the experimental side of interpreting research, and at least in field research it hasn't gotten any better: how do you find participants, and how do you twist arms to get them to participate?

Well, we're obviously lucky, compared to some places with, I think, very promising research programs that are situated far away from places where conference interpreters live. It is a bit of a truism; we do a lot of research on conference interpreters, and if you broaden that field and say we work on interpreters in general, then [00:18:00] you might find many more participants. So we're lucky. We have some probably two hundred and thirty or forty AIIC members here in Geneva, and many of them are very generous in giving their time and participating, just contributing to furthering our knowledge about what they do. We don't usually need to bribe, or to push too hard. And that's how we get the participants.

I feel very strongly about one thing, and that is: if I want to make claims about professional conference interpreters, I need to test professional conference interpreters. You probably know that some 90 percent... and on a related note, you do know that 78.6 percent of all statistics are made up on the spot, and this is one of them... so let's just say that 90 percent of all the knowledge we have about the human psyche, about psychology, stems from American BA students, because those are the participants that everybody uses. Right? So is that a representative sample, and/or population? Not sure. If I want to know about interpreting students, I should test interpreting students. If I want to know about professionals, I should test professionals. Provided, that is, I assume there's a difference between the two, which seems to be this trend about professionalization, about expertise, and so on and so forth. So if there's any truth to that, that you're different, and I'm very hesitant to use the word, but I mean it in the very basic [00:20:00] sense of the term, qualitatively: if some quality of you is different as a professional than it was when you were in, you know, Interpreting 101, then that's the population we need.

And that's an interesting one, because I think it was Miriam Shlesinger who pointed out that interpreters tend to be happier when the results of experiments show how wonderful the professionals are, and not so happy when something else comes out. I've met a few researchers who have had issues with research turning up results that the profession didn't like so much.
Has that ever been something that's crossed your desk?

I mean, obviously, everybody likes to be right, right? That's very human; I can relate to that. Not a day goes by when I don't go: yep, I was right, good on me. So that's the beauty of experimental research: it gives you relatively little room for spinning, where other approaches give you much more room for spinning. And, you know, I'm not saying they're worth less. I'm not saying the results are less valid, I'm not saying the results are less useful; I'm just saying the results are to be looked at differently. If you have experimental on one end of the spectrum, then at the other end of the spectrum you might have a qualitative setup where you as the observer are involved in the thing. And you need to make sure that that is well described, because you might well introduce a lot of, if not bias, at least variables into that equation.

And a really good example is a pre-pandemic example. [00:22:00] I don't know if it's even talked about any more at this point, but up until the end of 2019, before this whole online thing became our daily bread, there was a lot of talk about the few studies that had looked at remote conference interpreting. And the interesting thing was that anything that had been measured objectively, with physiological measurements (stress measures, fatigue measures, and so on and so forth), did not line up with the subjective measurements. So interpreters felt that they were performing more poorly when they went remote, but a blind analysis of their output, by judges who didn't know which condition they were judging, showed that there was no significant difference. They felt that they were more stressed, but cortisol levels (which, granted, it turned out might not be the best way to measure it) did not indicate any such additional stress, and so on and so forth. So any time you have that, you need to tread very carefully. It's perhaps too simplistic to say: well, interpreters are just lying to us for reason X, Y, Z. Which, perhaps, had you asked me 15 or 20 years ago, I would have been much more inclined to believe. At this stage, having seen what happens elsewhere in research, and also thanks to the work of some of my PhD students (shout-out to Laura Keller, who relatively recently graduated and got her PhD, and shout-out to Rhona Amos, who only just defended in the fall), we know [00:24:00] that certain things are just very difficult to measure. So we need to do a double take, or a triple or quadruple take, and admit that we're still at the very beginning of certain types of research into an extraordinarily complex object of study, which is interpreting. It's a little more complex than measuring other things, because it's noisy. But it's fun.

And I think that's interesting. When I started my research, interpreting studies had a kind of cognitivist-versus-contextualist divide, and it was kind of like an academic version of West Side Story, Alex: you get the experimentalists in one corner, and the people who liked the kind of research that gave them tickets to go to conferences in the other. And it seems to have become a lot better. I don't know if you've noticed as well; there seems to be more of a realization that both sides need each other.
And I love what you said about some things being extraordinarily difficult to measure, because in some cases I'm not even sure we know what needs to be measured.

Yeah, and it's the old quote, I want to say it's an Einstein quote, you know: not everything that matters can be measured, and not everything that can be measured matters. It's sometimes as simple as that. You do need both sides. As you were saying that, the West Side Story analogy, I was thinking of something a little more modern, even though it's already 15 or 20 years old; maybe you're almost in the right age bracket to remember it: Celebrity Deathmatch, on MTV. And you could have a celebrity deathmatch of someone working experimentally and someone working qualitatively.

Now we're talking my level! Okay. Now, I mean, it's kind of like a band that broke up and then got back together: you're going to buy it. And I'd say [00:26:00] this realization is happening here; it's building up a whole picture of what's going on, though. This is always a difficult question to ask a researcher, because every researcher wants to plug their own work. But if you were sitting with practitioners who had never even thought of spelling the word cognition, what are the big findings that you would want them to know about, and that you think they should even be applying to their own work?

I'm not even sure that I would qualify any findings as big findings. That's not usually how it happens with findings, you know, in research. In a lifetime, you keep contributing tiny, tiny little pieces to the puzzle. And anybody who sets out to do a PhD thinks: I'm going to change the world, I'm going to get this article in Nature, and everybody's going to talk about my stuff for generations to come. Which usually doesn't happen, because you contribute a minor thing about a minute aspect of a minor part of a part of the bigger thing, and that's being optimistic. So I think the challenge is that it's only by replicating experiments a number of times (if we look at cognition through the lens of doing experiments) that you would eventually get to make a claim about something that could then be considered a bit of a game changer.

What I'm always happy to share is stuff that is counterintuitive. So, you mentioned this before: we all have our convictions and our assumptions, we don't like them being challenged, and that's where I come in. I love pushing those buttons. A recent [00:28:00] example: we studied experimentally what happens during simultaneous interpreting with text. If you find the average interpreter who does that and ask them, so how is it done, they will tell you that, you know, it's all about prediction. Interpreters always anticipate, and we have the text, we can look ahead in the text, and we know what's coming; it's much more important to know what's ahead, what's ahead, what's ahead. So what are we going to measure? And we see that when interpreters read just to answer questions about a text, they do look ahead. It's a very natural feature of reading. But, and I need to stress this, in this one experiment, in the conditions we tested, interpreters did not look ahead. They were lagging behind with their vision, meaning that that information was being attended to for a different purpose. So is that big? No. Is it earth-shattering? No.
But it contributes a little bit to our understanding of the process, an even more complex process than the simultaneous interpretation of improvised speech. And ideally, but not necessarily, it might even trickle down into training.

We certainly do need sim-with-text training, and the couple of times I've had to do it professionally, I came across this idea that you're getting cognitive overload. It seems to me like sim with text creates a situation where you're going: should I be following the person, or the text, or what? An eye-tracking study (and I'm sure you've done the eye-tracking study) trying to interpret those eyes while they're trying to work out which source of information they should be following would be hilariously funny, because they're probably jumping about the place like kangaroos.

We [00:30:00] haven't done that experiment down under yet. But I have a good friend, shout-out to Marc Orlando, who keeps insisting I need to swing by. So I'm going to swing by and we'll do the tests, perhaps on kangaroos, or compare kangaroos with conference interpreters.

I once said, jokingly, that all interpreters are goldfish, the goldfish bowl, but that's a whole other joke. Now, one of the things that you became famous for when I was doing my PhD: around 2010, the world was still in awe, well, the interpreting world was still in awe, of the Effort Models, which is this idea that you break everything down into efforts. There's one for simultaneous, and everything breaks down into four or five individual efforts. So you can concentrate more on your analysis of what's going on, or you can concentrate more on your production, or you can concentrate more on understanding the source text. And I can remember...

I don't understand your use of the past tense. Where are you going with this?

Where I'm going with this is that I happen to remember (it was probably around the time I was doing my PhD, possibly slightly ironically) that you began to suggest, ever so gently, as you're known for doing, that perhaps the Effort Models don't stand up to experimental scrutiny. Could you explain the story of how you came to challenge the models, and where you stand with them?

Now, I know what you're doing here. You're stacking the odds in your favor for this wager; by the way, this is so transparent. I think the listeners need to know that there's a wager going: am I going to offend somebody first, or is Jonathan going to pitch the fact that he's currently working on a particular article on a particular subject first? So [00:32:00] he's definitely stacking the odds, for sure.

Right. So, Daniel Gile, whom I also know personally (I wouldn't say a close friend, but we know each other relatively well and have had a number of exchanges over the years), is the author of probably the, if not the most famous, then the second most famous model of the interpreting process, a particular take on it, after the Paris model. The Paris model is super simple: you listen, you deverbalize, you speak. And his model is not much more complex. So it's strikingly simple, and it's very well known, and it is also similarly useful for training; I would argue both are very, very useful models for training. Where does it get problematic?
In my view, it gets problematic in that it's not really a model in the traditional sense of the notion. You have a theory, often represented as a model, that you set out to falsify. It's only a good theory if it can be falsified: if you can say, yes, there is data that suggests it stands up, or, conversely, there's data that suggests that no, in fact, it's wrong. And then you can ideally develop a better theory, with a better model. That's how, in my perhaps old-fashioned view of research, and of science, you make progress. Now, I was told in those exchanges that, no, it's not really a model, even though it's called a model, and it even has a hypothesis attached to it, this Tightrope Hypothesis (we're not going to get into the detail); it's a conceptual framework, which can't be tested. [00:34:00] And that's where you lose me, if you want to use this as the cornerstone of doing research. Now, the problem is, it is so simple that few are the students writing a thesis on the subject who can avoid using it, because it's basically A plus B equals C. I'm oversimplifying an already simple model, so I am told, and perhaps we'll leave it at that.

Well, I suggested a different model, slightly more complex. Slightly? No, it's horrendously complex compared to what Daniel suggests, and that's where I lose people. I don't have as large a fan base as Daniel, but that's OK. My model aims at being falsifiable, and we're currently continuing our efforts to develop just that: experiments to falsify it. Because I am not wedded to my model, and if tomorrow I find evidence that suggests that no, in fact, it's diametrically opposed to the way it should be looking, then I'll just redraw it and have a V2, or a V3, and maybe charge people to upgrade. But I have been told (I haven't checked this myself; he writes his own blog) that I recently earned an honorable mention in one of his presentations as one of the people who doesn't understand his models. So I beg for his indulgence, and maybe he'll have the patience to run me through them and explain them to me.

One of the things... so, although I'm a field researcher, let me explain, for those who aren't familiar with the different ways of doing science. There's a way of doing science that says you come up with a hypothesis, which is an idea of what will be true. And the aim is to create a hypothesis that you can falsify, which means that you state in advance: if [00:36:00] I find this evidence, then I'm wrong. To rapidly, massively oversimplify Karl Popper. And even in my qualitative work, I was testing Franz Pöchhacker's model, the one he developed during his PhD, and I tried to turn that into a falsifiable model, saying: here's the evidence that would show it to be wrong. You can't really have evidence to prove a theory to be true; that's a whole other issue. But I wrote in my thesis: if I find this, then the model is wrong. It turned out I found the very thing that would tell you the model was wrong, and Franz was one of my examiners, which was rather fun. But this is where you can have something that is really useful as a tool, but you can never know if it's right. And I think that's a problem.

Right, but it need not be a problem. I think it need not be a problem.
I think it becomes problematic when you're no longer striving to improve the model, but rather have grown so attached to it being your pyramid, the one you want to leave behind, that you're not going to take kindly to people taking bricks out of that pyramid. And I have a wonderful example, from somebody who was on my PhD defense panel many moons ago now, and that is Ted Gibson from MIT. I had borrowed his theory very, very generously, obviously giving credit, only to find out, just about the time when I defended, or maybe a couple of months after, that he said: no, you know what? Actually, I was wrong. I have a better one now. And I'm like: no! I haven't even published my stuff, and you're pulling [00:38:00] the rug out from underneath me. So... but it need not be an issue. Models are there to be replaced by, ideally, better models. And if the same person can come up with a better model, so much the better; and if another person comes up with a better model, then that should be fine, too. Can anybody come up with a simpler model than Daniel's, or, back in the day, Danica Seleskovitch's? Probably not, because then we're really bringing it down to, you know, half a sentence or one word, and that's not much of a model.

And I think the hard thing is finding the balance between models that really do a good job of describing what's going on and models that you can explain in a few sentences. As someone who deliberately dances on the boundary between research and practice, that's always a difficult balance to hit, because you want to say something useful, but you don't want to say something inaccurate. And that's difficult, you know.

At the last conference in South Africa (and I still haven't quite worked out how the European Society for Translation Studies ended up in South Africa), I noticed a lot of research coming out from your department, from your group, on interpreter cognition in the classroom. I'm used to interpreter cognition in the experimental unit. Could you explain how you came across those ideas, and how that could help teaching?

Right. Well, this is something we've actually been doing, or aspired to do, in Geneva over a number of years now. It has traditionally been our approach to close the circle, and to not only do fundamental research (we do do fundamental research) but see to what extent we can apply it and reinject some of those findings into the teaching, for example, because that is the main mission [00:40:00] of our MA program. So what we've done is we have tried to describe the approach to training, or teaching, simultaneous interpreting, particularly the introduction to simultaneous as a very complex task, in terms of cognitive ergonomics. So once more we come back to one of my favorite topics, which is that of cognitive load, which obviously is borrowed from cognitive load theory, which actually originates from learning: how learning activities need to be designed to minimize the load on the learner, so as to maximize the learning outcome. It's strikingly simple, but to do it is not nearly as simple. And so we've tried to connect the dots, and there I use a political metaphor: reach across the aisle, because that's what you do. So we reached across the aisle, but seeing that we have a majority, we still pushed it through, like the 1.9 trillion package right now.
So we reached across the aisle and really tried to bring together different suggestions that are found in the literature, from different sides. Those who would like to treat the interpreting task holistically, to look at it as a holistic thing, where even the attempt, or the idea, of splitting it up into smaller bits is heresy. And the others, who say: well, you're out of your mind, because you're basically cooking minestrone with way too many ingredients in there, and we need to take them out to understand them. All of that in an attempt to improve the efficiency and the effectiveness of [00:42:00] these first few weeks of introducing simultaneous, or introducing students to simultaneous. And so we play a lot off of what you said before, this whole separation of channels of information: what information is provided when, and how is it provided? Visually? Verbally? Is it provided as an auditory verbal stimulus, as a visual verbal stimulus? Is it static information? Is it dynamic information? This is where I actually dress up and we cook in class. I actually make pancakes, and I have the chef's hat and everything. Both research and training can be a lot of fun. So that's where the idea comes from. The model has now been put out very recently with one of my current students, Eléonore Arbona, a shout-out to her; she does a lot of work on the processing of visual stimuli, gestures and such. But it's in its first iteration, and we are only now designing the experiments to at least partially test some of the assumptions. So the model's there, and now we're going to test the assumptions, and then we're probably going to chip away at it, change it and modify it. So, obviously: caution, do not apply expecting miraculous results, because for the time being this is very prototypical.

And I think the thing is, sometimes you don't know what works until you explore it. A lot of research on teaching interpreters has historically been based on relatively small numbers, so it must be difficult for an experimentalist like you; doesn't that act as a limitation on the stats you can run on this?

This is, again, incidental, but we [00:44:00] focus on the introduction to simultaneous, and as you probably know, there are schools that do not front-load their training program with consecutive. So in theory, I can devise experiments that use proficient (whatever that means) bilinguals, and see how I would need to devise certain activities for them to effectively and efficiently acquire an additional skill, to get them started on this simultaneous interpreting thing. But yes, the participants are obviously the infamous n. The number of participants you get, which has a direct effect on the effect sizes you can detect, is always a crux in interpreting research. But we've been improving on that end, too, and there are a number of studies now coming out using professionals, with twenty-four participants. If you do everything else right, twenty-four participants will give you nice statistics. You don't need the same n as you would in a medical study, where you need to test a thousand people on a few correlations.

Yeah, I can see how that would work.
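As an aside for listeners who want to see the arithmetic behind that claim, here is a minimal power-calculation sketch in Python. It assumes a within-subject design and a hypothetical effect size of dz = 0.6; both numbers are the editor's illustration, not figures from the episode.

```python
# Sketch of the n-versus-power trade-off discussed above, using statsmodels.
# The effect size (dz = 0.6, a medium-to-large paired effect) is hypothetical.
from statsmodels.stats.power import TTestPower

power_analysis = TTestPower()  # power for a one-sample/paired t-test

# Statistical power achieved with 24 participants at alpha = .05 (two-sided)
power = power_analysis.solve_power(effect_size=0.6, nobs=24, alpha=0.05)
print(f"power with n = 24: {power:.2f}")  # roughly 0.80

# Conversely, the n required for 80% power at the same assumed effect size
n_required = power_analysis.solve_power(effect_size=0.6, power=0.8, alpha=0.05)
print(f"n needed for 80% power: {n_required:.1f}")  # roughly 24
```

Under these assumptions, twenty-four participants do indeed give "nice statistics"; with a smaller true effect, the required n grows quickly.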
Now, the interesting point is that you talked about closing the circle from research back into classroom training. Is there a place for closing the circle between research and the continued development of practitioners? So, for example, taking what you're learning about teaching in the classroom and then using that to develop courses to help upskill or reskill practitioners in some way?

Yeah, I mean, we certainly do that through a program we actually run, which is the MAS, the Master of Advanced Studies for Interpreter Trainers. I think we're still the only ones who offer a course like that, and you would think: gosh, talk about a niche market. So not only is training interpreters a niche market, but training [00:46:00] those who train interpreters, and turning that into a full-fledged MAS program, is even more of a niche than that. But that's an area into which we try to inject lots of findings, this whole atomistic approach of peeling away the layers. But also, you know, the odd workshop that I've been asked to do on stuff like how best to prep for sim with text. Well, that's where the findings that came out of the study were immediately transposed, to argue for why we would suggest interpreters do one thing and not the other. And it's really neat when they say: well, you know, but this is what we do. And I give them the quiz and ask: what do you do? And people traditionally opt for (and that's great, because otherwise it wouldn't work) the wrong answer. And I say: no, we have data on this; in actual fact, let me tell you what you do. Which is not always a good position to be in.

And it does enhance my reputation for offending people. Because, let's take an example, and it's actually true: somebody is an interpreter, a conference interpreter. They're in this at least perceived elitist, small group of a few; certainly only a few thousand members of AIIC, and we can draw the circle wider and say maybe there are some dozens of thousands of professional conference interpreters in the world. Yeah, you're special. And so you know how you do what you do, right? Because you're special. We all want to be special, and you don't want somebody telling you, because they stuffed a few people into a lab, that they know better what you do, and how you do it, than you know it yourself. But that's the quest, right? We need to go beyond simply [00:48:00] being told. That's what keeps me up at night. Not always and only your show; often, but not always and only.

And I think the other thing is that there seem to be more and more researchers (I was saying this to someone just today) who are happy to poke the bear.

Yeah, I mean, that's probably, I don't know, societal changes, breaking down barriers, you know, things shifting. Maybe there are more bears around; maybe they're just more pokeable.

And I think also, I've heard of people whose teachers said, you know, don't rock the boat. My supervisors would send me out looking for boats to rock. But I think it depends on the environment you're in, and also on awareness of the big questions. And I like this idea of, you know, interpreters thinking of themselves as elite and having someone come out and say: maybe it doesn't look like that. Alex, I associate you with both sides of that equation. Have you ever been made uncomfortable by it? I don't want to force you to talk about that.
Yeah, I think that's definitely happened. It's definitely what you were saying: everybody kind of thinks they're special and that they know exactly how they're doing it, and you kind of said it in your own way. And then if somebody comes along and says, hey, actually, I've done some research on this, and I can empirically say this is what we found during that research, and it contradicts what you're thinking... I could definitely see why some people wouldn't be fans of that. But I guess that's just kind of how it goes, right?

Yeah, it's the danger of research. Just before we come to wrap up: how do you deal with uncertainty in research? In experiments, you can actually give a number to how certain you are, a percentage, or a p-value if [00:50:00] you're a connoisseur of statistics. How do you deal with the uncertainty in interpreting research? Because we tend to have a small number of people involved, and the statistics can never make us 100 percent certain.

All right. As I was listening, I thought to myself: I didn't understand that; can you repeat that without the accent? Sorry, I promised I would squeeze in something that is offensive to someone.

Yeah. OK, this is what the video shows that the final recording does not.

I mean, that's the thing, and that's perhaps why, of late, there's been a convergence among the different approaches. You can obviously go nuclear with the statistics, and Bayesian statistics look intimidatingly nuclear even to me, compared to what I got away with doing during my PhD, which was perfectly fine at the time. Now you have very complex models, and there is also very good software out there allowing you to do that. But you need to live with this idea about your claim, particularly if you do inferential statistics: if you're trying to extrapolate what you found in a small sample to how it would apply to the population. Your population, not the general population; the population of interpreters at large. And from that perspective, it makes sense that experiments only look at such a small, microscopic part of the whole thing, because that's the only thing you can make a relatively modest claim about. And again, this [00:52:00] brings us back to Popper. If you find, repeatedly, ideally, that something is not the case, well, then you can claim it's not the case; it must be something else. You cannot claim that something is always going to be like this. You can claim that something has not always been like this, or is not always like that. And so it's the long game that you play when you do research, much to the frustration of practitioners, because they would like to have all the answers after one experiment that ideally can be put together, run, and published in four months, and doesn't cost anything.
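To make that inferential step concrete, here is a toy paired t-test in Python on simulated scores. The numbers are invented for illustration, not study data; the point is only what a p-value does and does not license.

```python
# Toy paired-samples t-test (editor's sketch, simulated numbers, not study
# data) illustrating the inferential claim above: the p-value quantifies how
# surprising the observed difference would be if there were no difference
# in the population of interpreters.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 24  # the kind of sample size discussed earlier

# Hypothetical quality scores for the same interpreters in two conditions
baseline = rng.normal(loc=70, scale=8, size=n)
with_text = baseline - rng.normal(loc=3, scale=5, size=n)  # small decrement

t_stat, p_value = stats.ttest_rel(baseline, with_text)
print(f"t({n - 1}) = {t_stat:.2f}, p = {p_value:.3f}")
# A small p licenses only a modest claim about this one population, and,
# per Popper, repeated failures to falsify never prove a model right.
```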
Thank you; you just described the entirety of why I don't like research. I'm just like: God, this all sounds so tedious and long.

Yeah, it reminds me of Geneva, when we had that wonderful town hall and we asked the practitioners: what would you like from research? And a couple of the answers were basically: prove us right. And I thought... I would love to be proved right, but I've seen too much research to realize how dangerous a game that is.

And I think that's the great privilege one has when you work for an academic institution: you have academic freedom. Nobody can put any sort of pressure on you for your results to go either way. And people know that, and I'm very pleased that people still come to us and say: you know, we want you to run this research; it can't be run by the association that will then use the results as a political argument in negotiations. It's just not on. You know, if you want to know whether smoking is healthy, I'm not sure I'm going to buy into the results of the study Philip Morris just sponsored. Monsanto tells us that genetically modified corn is the greatest thing on Earth, and Rio Tinto tells us that we need to keep digging because it's wonderful for the planet. OK, you see where I'm [00:54:00] coming from. So I think that is one of the biggest privileges of being able to do this job, of being a professional researcher. Even though, and that's the sad note on which I'm going to end, the longer you stay in academia, the more admin you end up having to do, the more management responsibilities you get, and the less research you do.

Well, I think it's definitely true that the last stint of proper, untarnished research, where you can dedicate all the time in the world to it, is going to be your postdoc. Once you're on your own (you know, you've graduated, you can fly), you have a year or two or three to do your stuff, and after that, yeah, they're going to take that away from you.

That is a very sad note: they clip your wings. But you do it partially by proxy, and that's why you have a team. And I've been extraordinarily lucky to have a wonderful team, which, you know, rotates as you get new students. We have small teams in our different labs, no more than three, maybe four students, and they partly carry the torch, slightly differently and in a slightly different direction, but they do carry it. And in many respects, they do a much better job than I could have or would have. So that, too, is part of the game.

So, on to final comments. We love to end the show on kind of big ideas and quick thinking. If there's one idea that you would like a practitioner who's never read research before to take away from this podcast episode, what would it be?

Does this have to be a one-liner?

Take your time; we have a very good editor. So what [00:56:00] would it be?

In my experience, interpreters are exceptional, but they're also exceptionally ordinary, by which I mean that in many ways interpreters behave very similarly to, quote unquote, naive participants, your average Jane or your average Joe. In some ways they don't, but in many respects they do. So, yeah, asking interpreters how they feel about a particular thing in interpreting is good, but it's not good enough. It works for certain things, and it shouldn't be discarded, but you need to go beyond that. There are examples, like the study on prolonged turns, where interpreters were told: you keep going past the 30 minutes as long as you think you're providing good quality. And many of them did, and quality went down the drain. Now, the reasons might be debatable; perhaps it's because we have been conditioned to only be able to do 30 minutes and not many more. That's always my quip about those findings. But what's much more important is that the interpreters weren't aware.
And it goes along the lines of unrecognized sleep loss, where if you sleep less than six hours a night for, I think, two weeks, you really start underperforming; it's as though you were drunk. And I'm sure that you can relate to that feeling.

I don't know about the drunk part...

You know both of them, for sure! But you don't notice it yourself, and that's the point. That's what we need research for; that's what it's out there for. And those results will hopefully then feed into the wellbeing thing, because that's what those (I don't want to call them hard findings, but at least [00:58:00] more informed findings) can be used for. But I do have a quote that comes to mind, completely unrelated. It was Spock who said... you remember Spock? Kirk and Spock, from back in the years?

That's right.

And he said, at some point, something like: logic is the beginning of wisdom, not the end. And a bit along the lines of that quote, I think experiments might well be the beginning of some wisdom, but they're certainly not the end.

Well, that is a great way to sign off. Kilian, thank you very much.

Thank you.