NOTE: THIS IS AN AUTOMATICALLY GENERATED TRANSCRIPT. WE'RE WORKING ON BRINGING YOU FULLY EDITED TRANSCRIPTS IN THE FUTURE.

Jonathan: Welcome to Troublesome Terps, the podcast about the things that keep interpreters up at night. It's my pleasure to be with you again. We have three of the four Troublesome Terps: Sarah is off enjoying new motherhood, all the best to her, and we hope she doesn't miss us too much. Now, it is fantastic to introduce a man of whom it was recently said that some of his articles are great. Ladies and gentlemen: Alexander Drechsel.

Alex D.: Good evening, everyone. I'm still pondering this, I don't know, I haven't written a lot of articles recently, but who knows? I'll just take it the way it is. In any case, moving right along and welcoming the other Alex on this call, Alexander Gansmeier from Munich. Good evening. How are you, Alex?

Alex G.: Thank you. Thank you very much. I [00:01:00] forgot which song I was supposed to sing. That's probably a good sign. I'm not going to sing anyways. Probably best not to sing. But yes, here are the very young and energetic Troublesome Terps, very youthful energy tonight. And without any further ado, I would like to introduce a very dear old friend of mine, also a very youthful friend of mine: Elena Davitti. Elena is a senior lecturer in translation studies with expertise in interpreting and a PhD in translation and intercultural... wait, I'm reading out this whole thing. I'm not reading out the whole thing. Elena, why don't you introduce yourself? That's much better.

Elena: Hi, everyone. Thanks for having me today. It's lovely to be here and to see you, Alex and Jonathan and the other Alex.

Thanks for joining us. Let's just skip the bio. Why don't you tell us a little bit about how you two met, you and Elena?

Alex G.: Yeah, that's actually a good idea. So Elena and I have known each other for, God, when was that, like eight years ago? I [00:02:00] think, right, yeah. This was in Manchester, beautiful Manchester, where Elena and I both were the co-event organizers at the North West Translators' Network. I forgot how we actually became co-event organizers, or why I was so keen on doing that, but it was a very good experience, I have to say. Definitely good times. So that happened, and then I moved back to Munich, and then you moved on to Surrey, I think, around the same time. Right, exactly. Yeah, good times. But I think what you and Jonathan have in common is the focus on research, on interpreting research, at the moment. Right? So I'm sure you guys must have crossed paths many times.

Elena: We met for the first time in person at a previous conference. And you were working so, so hard the entire time.

Jonathan: Yeah, that sounds like me.

Elena: Every time I saw you, you had a laptop open, and I'm like, oh, is this what I'm supposed [00:03:00] to be doing? There is a researcher working, and I'm sat here enjoying a cup of coffee. What were you actually writing all that time at the conference, I wonder?

I think, when I wasn't chatting, I was just about to be really, really silly on stage, which is completely unlike me. Yeah, usually you just lie down and sleep. I never pull cool stunts on stage like falling asleep. Must be another Jonathan. Yeah.
Jonathan: And I think the other thing that we both have in common is that we're both very much field researchers, both very much into seeing things as they are. Am I right? We both like to go into the field, and we're both really interested in real-life practices, what's actually going on.

Elena: Absolutely, empirical research.

Jonathan: That's a nice word for it. Sounds much fancier, doesn't it? But let's go a couple of steps back, because I'm always interested in how our guests became what they are. So how did you [00:04:00] get into interpreting, Elena, into interpreting as a practice? Because you did practice before getting into research full time.

Elena: I would say there are two elements of me that I believe led me to this, although kind of subconsciously. The first is probably that there has always been this fascination with spoken language, something that I never thought could turn into a job, a skill I could somehow use to make people communicate and make a difference for them. And then probably the second thing was my urge to travel, but not travelling in the sense of, you know, going to a place for a couple of days. It was really to travel and have the feeling of settling somewhere, of getting to know places from the inside. I was constantly looking for opportunities to do internships abroad, jobs abroad, that would take me away for two or three months at a time. And that's probably how I ended up in the UK for the past ten years. And this idea became [00:05:00] stronger when I started working in community settings, where really what you say and how you say it makes a difference for somebody and facilitates success in a way that I never thought possible.

Jonathan: That's kind of similar to origin stories I've heard from other interpreters, which is lovely, I think. So I'm just wondering, describe the situation to us a little: you're in the UK, you work in community interpreting. How did you make the switch, or how did you move towards research?

Elena: I loved practicing, and I really liked being out there in the field. But there was something: I was never satisfied after finishing my assignments. I would never go home and say, OK, that's done, maybe review the terminology and then move on to the next one. There was always what I would now call a self-reflective attitude, trying to really understand what went wrong and what I could have done differently, you know, these dynamics [00:06:00] which then became the main topics of my research: interactional dynamics, communicative dynamics, how people co-construct meaning. And I kept thinking about it. I just didn't have the tools to explain it. I didn't have the language, I didn't have the awareness, to be able to systematize, if you want, what was happening. And I also felt, very many times when my role was challenged, working with vulnerable kids or with patients, that I didn't really know how to make decisions. I was following common sense, but I felt I needed to understand this more. And that's how I got into research. It really gave me some of the answers I was looking for, and I discovered there were many more questions than answers.

Jonathan: Yeah, you pull one string and then the whole thing just unravels.

That's super interesting, and Jonathan, that's a question I had actually for you as well: does it take a certain kind of person,
a special kind of person that looks at these situations and [00:07:00] thinks, well, I need to do research to be able to answer these questions, to understand this better? Because I think a lot of interpreters would just see the situation and say, OK, well, I'll try something else next time, and maybe that's just the way it is. I'm wondering if it's a specific type of person that would go: I need to become a researcher to find out more about this.

Jonathan: Well, Andrew Chesterman has a saying that everyone's a theorist, it's just that not everyone's a good one. And he points out, he was talking about translators, how they make their own ways of finding solutions on the job and start building rules on the basis of "that works for this client". Something I've noticed about researchers is that the really good ones will say: OK, that worked here for this group of interpreters, does it generalize anywhere else? Is there a view outside of that? And I don't know about Elena, but I'm sure you've come across them, these inspirational researchers who can [00:08:00] say: here's something that worked here, here's something that happened 5,000 miles away. Professor Ian Mason, I think, was a specialist at saying: look at all these things that we can relate, here's the common thing that links them together. That's when you know you've got a great researcher, someone who can find patterns where others would look and go: well, one's court interpreting, one's community interpreting, one's conference interpreting, they've got nothing in common, how could I research that? It's the inspirational people who can do that.

Elena: I think, again, going back to research, clearly experience is key. Experience helps you develop, even subconsciously sometimes, some of the answers, some of the understanding of how to go about certain situations. So, finding the patterns, finding the common ground, understanding how to approach certain things from a more abstract level as well.

Jonathan: And I love how, in your work, your questions seem to get [00:09:00] more and more refined, but in a way also more and more practical as you go along. You can see it in your work. I think it was Graham Turner who said to me that if you're doing research right, you should end up with more questions. It is very difficult: you start digging into one thing and so many questions open up.

Elena: Yeah, for me that happened at different stages in my research. I started with interactional dynamics: how people co-construct meaning, how people understand each other. And I looked at it through the lens of multimodality. So I was looking at what people say, but also at what I call embodied behaviour: how they use gaze, gestures, body posture, and whether these are complementary to what they are saying or sometimes in conflict with it. And that was applied to my own experience, mostly in pedagogical settings. Then I joined Surrey and started working on video-mediated interpreting with some pioneers like Professor Sabine Braun. And [00:10:00] I discovered this whole field of research where multimodality was also very relevant: how do we communicate verbally and nonverbally through a screen, where everything is two-dimensional? So I started seeing multimodality in a different application.
And we started looking into how that can feed into training, because I do believe in the value of research that feeds into training, into teaching, into practice. And then, continuing on this journey, I started discovering hybrid techniques, hybrid services at the interface between human and machine, and I discovered another dimension of multimodality. So, yeah, questions keep coming up along the way.

So I'm just wondering, sorry, before we move ahead: can you unpack the whole idea of multimodality a little bit more? I feel like I haven't quite grasped it.

Elena: Yeah. It was the idea of studying how people [00:11:00] co-construct meaning. So, in an interaction where you have the primary participants, as we call them, and the interpreter: how do they communicate, not only through what they say, which has mostly been the focus of early interpreting studies, but also through how they use gesture, gaze, body posture, which is another very important dimension of meaning-making during an interpreter-mediated event. And this was quite new when I started looking into it. There weren't many studies that had done it, and I believe that was mostly for practical reasons: I had to video-record real-life interpreter-mediated events, and that wasn't easy. It took me a whole year and a half just to record two hours of interaction, to get all the permissions, you know. So it is a very qualitative kind of study, but it enabled me to try to understand how these two dimensions, the verbal and the embodied, are so closely intertwined, [00:12:00] how sometimes we say things and convey contrasting meanings through embodied behaviour. So that was, in a very simplistic way, what got me into multimodality applied to interpreting.

Jonathan: And I think, even if you look at some recent work from Kilian Seeber in Geneva, it seems like interpreting studies is still trying to get a handle on this idea of multimodality. Still, the majority of studies are based on people transcribing what the interpreter said, and I know there's been some work discussing that. But there's this idea that when you interpret, even in a booth, you're not just processing words. I mean, I joke with people that conference interpreters are experts in the backs of people's heads. There are all these layers that build up, and this is what the multimodality thing is: how many layers are going on, and what are the relationships between them? That becomes complex very, [00:13:00] very quickly. I would imagine that most competent interpreters deal with that stuff without thinking, and it takes researchers to go: hold on a minute.

Elena: It is actually work that became particularly relevant when looking at remote interpreting, or video-mediated interpreting in general: to see how we come across through the screen, for instance, how we interact with other people through the technology and with the technology at the same time, how the whole perception somehow can get distorted. It just gave us some tools to add this layer of analysis, as Jonathan was saying, which is enriching and can give some new perspectives and some new depth to understanding interactional dynamics. So, very complex, and there's still a lot of work to do, but it's a dimension that is really important.
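For listeners who like to see method made concrete: the time-aligned, tiered annotation that multimodal analysis relies on can be sketched in a few lines of Python, in the spirit of ELAN-style tiers. The tier names, labels and timings below are invented for illustration; they are not the coding scheme Elena actually used.

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    tier: str     # e.g. "speech", "gaze", "gesture"
    start: float  # seconds from the start of the recording
    end: float
    value: str    # transcribed words or a coded behaviour label

def overlapping(a: Annotation, b: Annotation) -> bool:
    """True if two annotated segments overlap in time."""
    return a.start < b.end and b.start < a.end

# Hand-made toy example of three time-aligned tiers:
annotations = [
    Annotation("speech",  2.0, 4.5, "patient: I feel fine"),
    Annotation("gaze",    2.5, 4.0, "averted from interpreter"),
    Annotation("gesture", 3.0, 3.8, "head shake"),
]

# Pair each stretch of speech with the embodied behaviour it co-occurs with:
# the kind of alignment a multimodal analysis starts from, e.g. to spot a
# verbal message contradicted by gaze or gesture.
speech = [a for a in annotations if a.tier == "speech"]
embodied = [a for a in annotations if a.tier != "speech"]
for s in speech:
    for e in embodied:
        if overlapping(s, e):
            print(f'"{s.value}" co-occurs with {e.tier}: {e.value}')
```

The point is simply that the "layers" Jonathan mentions become comparable once every behaviour is a time-stamped segment on its own tier.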
And you started working on this topic before the pandemic started [00:14:00] already. So in a way you were quite well prepared for this new normal, as they call it, this new situation we work in?

Elena: Yeah, yes. It's quite amusing how certain recommendations, ideas or findings all of a sudden became so relevant, things that we had been talking about and discussing for years, and always with the caveats: OK, there's still reticence, there's still some fear towards these new modalities. And then all of a sudden it becomes everyday business.

It's almost like a field trial on a global scale, isn't it?

Elena: If only it could be, and if only we could be everywhere at the same time to research everything.

Alex D.: Yeah. I think we should probably get away a little bit from remote interpreting; it's actually not the main topic that we wanted to discuss tonight, because our thought was to look at [00:15:00] other interesting ways of working that could be relevant and of interest to interpreters. And that's something that you've been looking at as well. I should point out that this topic has been on our bucket list for a long time now; we've always wanted to talk about it, even before the pandemic, but it seems it's become much more urgent now, much more pertinent, I guess. So, just opening this up to all of us here: what could those other things be? I guess something that a lot of interpreters do is translate on the side, obviously, and some of us may be doing more translation work these days, of course. And then maybe stuff like language teaching. But there are other interesting fields of work out there as well, aren't there?

Jonathan: I was going to say, I've heard of people being more and more attracted to voiceover work, and there seems to be a general trend towards people thinking that Hollywood or the film industry [00:16:00] is going to be the white knight that rides in to save parts of the language industry. I guess it's the same way you meet a lot of translators who would love to be literary translators; there seem to be some interpreters going: I would love to be a voiceover artist, I'd love to read audiobooks. The pattern will be familiar enough.

Alex G.: In that same vein, it's not voiceover work that I've been doing a lot more of, but I've actually been interpreting videos and then recording the audio, sending a separate audio file that the client can listen to again alongside the video to make sure that everything matches. So it's still sort of interpreting, but at home, recording into a little recording device, which reminds me a lot of uni, because we used to do that at uni all the time.

Elena: I was thinking of this concept of hybrid services, crossing the boundaries of established modalities, of established practices [00:17:00] and of established ways of working. The one that I'm focusing on at the moment, which is emerging, is called interlingual respeaking, and that's a form of live subtitling via speech recognition. To put it very simply, it's a service that crosses the boundaries of established practices, as it combines skills from simultaneous interpreting but also skills from subtitling. It also crosses the boundaries of modalities: normally, in spoken language interpreting, we tend to stay within the boundaries of one modality, right, spoken to spoken. Sign language interpreting already crosses modalities. But here we are talking about speech to text, so going from spoken input to written output.
And it's also a form of human-machine interaction. It's about learning to deal with speech recognition [00:18:00] software to be able to optimize one's output. In my view, this is fascinating. I call it simultaneous 2.0, because it's an enhanced form of simultaneous: it's complex, but it's also doable. It is being offered, although not on a large scale yet. But talking to industry stakeholders, we know that there's interest there, and I think that interpreters would be very well placed to at least explore it.

So just to make it very, very simple: interlingual basically means that it involves two languages, so we're working from one language to another, right? And respeaking is basically another way of saying live subtitling. Can you give us a flavour of where this would be used? Is this something that would be used on television? Where would this be useful?

Elena: Yeah, respeaking effectively is live subtitling via speech recognition, and that's quite important, because live [00:19:00] subtitling, traditionally in the same language, so intralingual, can be provided through different methods: not only speech recognition but also keyboard-based methods, stenotyping or velotyping. Maybe one of the reasons why in interpreting we haven't really heard so much about it is that it's been taken up by audiovisual translation and media accessibility as a service to provide live subtitles in the same language, mostly for accessibility purposes. So initially it was used for deaf and hard-of-hearing communities; we're talking about perhaps starting around the early 2000s, for TV programme broadcasts, weather forecasts, to make them accessible. But then, more and more, it started to be used for live events as well. And that's where you start seeing the similarities, and now the interlingual dimension, going across languages, opens up [00:20:00] new opportunities again. What I like about the practice is this accessibility, the idea of making content accessible for a wider audience. We're not only targeting specific communities of users; we are potentially targeting anyone: people with hearing loss, native speakers, non-native speakers, people that could benefit simply from reading along for better comprehension. So it is really an inclusive service, and this is why I believe it's gaining ground and gaining interest.

So what clients want is some kind of written text delivered across languages in real time, whichever way you get to this product?

Elena: There are many different ways which are being experimented with at the moment. I'd be happy to elaborate on that.

Alex D.: Yeah, I was in a few of those tests and trials as well a few years ago. [00:21:00] One of the things that we did, for example: a company came to us to show what they had, and they had this system where they used these Velotype keyboards, a specialized keyboard where you don't type individual letters but, effectively, syllables, so you can get up to a higher speed. And that would be used to produce intralingual subtitles. So you'd have a written output of what the speaker is saying, and then you, as an interpreter, could use that basically to then interpret.
So basically as a form of relay, I guess, which is kind of interesting to think about. But then, of course, you could just say: well, the interpreter could learn that technique, you know, using a keyboard, or you could use speech recognition, not Siri and stuff like that, but professional-grade speech recognition software. And that's really interesting, because the company that came also said: interpreters actually have exactly [00:22:00] the kinds of skills that we would need to do this across languages. So, yeah, I think it's definitely interesting to test that out. Have you done any of these sorts of practical, hands-on tests, or is it more early research at this point? What's the situation?

Elena: Yes. Research surrounding interlingual respeaking is quite young, but we have carried out a pilot study within the broader framework of a bigger study called SMART, which stands for Shaping Multilingual Access through Respeaking Technology. Now, SMART is a project that we were recently awarded by the UK Economic and Social Research Council. The project is officially led by the University of Surrey, but it relies on a consortium of national and international stakeholders, both academic and industry ones, that provide really valuable conceptual input and insights into the development [00:23:00] of interlingual respeaking as a practice. SMART officially kicked off in July this year, but in preparation for it we ran a pilot study to test some elements of the methodology. The study was carried out on a group of twenty-five postgraduate students with different backgrounds, ranging from interpreting to subtitling to intralingual respeaking, or a combination of these different skills. The pilot itself proved very useful. First of all, it showed how difficult it is to categorize participants according to clear-cut professions: most students presented a combination of clusters of skills, as we call them, in their background, which had to be accounted for in the analysis. And it actually got us thinking that this is reflective of the reality of the language professions, where different professionals might combine different skill sets [00:24:00] in their backgrounds, and that it is important to understand which skill set, or which combinations, can be most conducive to the development of respeaking expertise as an inherently hybrid practice. The pilot also showed that our best performers were those with a composite skill set, those participants who combined mostly interpreting and subtitling in their backgrounds. But, as I said, the pilot didn't test all possibly relevant skills, so for the main study we are aiming to open up participation to a much broader range of participants with different skill sets. There are other studies that have started to carry out empirical research on interlingual respeaking, although this is a very small, if rapidly growing, body of research. Now, the gap that SMART aims to address [00:25:00] is involving language professionals as participants in the study, rather than students. This is very important for me.
We are targeting language professionals from different walks of life, people with significant experience in one or more disciplines cognate to interlingual respeaking, such as spoken, but also sign language, interpreting, translation, and subtitling, whether live or pre-recorded. So we are open to different profiles; of course, I'm interested in seeing how people with interpreting skills perform. As part of the study, we are developing a substantial upskilling course that will be part of our experimental design, and we will provide it free of charge to participants who join the study. It will be an advanced introduction to interlingual respeaking, offered as [00:26:00] a continuous professional development opportunity as well as a way to discover a new, exciting practice for those professionals who might be interested in adding it to their portfolio of professional services. The course will be offered online, will be self-taught, will be flexible, and will offer an opportunity for participants, as I said, to discover this practice and get substantial training in it, and for us to collect data, which will be invaluable for further studying the competencies required in this practice, as well as the whole process. So it will also be an opportunity for those participants who decide to join us to take part in research and in shaping an emerging practice.

I think that's something that you, Jonathan, have a few thoughts about, right? Using interpreting students for studies. [00:27:00]

Jonathan: It's called convenience sampling, and it's perfectly understandable in a lot of cases. One thing that's really interesting is that when we are mapping out research in interpreting, we tend to go for: can we understand the process, can we understand the profile? What would be really interesting for me would be to say, OK, let's take a group of professionals and let's get users to tell us what kind of thing they want the most. Because, for example, I know interpreters trained before a certain point were trained in the paradigm of: say everything that was said, don't add, don't omit. Well, if you're doing interlingual respeaking that's going into subtitles, that's not going to work. Whereas students trained more recently, I imagine, are probably taught that, OK, [00:28:00] there are times when condensing what was said is exactly the right thing to do, there are times when you have to think: well, we don't need that detail. It reminds me a little bit of the service of audio description, where the audio describer is never going to describe everything that's on the screen; they have to describe the things you need to know to be able to understand what's going on. Is it a similar process, looking for people who can handle, say, a sports commentator talking at 120-plus words per minute, where what you really need is the best text we can get on screen?

Elena: We are talking about a form of simultaneous interpreting, and in order to handle it, we know that strategic reformulation is important. It's not always possible to go word for word: you condense, you reformulate, and that's part of the language transfer bit. So there are moments when you need to condense, and all the theory, the strategies, everything comes in very handy. This is part of interlingual respeaking; it's exactly the same process.
The difference [00:29:00] is that in interlingual respeaking you also need to handle your décalage, perhaps even more skillfully, because you are not only translating or rendering what is being said: you also have to verbalize punctuation, because you are producing something that is going to be displayed on a screen, either as subtitles or as scrolling text. It takes up more words to do interlingual respeaking precisely because you have to verbalize punctuation. So I believe condensation is a very, very useful skill to have there, in order to cope and for your décalage not to become too long, because in interlingual respeaking you can see the subtitles, so they need to be synchronized with the source.

Alex G.: This is really funny: I actually remembered that I have basically done this, where I was the machine. I had a job, this was relatively early in [00:30:00] COVID, where a client needed us to provide subtitles for a forty-five-minute workshop. The workshop was in English, and we were supposed to provide the subtitles in German. For whatever reason they didn't want our voices to be heard by the German audience, but they wanted the subtitles. So my colleague was doing the interpretation, severely condensed, while I was typing like a madman and sending it into the chat for the audience. And it's exactly what you were saying: she didn't voice the punctuation, but I kind of listened with one ear to the original speaker just to hear what was needed, you know, any period or comma or exclamation point. So that was basically that: we split the roles, and I was the machine.

Elena: That's very interesting, because when [00:31:00] I talk about interlingual respeaking, I talk about this workflow where you have one person doing everything: all the cognitive processing, the language transfer, adding punctuation, as well as editing on the go. And that is why I call it simultaneous 2.0: you have to do additional things, like editing on the spot. And editing is not the same as self-repairing when you are interpreting simultaneously; it's really about, for instance, using the keyboard to correct your mistakes while you are doing all the rest. It requires very advanced psychomotor skills as well. So it's an additional layer, if you want.

Yeah, because usually there's another person who does the editing.

Elena: Exactly. There are other workflows; in some countries where this is offered, there are two or even three people working together, splitting roles, as Alex was saying. You have the person who does the simultaneous bit, then the editor, and potentially a corrector; you can get to teams of three to four people. The upside is that accuracy is really, really good. The downside [00:32:00] is the delay: the latency grows, so you may have several seconds of delay. It depends on what is prioritized in the specific context, by the specific client.
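Since verbalized punctuation comes up in both the workflow description and Alex's anecdote, here is a minimal sketch of what has to happen to it on the display side: dictated tokens such as "comma" are mapped onto marks attached to the preceding word. The token inventory and the function are invented for illustration; real respeaking software ships its own command set for each language.

```python
import re

# Spoken tokens a respeaker might dictate; this inventory is invented and
# would in practice depend on the software and the working language.
SPOKEN_PUNCTUATION = {
    "comma": ",",
    "period": ".",
    "full stop": ".",
    "question mark": "?",
    "exclamation mark": "!",
}

def render_punctuation(respoken: str) -> str:
    """Turn a respoken stream with verbalized punctuation into display text."""
    text = respoken
    for token, mark in SPOKEN_PUNCTUATION.items():
        # Attach the mark directly to the preceding word, as a subtitle would.
        text = re.sub(rf"\s*\b{token}\b", mark, text)
    # Re-capitalize sentence starts for the on-screen text.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return " ".join(s[:1].upper() + s[1:] for s in sentences if s)

print(render_punctuation(
    "the workshop starts at nine comma not ten period any questions question mark"
))
# -> The workshop starts at nine, not ten. Any questions?
```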
Elena: And interestingly, another workflow that we are seeing at the moment, used when interlingual respeakers, professionals who can do this, are not available for certain language pairs, is one where you have a simultaneous interpreter providing the service in the normal way and an intralingual respeaker sort of taking the floor from the interpreter and respeaking the interpreter's output, in English for instance. So it's this kind of team where you couple interpreters, who do the language transfer bit, with intralingual respeakers, who then turn that into subtitles. Talking to industry stakeholders, we have global players on the project, the broadcaster Sky, access service providers, [00:33:00] and they all describe the need for a service with this real-time, interlingual written output, and the need, very often, to try out different workflows, sometimes creative solutions, because at the moment there isn't really a large supply of professionals who can do this. And there's a huge opportunity, I think, for interpreters and other language professionals to at least explore the practice and get involved in shaping it, ultimately, which I think is just like with remote interpreting.

Alex D.: Yeah, absolutely. And just to give you another data point: in those tests that I was involved in with this technology provider, they were actually also using a solution where they would have intralingual respeaking, so they would provide a text feed and then just run the text feed through Google Translate, so people could join in [00:34:00] any language they like that is available on Google Translate. So, you know, people are trying those things out. And I guess that's one more reason for us to get involved and give our input.

Elena: It is. And one of the aims of the project is mapping out all the options and possibilities that are being tested. We are testing, in a kind of side pilot study, exactly the workflow that you were looking at, where you have an intralingual respeaker and machine translation together. And the question there is: how can we shape the human input in order to provide the machine translation with something that is well punctuated, well segmented, something that ASR, automatic speech recognition, wouldn't be able to provide, simply because there would be too many misrecognitions. Punctuation, we know, is all over the place in ASR at the moment.

Oh, yes.

Elena: And segmentation as well; there are all sorts of problems. [00:35:00] I look at all these possibilities, some of which we've outlined, as sitting on a kind of continuum, from more human-centred practices, where the human is at the centre and leading the whole process, and this is where respeaking sits, to more automated ones, like the possibility, at some point in the future maybe, of having ASR, automatic speech recognition, and machine translation together, which I don't think is going to happen anytime soon. But I think the real question is not to say that it will never happen, but to ask: what could the role and the place of the human be in these different workflows? At what point, at what stage, in what kind of activity? It could be a kind of post-editing role.
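The respeaker-plus-machine-translation workflow that Elena and Alex describe boils down to a short pipeline: the human contributes complete, well-punctuated sentences, and only whole sentences are handed to the MT engine. A minimal sketch, with machine_translate as a stand-in for whatever MT service is actually used:

```python
import re

def machine_translate(sentence: str, target: str) -> str:
    # Placeholder: a real pipeline would call an MT engine here.
    return f"[{target}] {sentence}"

def feed_mt(respoken_text: str, target: str) -> list[str]:
    """Pass only complete, punctuated sentences to MT, one unit at a time.

    MT output quality depends heavily on receiving whole, well-segmented
    sentences, which is exactly what the human respeaker adds over raw ASR.
    """
    sentences = re.split(r"(?<=[.!?])\s+", respoken_text.strip())
    return [machine_translate(s, target) for s in sentences if s]

live_feed = "The panel opens at nine. Questions can be posted in the chat."
for subtitle in feed_mt(live_feed, "de"):
    print(subtitle)
```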
Elena: Perhaps in certain situations that could work, whereas in other contexts, with speakers being all over the place, completely unplanned, lots of hesitations, you maybe need a more human-centred workflow. I really think it's about [00:36:00] trying this out in context and understanding what is fit for purpose, depending on different requirements and different variables.

Jonathan: That's what really encouraged me about this research, because researchers have often tried to say: well, what is the best solution? And I think there we're failing; the reality is that there is no one best solution for every situation. When we had someone who works on machine interpreting on the show a few episodes ago, he was talking about use cases, and that seems to be exactly what you're talking about: there's a variety of use cases. What is needed if someone wants to watch Home and Away in German, and people do want to watch it in German for some bizarre reason, is entirely different from what's needed if they want to watch their friends play rugby. Those two cases are very different, because the only thing predictable about Scotland playing sports is that they will lose at least once a year, whereas Home and Away is very predictable, because you have the script there.

Elena: Absolutely, [00:37:00] I think that's the key phrase: use cases. What I call variables are effectively then translated into different use cases. So we look at handling speed: how do you cope with a fast speaker? Maybe a machine could be of some support there, because machines don't tire; where could the human be placed in that sense? But then there are other situations where you have multiparty interaction, multiple speakers talking over each other, overlapping. For instance, a use case that we are looking at at the moment is that of cinema festivals. We have a partner that is a big provider of accessibility for cinema festivals, like the Venice Film Festival, and they would like to make this content, you know, press conferences, award ceremonies, available in real time. So we are looking at that as a very interesting scenario where you can have different kinds of use cases, dialogic [00:38:00] interaction in press conferences or monologic speech in award ceremonies, and try to see where interlingual respeaking could be used, how, in what kinds of combinations or teams, how many people, how long you can work in this mode before you stop making sense, effectively. I see the value of this kind of research in feeding into working practices, into professionalization, into training. That, to me, is really at the core of what we are doing.

Alex G.: I think that's a really interesting point, because you were saying that you have to do research in order to shape this kind of emerging field of practice. But, as you were also saying, there is demand out in the market. So if somebody wanted to get started in doing this or exploring this: what would be a good starting point? Obviously the research is still ongoing, and you don't want to just go out there willy-nilly and, you know, go wild, [00:39:00] go crazy. So if anybody listening wants to get started or wants to start exploring this, maybe with some training or by laying the groundwork, what do you think would be a good starting point to get involved?
Elena: One of the good places to start would be to join our project, SMART. We are developing an upskilling course, which is going to be launched in February or March next year. It's going to be free of charge for language professionals who want to join, and it's going to run entirely online, in a self-taught manner, but it will provide the opportunity to get a sufficient amount of training to be able to try out interlingual respeaking. The course we are planning is going to be around twenty-four hours; it will require a commitment of around three hours a week for five weeks. And it's going to be a way of really taking participants skill by skill into interlingual respeaking, building the skills that are necessary, starting from [00:40:00] listening and speaking, listening and translating, adding punctuation, correcting, and also all the skills that have to do with dealing with the speech recognition technology. It's really a step-by-step, very layered, progressive approach. Participating in this course will give professionals a very good idea of what the practice is about, and some training in both intralingual and interlingual respeaking, with the possibility of having an assessment and receiving some feedback. So I would say that's a good place to start, also to see if it's something for you. And then, clearly, from our point of view, it provides the data to understand, in the skills acquisition process, where the main challenges are. What are the stumbling blocks in this process? Do they have more to do with handling the technology? At what stage do they happen? And then also studying the performance; as I said, we're going to train using use cases from the industry. So I see that as [00:41:00] a good place to start.

Alex G.: You said this is for language professionals, right? So this goes for translators, for interpreters, because you also said that you're going to be training the speaking-and-listening part, the speaking and listening and translating part. So I'm guessing that if you're already a conference interpreter, that part of the programme would probably be relatively easy to pick up, and this is more for the other language professionals?

Elena: Exactly. At this stage it's kind of a beta version, if you want, of the final upskilling course, which is going to be the final deliverable of the project. The idea is that the final deliverable will also be modular, in the sense that if you are, as you say, an interpreter and you already know how to do shadowing, this listening and speaking in the same language, then maybe you can skip that part of the course and just move on to what I believe is going to be the key part for interpreters, [00:42:00] that is, listening and translating, so effectively simultaneous, but using what we call software-adapted delivery. We call it SAD for short, and SMART was definitely the better acronym of the two; SAD delivery has got us some jokes. Intonation and prosody are important tools for interpreters to convey meaning; when it comes to interlingual respeaking, you need to learn to manage that, because your first interlocutor is not the audience, it's the speech recognition software. So, in order to optimize recognition, you need to articulate really well, manage pauses and chunking, and also manage your prosody in general. That's perhaps going to be something that interpreters may need to focus more on.
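Software-adapted delivery is easiest to picture from the recognizer's side. A toy sketch, assuming word-level timestamps of the kind many ASR engines can return: split the respoken stream into chunks wherever the respeaker leaves a clear pause. The 0.35-second threshold and the sample data are invented for illustration, not figures from the SMART project.

```python
# (word, start_s, end_s) triples, e.g. as returned by an ASR engine
Word = tuple[str, float, float]

def chunk_by_pause(words: list[Word], pause_s: float = 0.35) -> list[list[str]]:
    """Group words into chunks separated by pauses longer than pause_s."""
    chunks: list[list[str]] = []
    current: list[str] = []
    prev_end = None
    for text, start, end in words:
        if prev_end is not None and start - prev_end > pause_s:
            chunks.append(current)
            current = []
        current.append(text)
        prev_end = end
    if current:
        chunks.append(current)
    return chunks

respoken = [
    ("the", 0.00, 0.12), ("panel", 0.14, 0.40), ("opens", 0.42, 0.70),
    ("at", 0.72, 0.80), ("nine", 0.82, 1.10),
    ("comma", 1.60, 1.90),  # deliberate pause around the verbalized comma
    ("questions", 2.40, 2.90), ("in", 2.92, 3.00), ("the", 3.02, 3.10),
    ("chat", 3.12, 3.40),
]
for chunk in chunk_by_pause(respoken):
    print(" ".join(chunk))
```

Steady, well-placed pauses give the recognizer clean segment boundaries; erratic ones fragment the output, which is why delivery itself has to be retrained.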
Elena: It's about unlearning something and learning to do listening and speaking in a slightly different [00:43:00] way: possible, but different from what has been practiced so far. And, as you were saying, it's inclusive. It's a hybrid practice, right? It brings together skills from different disciplines. Clearly, being an interpreter myself, I am very interested in seeing how professional interpreters cope, but professional interpreters may also have other skills. They might be doing subtitling, so they might already have some skills that are relevant to interlingual respeaking as part of their background. Then we want to open up the training to subtitlers, who perhaps have more of this constant shift from spoken input to written output in their practice, and to people who are currently doing intralingual respeaking, who are the ones that would need to add a language transfer component to the practice. We want to test different profiles and see how they cope, but ultimately not to say: ah, interpreters are better than subtitlers. The purpose is to see what [00:44:00] skills can best support performance. And we will be doing this through collaboration with partners in cognitive psychology and neuroscience. We will collect and correlate different measures, and we will be able to explore links between cognitive abilities and skills, and whether and how these change via training. Ultimately, we hope to gain evidence of what factors seem to contribute most to the development of interlingual respeaking competence, so that we can further refine and optimize the upskilling process for language professionals.

Jonathan: I was going to say, I guess this is one place where accent would be an issue. There's been a running story throughout my career of people saying: you know, have you ever thought of removing your accent? Never mind that in linguistics there's no such thing as removing an [00:45:00] accent, only changing your accent. And this is one of those cases where the automated speech recognition has been trained on certain varieties, certain parts of the US and RP, and that's basically what it has been trained on. So this, then, I guess, is another accessibility question: to what extent will interlingual respeaking require people to speak in the way the software has been trained? We might refuse to see it that way, but if it has been trained that this is the language it recognizes, then that actually privileges those who already speak in that way.

Elena: That's a very good question, actually. For the moment, for the project, we are using a speaker-dependent type of speech recognition. I don't know if you're familiar with the distinction: there's ASR in the [00:46:00] sense of automatic speech recognition, like the one Google provides, Google Speech, which is completely speaker-independent; it is trained to recognize any speaker, or even to recognize the language, and then to transcribe what any speaker is saying. Speaker-dependent systems like Dragon NaturallySpeaking, which is the one we are using and is the industry standard for respeaking, interestingly, necessitate the creation of a voice profile. So you need to train the software, although the version they have now requires very short training; it used to take much longer. By reading texts and talking to the system, it starts to pick up on your accent, on your prosody, on your intonation, and recognition is maximized.
Elena: And recognition is really, really good with this kind of system. The downside is that it's not offered for every language. So there's English, Italian, French and Spanish, which are the languages we are testing in the project, [00:47:00] German, and perhaps a few more. But when it comes to languages of lesser diffusion, or less-resourced languages, there's no such system in place that delivers results that good. And again, there's a lot of research going on: I have a student who is working on Croatian, trying to optimize systems for Croatian for the purposes of subtitling; there's one, I think it's called Newton, which is another system for Croatian. So there's experimentation. But I believe that's a very topical question, because for some languages we will have to wait for the technology to catch up in order to provide the same level of recognition that Dragon guarantees.

Alex G.: I have a very practical question, because a friend of mine was actually doing intralingual respeaking for subtitles on German television, and she was working with Dragon [00:48:00] as well, I believe. And another friend of mine was actually doing that here in Munich, even. They were telling me that they had to train their Dragon and all that, and they usually got the main topics of what they would be doing, whether it was a soccer game or whatever, and then they researched the names of the soccer players, the stadium, the city or whatever, and trained the program, because obviously names, product names, people, whatever it may be, are very difficult. So is that something that's also going to be in the course? Because you have to tell people: listen, if we're going to be talking about car engines, and it's going to be about a very specific brand, you have to input that into your software.

Elena: Absolutely. That's a way of optimizing, as we say, for specific situations or specific contexts. I mean, Dragon itself wasn't created for respeaking purposes; I think one of its first fields of application was in healthcare, [00:49:00] for dictating healthcare reports, and in legal settings as well. So it is about optimizing that kind of tool for interlingual respeaking purposes. One part of the training that we provide is exactly about preparing the software as well as possible by inputting word lists. It becomes part of the preparation you would do for any assignment, right? You also prepare your software with proper nouns, or anything that wouldn't otherwise be recognized; we know what ASR has trouble with, and we feed that into the machine, using macros as well. So there are all sorts of tricks that can be implemented to make your life easier and maximize recognition. But again, that's part of the preparation, part of the interaction with the software, of how to use it best. And that's such a big part of the final accuracy, right? [00:50:00]
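The word-list preparation Elena and Alex discuss can also be approximated in code: harvest likely names and terms from prep material, then feed the resulting list into whatever custom-vocabulary mechanism the recognizer offers (Dragon's vocabulary tools, phrase hints in cloud ASR services). The extraction heuristic below is deliberately naive and invented for illustration.

```python
import re
from collections import Counter

def harvest_terms(prep_text: str, min_count: int = 1) -> list[str]:
    """Collect capitalized runs (likely names, teams, venues) from prep docs."""
    candidates = re.findall(r"[A-Z][\w-]+(?:\s+[A-Z][\w-]+)*", prep_text)
    counts = Counter(candidates)
    return sorted(term for term, n in counts.items() if n >= min_count)

prep = (
    "Bayern host Dortmund at the Allianz Arena. "
    "Keeper Manuel Neuer returns for Bayern."
)
for term in harvest_terms(prep):
    print(term)
# Feed this list into the recognizer's vocabulary before going live.
```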
Alex G.: This is very cool, I have to say, very, very cool. What I also like so much about this is how, initially, this was, quote unquote, just something that was done for accessibility, for a supposedly small group of people who needed it, and people would almost roll their eyes at having to do it. And now it's branching out into being used by many more people. It's kind of similar to a situation where you build a ramp for people in wheelchairs, but then it turns out the ramp is also very useful for people pushing a pushchair or a pram. I think it's somewhat similar here, and these situations are very, very exciting to watch. Do you see any other related fields that are sort of cropping up as potentially interesting for interpreters to work in, or is this respeaking kind of the next big thing?

Elena: Well, [00:51:00] I'm focusing on this one, clearly. There are more hybrid possibilities, but for me this is already not just one service: it can be provided intralingually or interlingually, which is potentially already two separate services. I'm also thinking about interpreters wanting to diversify, wanting to explore other settings, other scenarios where simultaneous interpreting is not always provided. I'll give you an example which for me was quite interesting: the whole field of online digital radio. That's one we are exploring, and it would be fantastic to provide intralingual and interlingual respeaking there to make this content accessible to a wider audience. And again, it's something that perhaps you wouldn't have thought about as a simultaneous interpreter. Audiobooks, webinars: I [00:52:00] believe there's a whole new range of opportunities opening up. We live in what they call the multilingual boom era, right? There's so much content being produced, multilingual and multimedia content. And I also very much like the social dimension of it. I do want to say, though, that I don't see interlingual respeaking as substituting for simultaneous interpreting, because I think that could easily be misinterpreted, this idea that you have interlingual respeaking, therefore simultaneous is not necessary anymore. Well, no. I think it depends very much on the situation, on the needs. In an ideal world, I see these two services together, providing accessibility for all. There might be other elements, related to having to pay more than one professional, that have to do with concerns from the industry, [00:53:00] but the one doesn't exclude the other. What I think is very important at this stage, because this is so emergent and so new, is that there is a lot of demand: people need a certain service, but they do not know how to get it. And therefore I think it's our duty to raise awareness of the existence of this service and of the possibility to enhance accessibility, to target more than one user group.

Jonathan: It really does make me think about the work that's going on studying organizations that buy interpreting, and this drive in the research community to figure out what is going on in the world and what services can be provided. I think it's really helpful to think of it in terms of different use cases. And with interlingual respeaking next to a growing interpreting market, and [00:54:00] so many different kinds of content going out there, seeing this as the sunset of simultaneous interpreting is just silly. There's so much that can be done.
And I think one place that would be good to begin to wrap this up: if you were sitting in a room full of interpreters who were a little bit worried, a little bit concerned, what is one thing that you would like those interpreters to know about these kinds of new ways of working that we've talked about?

Elena: I would say, as I actually said at a conference last year: don't be scared of these new ways of working. Don't feel threatened by hybrid services, and don't reject them because they have a technology aspect or because they seem complex or difficult. One thing that research has established by now is that this is feasible. The questions to explore are to what extent [00:55:00] it is feasible, how we can best support professionals doing this practice, and how we can explore the process so that it remains feasible. As a matter of fact, it is offered in countries like Belgium, like the US; I spoke to interlingual respeakers who do this on television for live events. So it is possible, and I would say: look at this as an opportunity, an opportunity for diversification, an opportunity not only to offer a new service but to get involved in the debate that is ongoing, shaping a practice. Because it's so emergent, it's really a good time to start the discussion with academia but also with industry stakeholders. And all of this, I believe, will have good repercussions on working conditions, on the status and on the visibility of the professional. As a last point, I would also say, [00:56:00] on this broader debate about human-machine interaction and automation: I told you earlier there's a lot of experimentation with different workflows, but I think starting from the most human-centred ones, seeing the opportunities and the challenges, where the human copes well and where the workflow fails, really provides a good starting point, a good benchmark, to then feed into the debate on what responsible automation of these workflows would look like. How can we find the best place for humans in these different constellations of workflows? "Get involved", in two words, would be the message.