62 AI versus Interpreting
Alex D: [00:00:14] Hello everyone. And welcome to another episode of the Troublesome Terps, the podcast about things that keep interpreters up at night. My name is Alexander Drechsel. I'm an EU staff interpreter speaking here in a private capacity. It probably bears repeating since I missed a couple of episodes, unfortunately, but, um, Sarah and Alex were holding down the fort quite well. Um, so, uh, let's welcome, first of all, Sarah Hickey. Good evening. How are you?
Sarah Hickey: [00:00:40] I'm good. Glad to be back. And it was fun running the last few episodes, just with the other Alex.
Alex D: [00:00:48] Yeah. With your favorite co-host, right?
Sarah Hickey: [00:00:49] With my favorite co-host.
Alex G: [00:00:53] I heard, yeah.
Sarah Hickey: [00:00:54] Now both of you are my favorite co-hosts. But no, I also really miss Jonathan, but that's another story. Anyway, glad to be here, uh, also in a private capacity, but otherwise in my daytime, I spend my time researching the language industry at Nimdzi and, yeah, by night I am a podcaster here with my favorite colleague, the other Alex.
Alex G: [00:01:17] Yay! That would be me. Right? So this is Alex Gansmeier. Always in a private capacity, 'cause that's all I have to offer, but, um, yeah, always a pleasure to be here. I mean, I, I'd like to think it's enough. Uh, but yeah, it's a pleasure to be here. And I think, Alex, today, we have some very special co-hosts for today's show, don't we?
Alex D: [00:01:37] Indeed we do. Uh, we have two, uh, two guests that, um, well, we've, we've been planning this for a long time, shall we just say, because it all started with, uh, with a tweet or a Twitter conversation actually, as, as many things do, uh, nowadays. So it's finally happening because the idea was to have a bit of a debate, uh, about a hot potato, a hot ticket item, uh, namely artificial intelligence or whatever's being called that, uh, these days. Um, so here they are: Professor Graham Turner and Henry Liu. Um, Henry I know from Twitter, of course. Uh, we also met in person a few times. Um, last time we met was at the ATA conference in 2016. That's been quite a while. Um, I think back then you were still president of, um, FIT, so the international association of, um, interpreters and translators or associations thereof. Um, and what are you doing these days, Henry?
Henry Liu: [00:02:29] Just like most people, just trying to find new ideas. I mean the, the consulting part works pretty much the same and doing a bit of writing. And now I'm podcasting!
Alex D: [00:02:43] That's true. We're very glad you are.
Henry Liu: [00:02:45] Only being interviewed, not good enough to be actually podcasting my ideas yet.
Alex D: [00:02:51] Well, that might come at a later point. Um, but I was going to say you're also an interpreter and translator. So do you still get to do any translation and interpreting at the moment?
Henry Liu: [00:02:59] Yes, absolutely. Um, thankfully, in the Antipodes, um, we, we finally get free movement and conferences are happening and certainly, I mean, the day-to-day community interpreting is absolutely still happening and has to be happening. But yes, uh, face-to-face is alive in the Antipodes. The trans-hemispheric travel is, um, is still out of bounds so far.
Alex D: [00:03:23] Yeah, unfortunately.
Henry Liu: [00:03:24] Conference appearances haven't happened yet. At the beginning of the pandemic, I, I decided not to accept any virtual appearances. You know, being somebody who is advocating for visibility and, and the physical presence being very important for what we're doing, um, and particularly in the speaking capacity, um, I decided not to do that. But yeah, uh, a few special requests I have done. Hopefully we can all get back together and being actually physically, uh, face-to-face and actually having, having the vibe and, and hopefully one day we can actually have this, um, what we were talking about at the beginning, the Troublesome Terps round table live, um, where we have, where we have real, real hot potatoes being thrown out from the audience.
Alex D: [00:04:12] Actual physical hot potatoes and, and rotten tomatoes possibly. Um, yes. Also joining us from an undisclosed location, as I just said, is Professor Graham Turner. Welcome, Graham. It's lovely to be able to speak to you again. We had the pleasure of, uh, uh, speaking quite a bit, a few years ago about a topic that is very near and dear to your heart and my heart as well, uh, about sign language and the BSL Act, um, which is not really the topic, although it might be tonight. I don't know. Um, but, um, the whole sort of Twitter conversation goes back, I think, to a talk you gave at the CIUTI conference in 2018. As far as you can still recall, what was the main slant of that, of that talk? What sparked the debate there?
Graham Turner: [00:04:55] It wasn't a talk, actually, it was a, um, uh, a panel, uh, an audience-initiated discussion really about a whole range of topics. And there was a particular, uh, focus at one stage on this question of, um, the role of technologies and, uh, artificial intelligence in particular, uh, in the interpreting field. And, um, I took the view at that point and would still take the view that, um, we are generally underestimating the impact that that is going to have in the relatively short term. Um, but I won't go any further into that just now, because I think this discussion will evolve in the next wee while.
Alex G: [00:05:44] Spoiler alert!
Alex D: [00:05:46] Keeping your cards close to the chest. Um, yeah. So I'm, I'm, um, wondering, Sarah, how, how are you looking at AI these days? Um, you know, during your work at Nimdzi. I'm assuming that's a big topic and of course, translation, um, as, as is often the case, is a bit ahead of interpreting in terms of technology adoption and machine translation and, you know, tools built into, into CAT environments and so on. So what have you picked up so far on the whole topic of AI and I guess natural language processing and so on?
Sarah Hickey: [00:06:17] Yeah. I mean, there's, there's a lot happening in that field. And as you rightfully said, in translation this is already further ahead because it's, it's a bit easier, uh, in some sense to deal with the written word. And ideally also something's already written and not changing all the time, even though there has been progress made there too. Um, machine translation is pretty much a staple these days, right, it's being used in all sorts of forms. Actually, I just had a live stream earlier this week on Monday with one of my colleagues who talked about, um, it was player-generated content in the gaming industry and specific machine translation engines, um, that were developed for that, which is really tricky because there's a lot of jargon and slang and basically gamers more or less speak their own language. And there's lots of references to memes and it's changing all the time, but even there, there have been applications. There are applications now that can remove the need for human interaction. Of course it's not perfect output, but it's good enough for this type of, um, communication. Um, and you know, there's, there's lots and lots of need in that field. So I went a little bit off topic there now, but, um, actually I was really surprised when I first looked into, um, AI in interpreting. Um, well actually there's two fields, right? On the one hand we have machine interpreting and then you have something like, um, computer-assisted interpreting tools, the CAI tools, the famous ones. Right? I think this is something that we will see very, very soon and we're already seeing it, because it sounds really handy from what I've seen, um, that basically interpreters will still be, you know, the human interpreters, but they will basically be given a hand by their AI assistant with some of the names and numbers and stuff. And I mean, why not? Right? That's how I feel about it. And then when it comes to machine interpreting, I didn't have high hopes for that. I went into my research thinking no way, you know, that's not, that's not realistic. And then I was really surprised to find that there were some solutions that are actually really good. I mean, they can't hold a candle to human interpreters. I'm not saying that, but it's further ahead than I expected it to be for certain use cases. You know, I don't think we need to be afraid, like I said before, that AI will replace human interpreters. But in some instances, I think it's becoming a viable solution as well, which really surprised me as well.
Alex D: [00:08:42] Do you see any differences in terms of, uh, what language or what languages were involved or was this just for English? If you can recall the details.
Sarah Hickey: [00:08:50] I mean, predominantly I've seen English and German because those are my languages so I could judge them a bit better, but, um, it's of course, like in most cases, they start out with the bigger, bigger languages that are more in demand because it makes more sense for your own return on investment as well, to focus on those first, thinking that people will need those the most. Um, and at the same time, well, if you have any kind of AI machine, it needs to be trained with certain data. So it's not as common to have a lot of AI for the really rare languages, because there's generally less data there. But one thing I've seen in, in general, in the wider language industry as a trend is what we call, um, data-for-AI services. So a lot of the language service providers in the industry, they've started, um, to see a lot of demand for data annotation, data labeling and all that kind of stuff. So there is a growing demand in that field, not just for language services though, but in general, you know, for all the internet of things, um, the voice assistants from the really big companies. Yeah, AI is a big topic and it's growing too. No surprise, I guess.
Alex D: [00:09:59] Yeah, there's probably a lot there we can, we can get back to and go into more detail. Um, unless anybody wants to jump in, uh, right there. Um, I'm going to hand it over to Alex and I was just wondering whether you use any of those tools, um, for preparation or whether you've done any tests like for machine interpreting or speech-to-speech translation, as we could also call it, I guess.
Alex G: [00:10:21] I think on the day-to-day, what we do use in the freelance market is definitely machine translation. Just because if you get a relatively random speech or, you know, like some cards from the host or the moderators. And they're, they're usually not super specific. You just put them into machine translation real quick, you know, it's kind of the quick and dirty solution. You wouldn't necessarily fully rely on them for interpreting, but it usually is quite handy to have them on hand. Um, when it comes to machine interpreting, I've only been privy to some demos in different scenarios, and they've always been extremely underwhelming, to be quite frank. But then also I'm thinking they probably didn't really disclose the super latest and super greatest to a pack of interpreters. Do you know what I mean? I'm pretty sure like behind closed doors, or Sarah, I'm sure you also have some additional insight beyond just publicly available demos or webinars to the wider interpreting community. Um, but what's out there commercially at the moment, I don't think is very, um, intimidating. So yeah, but I, I am very curious about the, the, um, assistant systems, so that I think is even in InterpretBank, like they would offer some stuff like that, I saw on techforword, I think. Um, and that I'm kind of curious about, because that sounds handy enough where I could actually see myself getting into that. Um, so the assistant systems, that could be pretty cool, but I haven't actually used it. I've just learned about that from a friend.
Alex D: [00:11:55] So you're talking about, um, terminology management, that kind of thing.
Alex G: [00:12:00] Yeah. Even stuff like, um, automatic, like, name and number recognition, where it then automatically transcribes that. So stuff like that, which I wouldn't even know where to begin to enable those kinds of features in InterpretBank, for example. But, um, I know from your webinars that they have those types of things, so that is quite interesting. And I think that could definitely also be something where machine learning or AI could find... I don't want to say fans in the interpreting community, but possibly a bigger audience in the interpreting community. Um, so I think that's definitely something to, to keep an eye out for. Um, in the, in the short to mid-term future, I would say.
Sarah Hickey: [00:12:40] Just one quick thing, because Alex, you just said, um, you know, we probably don't need to be afraid yet of AI basically taking our jobs. Right?
Alex G: [00:12:48] Right.
Sarah Hickey: [00:12:49] Uh, to me, I understand that that is like a natural reaction that most people are having. But to me, it's not about that, you know, I don't think it is a threat. For me, it's not part of the story because if anything, there should be different use cases for this. Um, I was talking to Renato actually recently, also, and he said when the first machine translation, um, was developed many, many, many years ago, that one of his, um, colleagues also said, oh no, you know, our industry is over. I'm going to leave, you know, this is it, we're going to be replaced. And he went and started working for a bank, and like 10 years later, he was replaced by an ATM and we're still translating, you know? I mean, I know it's not as simple, but technically it's... Like you're right. The AI solutions can't hold a candle to humans yet. Um, but they are very good in the sense that, um, like in translation they have come very far, in interpreting they're not as far, but even there they've started to enable communication in some use cases. And there's some use cases for that, where, like we said before, in those cases, maybe you wouldn't even call an interpreter. It's too expensive. Makes no sense. But at least you have some form of communication. Like you said, the, the quick and dirty option or something, you know, where in some cases it's better to have some communication than no communication, you know? And we always think of the big conferences and the prestigious stuff, whatever, those like high class things where you want the fancy stuff. But what if, you know, there are so many everyday scenarios as well, sometimes, you know, where, yeah, right now, sometimes there are no interpreters, but if you could at least have something, that's still better than nothing, you know, and in those cases then. I was surprised to see that some communication this way is already possible. It's not the same, but, um, at the same time, you know, it should all be about, um, more access to communication, language access, you know, widening that. And more like thinking of it as expanding the market, I think, yeah.
Alex D: [00:14:41] I'm seeing some sort of reactions there from both Henry and Graham. So if you want to jump in, I want to give you the opportunity to. I mean, Graham is keeping his poker face for now.
Sarah Hickey: [00:14:51] Yeah, can't, I can't read this face.
Alex G: [00:14:53] I don't know what's happening in Graham's head.
Sarah Hickey: [00:14:56] Henry is smiling, so.
Henry Liu: [00:14:57] There's a lot of things happening in Graham's head, always.
Graham Turner: [00:15:03] Sarah, you said, you know, it's not a threat. It's not even about that. And I haven't heard the answer as to why it's not a threat. I suspect that most answers to that question actually come back to: It's not a threat right now, but nobody's saying it's a threat right now. Alex, a few moments ago, talked about, uh, the short to medium term and I think it all depends on what we mean by the short to medium term. Because as far as I'm concerned, it clearly is a threat. And depending on what you mean by the short to medium term, I think potentially in the short term, it's quite a significant threat. In fact, I don't think it's even sensible to talk about it as a threat. I think it's just inevitable.
Sarah Hickey: [00:15:53] Can you elaborate a little bit on that?
Graham Turner: [00:15:56] The, the rationale for, for that line of thinking is, uh, is based simply upon turning around and looking back over your shoulder as to how quickly things have advanced. Uh, when I was an undergraduate student, I remember the linguistics profs in my department were working on very secret, very early models of, um, artificial speech generation, recognition and generation of artificial voices. And they would play games with students by, uh, playing recordings that they'd made and inviting us to guess which member of the staff team was speaking. Uh, I, you know, and then they would actually have mixed a variety of completely artificial voices in with the real ones. And we couldn't tell the difference. So this is... When was I an undergraduate student? The mid 1980s, uh, mid to late 1980s. Now in short to long, short to medium term, quote unquote, uh, terms, um, that's, uh, what, 35 years ago, that's a generation. Now to my mind, that's the short term. So in another 35 years' time, if we've come from there to where we're at now in 35 years, where on earth are we going to be? Now you can call that a threat if you want to. I don't think it's terribly important to think of it in those terms, as I say, precisely because I think it's just inevitable. At that speed of development, with the resources that various kinds of organizations and institutions, you know, are now able to throw at this kind of issue and the value that they can generate for themselves by doing so, it's not a fight worth taking on.
Sarah Hickey: [00:17:58] Well, I completely agree with what you're saying there. Um, and yeah, the growth, like the, the advancement will be probably exponential, you know. It keeps growing and growing and really, really, really fast, but I'm just thinking of, well, on the one hand, looking at how far, for example, in translation, machine translation has come. You know, it, it has come really far. And in some cases, the, the, the quality's almost on par with humans. And in other cases it has sometimes been almost better, which sounds crazy, but there've been like some stats I've seen for that as well. And yet there's still human translators, you know? So, and we're saying for interpreting, yeah, maybe in some cases, uh, for some scenarios, maybe there will be some replacement. I look at it more from thinking that we will find more use cases where we can enable communication. So less of thinking of it as only what we know now in terms of our interpreting market, but more, what else can there be? There's so much untapped where we would maybe like to have communication, but we can't have it now. Plus, add on top of that, that not all interpreting is just, you know, business and conference, but there's also the hospital interpreting for example, or interpreting when it comes to mental health and lots and lots of areas where the interpreter even takes on another role, which I know can be controversial sometimes because interpreters are supposed to be impartial and everything and invisible but…
Alex D: [00:19:20] Careful!
Sarah Hickey: [00:19:21] I know. Well, I don't care. So... Whatever.
Alex G: [00:19:27] It's fine. Jonathan's not here. We need somebody to be controversial every now and again.
Sarah Hickey: [00:19:31] Um, I'm happy to volunteer any time, but it does mean the interpreter takes on a little bit of that role also, I think, when... Like imagine you're in a scenario where, in the hospital, um, it's a life and death situation or not, but you're in a foreign country. You don't speak the language, you know, and then comes in someone who finally speaks your language and can help you, you know, and you kind of cling onto that person as well. A little bit. I imagine, you know, this happened to me before as well, as the, as the patient. And, uh, they almost become a little confidant as well. Like a machine is not going to be that confidant for you.
Graham Turner: [00:20:04] Have you, have you seen the, um, the, uh, care robots in Japan?
Sarah Hickey: [00:20:10] Yeah. I know that this kind of stuff also exists. Yeah. Um, I'm just saying, I don't think it's going to be replaced in all cases. This is a, in a different area. I just looked into the market for American Sign Language interpreting. Um, which is taking us totally away from tech, but to illustrate the different scenarios where you use different types of interpreting. And there you have, you know, certified deaf interpreters as well. So, who are deaf individuals who are also interpreting, and not just between different sign languages, but for example, from American Sign Language to American Sign Language. So, and specifically in cases where people are more under stress, like, uh, in really stressful situations in the hospital, mental health or with children, you know, because in those cases you need the extra bit of care as well. The extra bit, the extra level of communication that, you know, in this case, even just a deaf person can offer, you know, because the, even the really good ASL interpreter in some cases then is not sufficient enough. Not that they're not good interpreters, but that you want that extra level of understanding from the culture and that level of connection as well. And I would imagine that similar scenarios will always apply to spoken language interpreting as well.
Alex D: [00:21:20] I wanna get Henry's take on the whole "Is it a threat? Is it inevitable?" and so on in a minute, if that's okay. But just, when I listened to Graham there, I was kind of reminded of the developments that we've seen for remote interpreting, which just has been sort of dismissed out of hand for years and it's not good enough and it can't replace proper onsite interpreting. And then the coronavirus came along and nobody cared whether it was good enough, it was just the only available option. And I tend to agree also with, uh, you know, the thing you mentioned, Graham, about the care robots, because sometimes it's, it's also not about whether it's good enough or whether it's adequate. Sometimes it's just a cost issue, unfortunately, especially in healthcare. But, um, yeah, I'd like to hear from you, Henry, um…
Alex G: [00:21:57] Yeah, because Henry's been smiling mischievously. I think he's ready to go.
Henry Liu: [00:22:03] No. Well, you guys know me. I mean this, um, I'm dynamic all the time and, and, and, the care model is very interesting because, uh, I'm not seeing it as a threat in terms of, well, it depends on what are we talking about, threat to whom and threat to what? If we're talking about us who are all established, it's not going to be a threat, but what about the next generation? When you say there's a job, what sort of a job? I mean, okay, sure. Not necessarily was everybody going to be staff interpreters or staff translators? I mean, what, what sort of a job are we talking about in terms of remuneration, in terms of satisfaction, in terms of what actually are we contributing? I think that is actually the interesting part that I'm seeing. And because as you know, I'm always trying to think about what the next generation is going to be, because the dynamics of this is fast, as Graham was saying. You don't need to look very far, just look at how Facebook has completely changed the setup, unregulated. And the same thing applied, as I wrote in the past, that the idea that tech in, in the language industry has been in our system, infiltrating or affecting and creating new opportunities and destroying ones as we go. None of that is regulated. Are we seeing the consequences? Has somebody thought about the consequences? Um, a clear question. I mean, just look at, look at Sarah's example, if you're in a foreign hospital. In the future or, I mean, it's already happening now. Um, would you be given an iPad? Would you be given an iPad because you are a public patient? Because if you're a private patient, you have an interpreter, is that what's going to happen? I mean, I don't know. If that's the case, well, first of all, is it better? Is it not better? That's one question. Um, what's the role of that? Um, I mean, mental health, imagine if you're trying to have a schizophrenic who is seeking mental health care in a foreign country, I mean, it cannot, it's even difficult to be handled by, uh, by, by a mental health specialist interpreter, let alone by a machine. I mean, the doctor, the psychiatrist will be saying, hang on, is it the machine giving me gibberish or is the patient telling me gibberish?
Sarah Hickey: [00:24:17] I think that's an excellent example.
Henry Liu: [00:24:19] I think that is the really difficult part. I mean, I often talked about the reverse Turing test. The idea is: There's something wrong, right? Whose fault is it? I mean, the Turing test is easy. Can a human look at the output and say, is this human? Can I work that out? But now it's the other way round. We have a breakdown of communication. Look at the breakdown of communication situation. What is the error here? Because now the whole tech system has been decoupled from accountability. In the past, at least, at least you can say, oh, hang on. That's the interpreter's fault. Or that is the editor's fault or that's the copywriter's fault, or that is the printer's fault. Or however that might be, this is now totally unaccountable.
Alex G: [00:25:07] Yeah, it's the black box, right?
Henry Liu: [00:25:09] Correct. It's a black box. So I think it's replacing what? And replacing how? And a threat to whom? And who's got the right to choose? Because as, and now this is the key, because as you know, I'm an amateur, amateur economist, when Alex says it's all about money, it's all about budget. It is! Because, why did RSI suddenly take off in that setting? Yes, sure. It's because we can't fly. It's because of budget, it's because of... but have people actually been consulted? Is it going to be the new normal? Of course it's going to be a new normal because it's cheaper. Ultimately, what do people actually value? Because, because, you know, diplomacy happened, not all diplomacy happened by Zoom. How did that work? Because you know, special diplomats and envoys are still traveling.
Alex D: [00:26:00] It's all about the priorities.
Henry Liu: [00:26:02] It is, it's all about the priorities. So replacing what and replacing whom? I think Graham and I have been talking about this for quite a while. And, and most of us have. You know, driverless cars, the automated driverless cars thing. If you think about the threat, the threat to our profession is not what we're talking about, quite often, because we're actually looking at it the wrong way. So looking at driverless cars, it's like, well, actually that's a hype thing because people are hyping and wanting people doing this. And it's a, it's an investment issue. Um, is it ever going to come? I don't know, but I mean, driverless cars will come when there are only driverless cars.
Alex G: [00:26:42] If I may just jump in there real quick with the driverless cars, because I think that that's a really interesting point to make, because the driverless cars, um, if we go back just like five years or whatever, um, the automotive manufacturers were already saying, oh, by 2021, we'll have fully automated driving, like fully driverless cars. And the thing is, the technology isn't as far advanced as they thought it would be. However, there could be driverless cars at the moment already, but the legislation isn't advanced enough. So the thing that's currently really putting the brakes on it is because the legislators are slowing down the process because they don't know how to regulate this stuff. And I'm thinking, you know, if let's say in 30 years' time, machine interpreting were to be at a level where it could potentially be used at a larger scale, there could also be some legislation coming into effect here, because at the end of the day, I think... If we're looking into the future, there has to be legislation kind of ruling or governing a lot of the machine-to-human interfacing that's going to happen. Because there have to be like laws and rules in place, because as you mentioned, it's a black box. So what does the machine do? What if a machine makes a mistake, et cetera, et cetera. So there have to be rules and regulations made for all those things. And so even if the machine interpreting sort of looms large, I think if we kind of look at the autonomous driving situation, we see how that gets slowed down as well.
Sarah Hickey: [00:28:17] First of all, I liked what you said, Henry, and Alex, I guess, as well, with the money coming in, right. Because we always say at Nimdzi as well, that in the end, it's the market that decides. It's about the end users, about the consumer and the buyer of language services in the end. You can have the best product ever - if no one's buying it, then it's pointless, you know. And you got to kind of bring your language, like adapt your services to the actual needs as well. So, like, that's matching up with what you said, you know, people always said, okay, RSI isn't good enough, but well, now the market decided it is. In the sense that people were responding to it and decided it is good enough. I want to use this and not the other thing, you know. And then the second thing with regulations. Regulations are actually another really, really big driver for the demand in language services. That's why, um, in the United States it's such a big market, because there are laws that, you know, you have the right to receive certain services in your native language. I think it's the same in the UK. Whereas in Germany, we don't really have that, you know? And, um, so this really decides as well how big or not big language services are, what types of language services you can receive, because then let's say if a hospital has to provide something or if they don't have to, you know, that influences what type of services you get as well.
Alex G: [00:29:29] And then we come back to the care robots.
Sarah Hickey: [00:29:31] Yeah, exactly. And then I'm thinking also, drawing a bit of a line again to translation: You know, these days, like I said, machine translation is a staple and there's actually more being translated by machines than by humans these days, if you're looking purely at the number of words. Right? But there's of course, post-editing in a lot of cases. Um, but then there are complete sectors where machine translation is not used, because it's not common to use machine translation in marketing, because you need to be a bit more creative and colorful. Uh, it's not common to use it for literature translation for fairly similar reasons, not the exact same ones, but you get the idea. I don't know, maybe the future of interpreting - or human interpreting - could also be that it's becoming more niche. You know, that, uh, only for certain types of cases, you still have a human interpreter. Whereas in other cases you have other types of language services. Actually, companies I talked to during the pandemic, there were a few who said that they did onsite interpreting before, and they said not all of that demand pivoted to remote for them, but also to, like, adjacent services, like captions for, you know, for virtual meetings or document translation. I don't know, maybe interpreters need to branch out as well. Your job changes as you go, I guess. Translators, a lot of them had to adopt post-editing as well, or I don't know. It certainly will have an effect on the industry. I just don't think that it will ever be completely replaced.
Henry Liu: [00:30:53] No, no, no, but we're not saying that, as Graham was saying.
Sarah Hickey: [00:30:56] Oh, I'm not saying you are.
Henry Liu: [00:30:57] We agree, we agree on that.
Sarah Hickey: [00:30:59] Sorry. I'm just clarifying my own position.
Henry Liu: [00:31:03] Actually, given that Graham and I are in agreement, we can, we can finish the recording now because that's...
Sarah Hickey: [00:31:11] That's...
Henry Liu: [00:31:11] The skopos of this is finished. But no, no, but actually, I mean...
Graham Turner: [00:31:16] Just to say, Henry, I'm not sure I do agree with that actually, but continue.
Alex D: [00:31:19] Brilliant.
Henry Liu: [00:31:20] Okay, good. Excellent. We've got a reason to, to carry on. But, but pick up on two things that have just been said, because I think that's very important. Um, one is: Yes, it is very common for tech. I mean, not that we, not that people quote Jack Ma these days, because he is now completely out of favor, but I mean, that's what they were saying. You know, the regulation is, is, is the problem of tech and, and, you know, there are no experts except experts of yesterday. Well, actually, if there are no experts of yesterday, there won't be experts of today. Let's, let's face it. I mean, and that's, that's the whole idea of, of, of tech actually not thinking about where did it come from? I mean, I often said without human translators, machine translation is just a formula and algorithm. I mean, that's why low-resource, uh, languages don't work when it comes to this sort of thing. But anyway, let's put regulation aside because regulation is a totally different idea. You've got a leader of, of your government - well, for now, for the next little while, before she retires - that actually understands science, but most politicians don't, right? And the Congress hearing on tech and surveillance, I mean, it's just laughable! So, so how regulation comes in, and the drivers of regulation, is totally different and it's an even bigger money issue. But put that aside. A simple, relatively linear way of looking at it: Autonomous cars, whether it will happen or not, it's a tech issue. And just look at how the investment now suddenly dropped, knowing that this tech is too hot or this tech is actually taking too much money. Flip it the other way. Ignoring regulation: Planes. Autopilot is on all your planes, but it's not happening. Because we don't accept it. Right? I took one of the first flights that went from Perth to London. 16 hours. Direct flight. And they have four sets of pilots. That's all paid for. Because the plane can do it. We know the plane can do it. Why are we not using that? It's not a regulatory issue, well, it is a regulatory issue. But, but it's also the idea of how we are going to accept that. I want to bring that in because that's why it's important in interpreting, because it's very dynamic and general. Jonathan always says that this is the real world stuff. It's because if you are driving, and the only way, at present, at least in the foreseeable future, autonomous cars will work is if all cars are autonomous. You can't have rebellious drivers, somebody having road rage, and the autonomous cars would just not go. Or with, uh, pedestrians walking out, or cats jumping out and all the rest of it. It's not just the ethical issue, it's even the technology issue, whereas planes are different, right? Planes are all regulated, you know exactly which altitude there is, there's much less interference from birds and all the rest of it. I mean, simplified, but we actually need to think about the dynamic side of communication. I think that is the key. We are changing the environment. If we are adding tech, we are changing the environment. The environment of language services is already changing. We are, we are seeing texts. We're seeing speeches. We're seeing corporate guff that is actually very easily translatable.
Alex D: [00:34:37] That's an excellent point. I want to put a pin in that real quick, because I wanted to throw this to Graham, if I may. Um, because Graham, you have quite a bit of experience of being involved in legislative procedures with the BSL Act, and sort of what I would be interested in is getting your take on whether regulation or laws can actually do those things that some of us now alluded to, or, or, I mean, I mean, how do you see that play out? Because sometimes, uh, you know, for the BSL Act there was this, this legal act, and then it had to be filled with life as it were. And did you see that happen or did it remain more of a paper tiger, I guess, is the, the question in a somewhat, you know, very short form, I guess.
Graham Turner: [00:35:19] Um, I mean, I could give you a long answer to that question, but much of it is perhaps not really pertinent to our real topic here. Um, the short answer to the question is at this stage, then yes, it is, uh, to a significant extent, just a paper tiger at this point in time. Um, because there was an act of the Scottish parliament in 2015, there was a national plan published in 2017. That first national plan was due for an interim review in 20, uh, 2020. Wait a minute, 2020. Yes. And then COVID. So there has been no interim review and much is, you know, the whole thing is to some extent in suspended animation at the moment while we, uh, you know, hope to get, get things back on track and so on. So it's not necessarily the best example to take. The more crucial question I think is about whether regulation is going to be a knockdown issue here of any kind. And to my mind, there's not the slightest chance that it will be. Uh, because I think that our policy developers and, uh, um, ethicists of various kinds and so on have to, uh, handle much, much more complex issues than this, and have done so on a regular basis. We may not like the outcomes that they come up with all the time. They may still be contested, but we have rules and regulations around all kinds of things. So for example, you know, a few years ago I was involved in a campaign to, um, protest about, uh, the Human Fertilisation and Embryology Bill in the UK. You know, that's seriously complex ethical stuff. Uh, and people are still disputing whether the right, quote unquote, outcomes came out of that process and they will do so probably until, you know, long into the future. But there is legislation on that. Decisions have been reached about how we will go forward. Things will be disputed in the courts. We have institutions that deal with that, et cetera, et cetera. That will happen in this field too.
Alex D: [00:37:38] That kind of reminds me of the whole debate about how some clamour for more regulation of the interpreting profession, so that you need to have a certain degree to be able to work as an interpreter in the first place, that kind of thing, which could lead to regulation that we've been clamouring for but we don't like the result in the end, as you just said, if I'm not mistaken. So it might not be, you know, what we want in the end. Um, but yeah, but maybe we can get back to the whole question of, of dynamism. And it seems to me like it's a little bit similar to what you've been, um, saying, Sarah, is that interpreting could go niche or maybe could go premium or maybe that's the same thing. I don't know. So I'm wondering if...
Sarah Hickey: [00:38:16] Sounds...
Alex D: [00:38:16] Yeah, but I mean, that's the question because niche and premium is not necessarily the same. Niche could also mean it's a, it's a, it's a rare language. Yeah. And premium, it's just about the price, um, and who can afford it. And that's not necessarily the same thing, so.
Sarah Hickey: [00:38:29] True. Good point actually.
Alex D: [00:38:33] Because I really liked the whole, the whole aspect of the whole question around language access. But I'm wondering whether we are really going to see that or whether it's just going to be pushing down prices and rates and, um, people making do with, with lower quality. I mean, we haven't really gotten into the whole quality issue yet. I don't know if we want to, but, um, yeah. I don't know, Henry, if you wanted to go into more detail on what exactly you mean by the dynamism, um, of... Are you talking about sort of the, the messiness of human communication or?
Henry Liu: [00:39:05] So let's pick up on what Sarah is saying. Sarah's saying that, you know, we need to be more flexible. We need to change. We need to actually look at the future and so on. I totally agree with that because, you know, we, we come from the second oldest profession and, oh yes, but back in my day, I have to do this. No, no, fine. But on the other hand, there's also an implication issue. Professions are getting harder and harder to be established, and harder and harder to actually say, the next 10 years is going to be, or the next 50 years or the next lifetime of somebody's earning, the income, is going to be secure. That is much harder even in, in, you know, established services, let alone for us. But the problem for me is that I don't like the term AI, because, you know, when I was at university, as Graham was saying, when I was at university, those things were called stochastic methods. It's not AI, there's nothing artificial or intelligent about it. It's just stochastic methods of actually solving a problem. But now it's being marketed as AI. But anyway, be that as it may, any profession needs investment. From individuals and from the people using it. Clear and simple. So the question for that is, I mean, looking at people who are actually in my multilingual setting, and I'm involved in an indigenous world and, and the deaf world is in a different sense because of the regulatory pressure and so on. But telling people to say, oh, you're talented, you could be an interpreter, or you could be a translator. Is that happening? No! Oh, because they will be going: well, I will be coding. I would be doing whatever interesting thing it might be. And the interest is not there. And this is turning into a precarious situation, it is much harder to ask people to invest a huge amount of time and resources to be able to get to where we are, let alone better than who we are. And I think that's the problem. The problem for tech it's, it's, it's a little bit like, but you know, not the same. Um, when I was at EST in Aarhus, the conference on translation studies, I said, if we are actually promoting all this idea of an industry that's changing, and that is actually getting, you know, automated and so on, there won't be any students in 30 years because nobody will want to study translation. Nobody. There will be no translation studies. So we need to change that idea, because the same idea of tech, the more we are automating, it either has to be self-perpetuating, which is very dangerous, but let's not go there. Where is the next generation of experts going to come from?
Alex D: [00:41:36] So you're saying we have an image problem or the profession kind of feels outdated compared...
Henry Liu: [00:41:40] We always have an image problem!
Alex D: [00:41:41] ... the shiny tech companies.
Sarah Hickey: [00:41:42] A new image problem.
Henry Liu: [00:41:44] Well, it's an image problem on various levels, because yes, just by saying, oh, interpreters are not going to die. That profession is not going to die. Translation is not going to die. Sure. But, but, but in what form, and is it attractive to the next generation, and is it attractive to actually allow people to actually come into the profession that will actually reinvigorate it in the way that we want to see.
Sarah Hickey: [00:42:10] I think that's an excellent point.
Henry Liu: [00:42:12] That is the dynamism that's changing, but the other dynamism is, you know, people laugh about translationese, but actually we are generating texts - I mean, the European Union would know that - you know, you're generating texts that are machine-translatable. Soon, we will be talking machine-interpretable. Now, is that what we want?
Sarah Hickey: [00:42:32] Uh, I'm trying to, uh, to think what to... I'm, I'm processing everything that, uh, Henry so brilliantly outlined there, and I think you make some excellent points and a lot of, uh, good, uh, food for thought. I'm kind of, I don't know; I keep thinking: What about, you know, in relation to this, uh, image problem? I actually agree with you, Henry, that's a good point, you know, like how do you get people then to study translation and interpreting, to become the next experts there as well? Um, well what if in these courses, what about, you know, embracing tech more in those fields? I know, for example, when I did my interpreting degree, we had already also started, um, interpreting from some videos as well. It wasn't live, but from some videos as well for practice, or when I did translation as well, there was some working with Trados or something like that. Um, even though, to be honest, in my translation degree, we had to hand write everything on paper. I was also thinking, well, that's kind of far away from reality, you know, like, why not bring it closer to how translators actually work, but okay. It doesn't matter. I don't know, and this is just off the top of my head, but if there's more of a future in it with also presenting it in that way, that it's not, you know, if you do, maybe it's not like pure human translation or pure human interpreting what you're studying, but like a mix of, you know, there's some role that you play as the human. In other cases, there is more the tech side and that you are embracing this to something new. I don't have an answer obviously, totally rambling here, but I was wondering, with the image problem you said as well, that's a way to solve it instead of resisting and trying to push it away, you know, like.
Henry Liu: [00:44:07] Hmm. Uh, just one very quick comment, because otherwise it becomes a monologue. I've always talked about the fact that actually what we see throughout history, what is translation and what is interpreting is only a very narrow scope of intercultural communication and interlingual communication. We all know that. Forgive me if I'm wrong. I mean, the current hit in the literature is all about respeaking and trans-speaking and all the dynamic audiovisual translation and so on. That's very exciting. But then the question is how are you going to train the next generation of people who are actually going to be doing that? Because at the moment, the people who are actually doing them are simultaneous interpreters, are translators, are audiovisual translators who are already retraining or taking their skills to a new level or to a different platform to be working on tech, to be working on accessibility. But the real question for me, it's not about our market. Our market is gigantic. We're not actually even touching the aspect of our market. So, so to say that it is a threat because the market is a zero-sum game - I don't think so. But the problem is how to actually generate the interest to actually fulfill our real mission of being communication experts.
Sarah Hickey: [00:45:20] How to train the next generation, the market is changing so quickly and you don't have the teachers for that yet. I think that's a problem anyway. Right. And not just in language studies, but, uh, in university.
Henry Liu: [00:45:30] Perfect for Graham, I think. Graham, to answer that as a pedagogist.
Sarah Hickey: [00:45:34] I don't mean any offense. Basically, my family are all teachers, pretty much with the exception of my dad, who was an architect, but like it's just full of teachers. Um, so no offense to teachers, you know, I just mean - how am I going to say that - it's often a little bit behind. You of course first have to study something and then you teach it. And in that time the market has moved on. At the same time, especially in universities, I know there are people who have worked on the market or are working on the market. And again, in my interpreting studies, all of our teachers were active on the market, but I'm saying oftentimes in, in academia and in schools, that is exactly the problem that you're saying, Henry, right? Or you as well, Graham. Things are developing so, so quickly, it's hard to, you know, catch up and prepare the next generation for what's lying ahead, no?
Graham Turner: [00:46:17] I mean, I've come in there just to say, as the baldest man in the room, I feel able to say that, um, I fear, unfortunately, that a lot of this conversation sounds to me like bald men fighting over a comb.
Sarah Hickey: [00:46:33] OK.
Graham Turner: [00:46:33] Because as I said earlier, I still think that I haven't heard any argument that really persuades me that, you know, the shift that we started out thinking about towards, uh, machine-delivered interpreting, uh, is anything but inevitable. And I think we, you know, we were talking about, oh, but it will be complex to regulate. Yes, but not impossible. Yes, but it's a very difficult challenge. Yes, but it's not an impossible one and the machines are getting cleverer. Uh, yes, but the market will grow. Yeah, the market will grow, but it will be satisfied by the cheapest possible solution. And the cheapest possible solution will be an AI-driven solution. If there were a market in machine translation for, uh, social media messages, um, that's one of the areas where more and more translation has happened over recent years. We're all routinely using it, you know, uh, none of that's being done by human beings. Uh, so it's the machines that are, that are lapping up the extra work. And I think that will only continue. Uh, there will be some niches that will take longer to fill, but the smarter the machines get, the more data gets poured into the hoppers, you know, the faster the answers will get generated, et cetera, et cetera. So, um, yeah, unfortunately, fortunately, unfortunately, um, I think we're guilty really of just not looking quite far enough ahead. I think if one looks 50 years ahead, let's say for the sake of argument, a couple more generations; to me, you know, youngsters coming into the schools now and opting against, uh, studying languages because they think there's not going to be work in translation and interpreting are simply making entirely rational decisions about that. If what they were doing studying languages for were to get employment in those professions, I think there are lots and lots of other good reasons to study language and culture, don't get me wrong. But for translation and interpreting purposes, I think that they're just making a rational decision if they say there's not a long-term career in this for me. And if, if not the current young generation, then I reckon certainly the next, you know, the next one, 20, 30 years time. Um, I think we, we probably really just have to lift our heads up a little bit and say, uh, okay, well, there are plenty of other important things that we could be doing with our ideas and with our languages and with our skills and with our pattern recognition abilities and all the rest of it that's underpinning the kinds of works that we do. How can we deploy those in innovative and, um, you know, dynamic ways to address the colossal problems that our societies face, uh, over the coming century. This is not one of those colossal problems, it seems to me, this is, this is one that is on its way towards, uh, you know, uh, us finding very smart tools that will handle this adequately for human purposes. Alex D: [00:49:35] Just a quick follow-up. Do you see any differences there across languages? And most importantly, what about sign language? Same judgment there, or? Graham Turner: [00:49:45] Uh, as Sarah said, where we're talking about, um, less widely used languages, smaller communities, and so on, you know, the process will be perhaps slower in some respects. Um, but the same steps are going to be taken. And essentially that applies to sign languages as well. 
Uh, so right now there are some pretty clever, um, uh, pretty credible kinds of avatars being, uh, produced that can, uh, you know, looking increasingly natural and increasingly like human signers. They're not there yet, but, you know, as I've said throughout this conversation, I think, you know, when we talk about the yet part, we're just not looking quite far enough ahead. I think that's the case with, uh, with these kinds of avatars as well. Um, and you know, the resources that are needed for creating and more importantly, perhaps reading sign language, uh, in, uh, in machine-driven way, are less widely available at the moment. Um, but again, they are being developed. They are evolving faster than we can blink more or less. Uh, and you know, in the case of, uh, visual dealing with visual information, uh, digital visual information, most of this is happening completely outwith the language field. Uh, it's happening for example, through, you know, CGI for films and so on. Um, and all of that will wash back into, uh, industries like ours in due course, more or less as an afterthought because they don't generate as much money as something like blockbuster films do. Sarah Hickey: [00:51:33] That's a good point actually. Alex D: [00:51:34] It looks like Alex is rethinking his life choices there. Alex G: [00:51:38] No, no, not at all. Alex D: [00:51:39] Probably re-evaluating options there. Alex G: [00:51:42] No, no, no, no. Not at all. I, I, it's interesting taking in all the different sides and I think what Graham was saying as well, it's just, it's just, you know, it's tough looking far enough into the future because at some point we're kind of looking, we're trying to look like around the Alex D: [00:51:57] Humans are bad at that, yeah. Alex G: [00:51:59] Do you know what I mean? Like... Henry Liu: [00:52:01] Yeah, we're dumb. Sarah Hickey: [00:52:03] Yeah on that actually. I also want to say about the, you know, I'm saying that the market is expanding, um, and you were saying , Graham, you know, it will be filled with more, um, machines and you're probably right ultimately. But again, I was more looking at the more immediate future. When I was saying the market is expanding I mean more that, for example, scenarios where there is no interpreting yet, that some of that will be filled with, um, you know, some machine interpreting or some remote. You know, RSI found a whole lot of new section of buyers of interpreting now, you know, that didn't buy interpreting services before, so it didn't take away. Henry Liu: [00:52:39] Because it's now cheap. Sarah Hickey: [00:52:40] Yeah, exactly. So that's what I was talking about in terms of expanding there as well with, um, not only in this case of course, but it comes with new opportunities as well for the interpreting market is what I'm saying. Alex G: [00:52:50] Let me just jump into real quick, because I remember it was about like a year ago that we talked to Lakshman Rathnam. I think it was about like June or July last year that we talked to him about, um, machine interpreting and we had this... this is kind of always like how the circle goes where we had the same conversation about RSI year after year after year until the bubble burst and all of a sudden there was RSI. And so we talked to them about the same thing that it was going to expand sort of language access. And it was kind of like going to open doors because the quality of machine interpreting wasn't going to be properly right at the beginning. 
But then it kind of raises awareness for language access, which opens doors for us. And I think it's kind of like a similar discussion that we're having right here at the moment. I'm, I'm struggling, Graham, to see where we're going to be. I agree what Sarah is saying: there is going to be more language access across the board. And I agree with what you're saying as well, is that wherever it's feasible and cheaper, and obviously it's going to be cheaper wherever it's feasible, we're going to be replaced by interpreting, but I don't see any future, unless there's like C3PO walking into the door right now where we're actually going to be replaced across the board by machines. I just don't see that happening even 50, 60, 70 years in the future. I just don't see that happening. And it's been said here tonight that it might mean that interpreting or conference interpreting or whatever type of interpreting we're talking about becomes more niche, becomes more sort of like boutique. Graham Turner: [00:54:18] Can you say more about why you don't see it happening, Alex? Alex G: [00:54:22] So I was just thinking, um, so I do a lot of motivational interpreting, right? A lot of motivational speaking, et cetera, et cetera. So it's very fast, very slang. And also you have to really bring the motivation, like the emotional beats. You have to bring that into the other language. And even if a machine were able to compute all of the input, the output just wouldn't be the same because even in like 30, 40, 50 years, the machine is not going to be able to perfectly replicate the emotional impact of a motivational speech from one language to the other. I just don't see that happening unless C3PO comes into the door right now. And you know, he's the ultimate interpreter across all languages. I just don't see that happening. But again, maybe I'm just not looking at far ahead enough around the curve into the future. I don't know. Graham, convince me I'm wrong! Graham Turner: [00:55:11] So, if I understand your example, you're saying the motivational impact of a speech, for example. Alex G: [00:55:18] For example. Graham Turner: [00:55:19] The machine interpreted "I have a dream" speech for instance. Alex G: [00:55:24] Yes, if we will. Yeah, sure. Graham Turner: [00:55:27] Honestly, I, I don't see what, what, what is, um, I think it's just decline, isn't it? Uh, you know, from, from the simplest two word utterances of a tiny child to, um, Martin Luther King. And it, it's just a question of whether we have enough data and enough capability, sensitivity, uh, in our, in our machine processes, our machine-driven processes to handle more and more complex material. And as I said earlier, I think, you know, the evidence, if you look back over your shoulder is precisely that, uh, you know, human abilities have demonstrated that in tandem with, uh, digital, uh, processes, one way or another, they have, um, found a way to handle more and more and more and more complex material and they will continue to do so. So I think, you know, the same kinds of underlying processes that now enable machine translation to produce a very credible, um, "I have a dream" speech in writing. Uh, I see absolutely no reason why they can't do exactly the same, uh, with the spoken word. Alex D: [00:56:48] So in terms of emphasis and pathos, if you will. Um, yeah, I would probably agree in, in that, that that might be, uh, an option. Sure. Why not? Yeah. Henry Liu: [00:56:58] And I agree with Graham. 
Being the Cassandra in the room often, because I predict dystopic futures, people would go, oh my God, that's terrible. But nobody listens. I mean, just look at all the fake videos that have been coming out. I mean, this is only the beginning and it's already affecting how people behave. So this is rudimentary, but it's, it's not very hard to manipulate people's emotions. That's one thing: using machines to manipulate emotions is not difficult. Using machines to generate impact, to generate empathy, empathy might be harder, but impact certainly, or... Alex D: [00:57:34] Disinformation? Henry Liu: [00:57:35] Disinformation! I mean, that's all happening already and that's all happening translingually, or through different language media. It's already happening. The dystopian part is this. At the moment we have tiers: we have the very high-level interpreting where we can hear it and actually go, this is really different, and this is something that machines cannot imitate yet. There is actually a mathematical point of inflection that will come eventually. As Graham is saying, it's not necessarily linear. I agree, because look at what happened in the past: people were saying that ELF, you know, English as a lingua franca, would actually take over the world. Well, it actually hasn't, but you know, that's, that's an aside. Alex D: [00:58:15] Not yet maybe. Henry Liu: [00:58:15] It's not necessarily linear, but as we go up that scale, we will also have a downscale of expectation. Because the downscale of expectation is this: unfortunately, one day all of the experts will die and there will be a new generation of experts that may or may not be as good, and eventually we will actually approach the asymptote where the machine and the human actually intersect. And that's the dystopian side. Graham Turner: [00:58:42] It's uh, I was just, um, thinking about, uh, Ray Kurzweil. You know, these predictions about, uh, 2045 is the date he gives... Alex D: [00:58:52] For the singularity? Graham Turner: [00:58:54] For the singularity, for, uh, you know, human knowledge and machine knowledge attaining such a pitch that, boom, all sorts of things become possible. 2045. That's 24 years. Henry Liu: [00:59:06] Not very far away. Graham Turner: [00:59:07] Absolutely round the corner. Now I'm not saying every word that comes out of Ray Kurzweil's mouth is bound to be true. Um, but he's a smart guy. He's got a handle on a whole lot of knowledge the rest of us, you know, can only dream of having access to, kind of thing. And I think Henry makes an important point there about, about these two kind of intersecting, uh, these two kind of intersecting directions, as it were, because the other part of all of this, I think, and it does come back to something that we've sort of acknowledged as a, as an elephant that we don't necessarily want to pet within this room, um, about quality. Because I think it is, it's already surely true for us all that we are more tolerant of lower-quality translations than we would have been a few years ago. We see them all the time, we live with them, we don't stress about them. Uh, you know, kids' text messages to each other, they spell everything wrong. They can't be bothered with the capital letters. They don't care. But communication still seems to be happening. You know, they still manage to form human relationships and all the rest of it.
Academics, uh, you know, a number of them, an increasing number of them, linguists, are saying, okay, I'm going to live with the variation that we all see around us in academic writing and so on. And I'm not going to correct my students' English in this, that, and the other respect, because we're just all more tolerant of this kind of variation now. So I think that's one part of the intersection, uh, that Henry's talking about. We are well on the way already to learning to live with different kinds of expectations around, around language and communication in that respect, too. Alex D: [01:00:56] Yeah, we're getting dangerously close to the, to the precipice of doom and gloom, it feels like. But, um, and I mean, it was interesting that, uh, Henry mentioned the, the, um, what was it, the inverted Turing test or the reverse Turing test we talked about... Henry Liu: [01:01:10] The inverse Turing test, yeah. Alex D: [01:01:12] Inverse Turing test. We could talk about blind faith in machine translation. You know, this story of police officers knocking down a door because they were working with wrong assumptions due to a false machine translation. So I'm not sure we want to take further steps towards the precipice. Um, so maybe we can sort of circle back to, um, uh, to what Sarah was getting at earlier and talk about what's, you know, what's maybe the better alternative. I mean, what, what are ways and means to put our skills to good use, as you say, Graham, um, if in fact machine translation or the singularity is inevitable. Graham Turner: [01:01:49] Just to sort of contest the way you frame that very slightly. You, you know, I don't, I don't think it is necessary to see this as a precipice. You know, it, you could say it's just a, it's just a very, um, sane refocusing of our attention on the things that really are going to matter, you know, in the years ahead. And if this is not going to be one of them, why would we focus lots and lots of our attention and anxiety and concern and, you know, brainpower on it? Sarah Hickey: [01:02:18] Yeah, that's an excellent point, because again, it's not interpreting for interpreting's sake or translating for translation's sake. Right? Um, but yeah, just, um, for people who have these skills, and like you said, there are lots of transferable skills that you gain as a translator and an interpreter that you can use in other fields. And lots of us have other interests as well. Um, so, you know, you can go in other directions. Um, but yeah, I know, Graham, like I said, I'm putting you on the spot, but off the top of your head, what are some of the other, you know, more important things we could be focusing our attention on with those skills, for example? If you know any. Graham Turner: [01:02:54] I mean, we've touched in passing on, um, uh, things like the growth of, of disinformation. And allied to that, it seems to me, in the wider society, is a level of, um, intolerance in many respects that people are encountering in many of their human-to-human interactions. And for me, um, you know, learning about other languages, learning about other communities, learning about other cultures is, uh, is the way par excellence of learning tolerance. And I think that is going to be a human skill, uh, that we, that we clearly are going to need to work on, um, in the society that is now emerging ahead of us.
You know, languaging and, uh, and understanding language processes, uh, as a way of building human relationships, uh, seems to me to be absolutely critical. So I think if we are able to reframe some of, uh, some of the kinds of things that we do towards those kinds of exercises of, um, you know, intercultural encounter, uh, and inter-community encounter, whether that's within one and the same language or across languages, you know... and in essence, if the machines are doing the translation for you, in other words, if you've got a Babelfish in your ear, then we might as well be speaking the same language anyway. Sarah Hickey: [01:04:28] Yeah. Um, I like what you're saying. And it actually reminds me of, uh, I saw a brief clip a while back from, uh, I forgot his name, unfortunately, but, um, a well-known successful business guy, I'm just gonna say, I really cannot recall the name. I can probably find the video later and put it in the notes somewhere. But, um, I remember he was talking also about how, you know, we shouldn't try to compete with the machines, because the machines will ultimately win, but that instead we should focus on what makes us human, what we humans bring to the table, like you said, and then, for example, in schools, people should maybe also focus on teaching children empathy and the power of empathy. And, like you said, you know, focusing on tolerance and overcoming those wider issues, exactly like you said, the, the bigger human issues that we're facing. Graham Turner: [01:05:16] Yeah. I think empathy is a key word. Thank you. Henry Liu: [01:05:19] And that's very hard to put a value on, and that's very hard to convince people to actually, uh, invest in. Just picking up on that idea, um, not to take us to the precipice, but to, to, to remind or, or at least highlight the fact that there is a double-edged sword. There is always a double-edged sword. Technology is a double-edged sword, just like language is a double-edged sword. There is a flip side, you know. The idea, uh, um, the discussion about premium and niche, um, was, was interesting because, you know, uh, uh, Sarah talked about marketing, um, the idea of marketing not being used in machine translation. Well, yes and no, it's only a conceptual issue. Because actually a lot of disinformation is generated by machine translation. Well, actually, a lot of disinformation is generated by humans; let's not forget that there are many countries in the world that train translators to provide disinformation. But that's something that we, in European society, don't think about. I was talking to John Jamison not too long ago. We've lost the ability to think about the fact that information was a weapon, that language was a weapon. We've lost it. In the Cold War, we were, we were thinking exactly the same thing as the other side. But now we, we, we have had 50 years, 60 years, 70 years of peace. We've completely forgotten about it. Sarah Hickey: [01:06:46] Yeah, that's an excellent point. Uh, I was just gonna say that, uh, in Europe, maybe we don't think about it anymore, even though I feel like it's coming back a little bit with all the movements we're seeing, uh, also in Europe. Um, but yeah, of course, if you think of World War Two, then followed by, uh, the Cold War and everything there, like you said, language was a major weapon. And there was lots of disinformation.
So I think I said this before as well, that, yeah, in some ways machines have been used for disinformation, but humans have been used for that for way longer. So, you know, that's that, and I'm circling back to the whole thing with empathy as well. I think that these kinds of soft skills, yeah, they have been underestimated for a really, really, really long time. It's still seen as like a, yeah, that's nice, whatever, you know. But what are the harder skills that you can bring to the table? Whereas really it's the soft skills where we as humans can excel, you know, and that should be brought to the forefront more. Alex D: [01:07:42] But I guess the question is, do, do soft skills pay the rent? Sarah Hickey: [01:07:47] For a lot of us they will, if the machines take over the other jobs, you know. Alex D: [01:07:51] I think we just have a hard time imagining what that looks like, but yeah, go ahead, Graham. Graham Turner: [01:07:55] If you go back to Henry's point about, uh, about language as a weapon, um, I find that hard to square with the notion that this is a soft skill. Alex D: [01:08:06] Good point. Yeah. Graham Turner: [01:08:07] It's a very powerful skill, you know, extremely powerful skill. And my children are now 26 and 22, uh, my wife is an interpreter, uh, as well. And, um, you know, we always said through those children's schooling, the number one skill, the number one capability that we wish their education would focus on would be, um, critical discourse analysis. Sarah Hickey: [01:08:36] Absolutely. Yeah. Graham Turner: [01:08:38] And so, again, in terms of, you know, skills that are of real value in the long term to the human race, I think critical discourse analysis will be pretty high up my list. Sarah Hickey: [01:08:48] Yeah, I could not agree more. Graham Turner: [01:08:50] Yeah. And it's exactly to counter language as a weapon, to counter disinformation. Yeah. And to, and to, and to think straight using language. Sarah Hickey: [01:09:02] Yep. Yes to everything you said there, Graham. Henry Liu: [01:09:06] Yes. And they're transferable, it's a totally transferable skill, which is Graham's point. And I, and I totally agree. It's, it's that we need to be able to make what we do sexy. But the sexy part is that actually what we do is very transferable, whether it is in journalism, whether it is in translation, whether it is in diplomacy, whether it's in any human encounter. Of course we need to, because otherwise we don't, we cannot attract the new generation, the new experts. Yes, income matters, but we also need to see, but, but how income matters depends on how we as a society value things. It's all about how we put, put value on who is being remunerated. How are people getting rich? How is society being remunerated? But that's too philosophical. Alex D: [01:09:51] But that's really the big question. Yeah, exactly. Graham Turner: [01:09:54] I'm not sure we need to make it sexy though, Henry, do we? We just need to get people to recognize that it is as important as breathing, for Pete's sake! If empathy and thinking straight are not as important as breathing, I don't know why we would do them. And I would go back, you know, to a comment Sarah made earlier: she talked about, um, uh, how when she was studying translation, they were made to handwrite everything. Was handwriting sexy for us when we were at school? No, it was just something that you absolutely had to do. Henry Liu: [01:10:28] Yes, I went through penmanship.
I mean, you know, I had to learn how to write and, having, having had a colonial education, you know, penmanship and articulation, that was the thing that we had to do. The reason why I talk about sexiness, or maybe it's the wrong term, is, is this: um, at the moment, and you know, I've worked, I'm trying to work on, on, on the issue of indigenous languages and the rarer languages. Sign language is a little bit different in the sense that the accessibility side is different. But if you think about how do we revive dialects, how do we revive languages that are not popular? A huge part of it is employment opportunities. A second part of it is stigma. I mean, classic example, because I'm talking to three people in the German-speaking world. The classic example is between the north and the south. You know, I go to Zurich and people speak to me in Schwizerdütsch. They are proud to speak to me in Schwizerdütsch and they go, this is my language. I don't care what you think. Why? Because they're rich? Sure. But, but it's also because they, they are really proud of that. Their children are really proud of that. Everyone's really proud of that. And then I go north, you know, friends of mine who are up north go, oh, we don't speak dialect. No, that's a terrible thing. In order for this to work, in order for language to work and, same thing, in order for the humanities to work, in order for all the, all the soft skills or whatever we're going to call them to be, to be attractive, we want to make it attractive. We want to highlight the attractiveness. I think that's why I call it sexy. That's why I call it sexy. Totally the wrong term. But if we make it compulsory, all we do is antagonize, because that's what happened. You know, like, like, like handwriting, if we make something compulsory, most, most children will go: Why do I have to learn that? I don't want to do that. Sarah Hickey: [01:12:29] Well, like you said, you can make it attractive, even if it's on the curriculum. Right? I mean, that's... Henry Liu: [01:12:34] Sure. Yeah, yeah, absolutely. But, but not necessarily, not necessarily compulsion. Sarah Hickey: [01:12:39] You know, I'm not an Irish, uh, speaker, but I lived in Ireland for a long time. I know the language was, you know, pretty much almost dead. And now it's kind of coming back. And for a very long time, there was a stigma associated with speaking Irish, that it's the language of the poor and the lower class, and now it's coming back and people take more pride in speaking Irish as well. So there is a little bit of that, you know, image around it as well, and attractiveness of course. Alex D: [01:13:00] Sexy. Yep. Sarah Hickey: [01:13:01] When it comes to dialect, I think we kind of tend to say that in Germany we are no longer, you know, patriots, but we're regional patriots. I don't associate with the whole, ugh, being German thing, but I love the area that I'm from. And I love our dialect. I love coming home and speaking my dirty dialect and I will... Alex G: [01:13:18] Your dirty dialect. Sarah Hickey: [01:13:20] Yeah. Because our dialect here is known as being very trashy. But I love it and I'm very proud of it and everything it represents. So... Alex D: [01:13:30] It's, it's, it's really funny to see that, as the conversation progresses, we sort of keep poking at bigger and bigger topics, but, um... maybe this is a good opportunity to, um, wrap up for today and maybe say that we continue another time. Um, if you agree, uh, in a pub ideally. Yeah.
Henry Liu: [01:13:48] Live! Live! In... Alex D: [01:13:50] With a few more people. Henry Liu: [01:13:51] ...audience. Sarah Hickey: [01:13:52] Ooh. Yeah. A live pub show. I love it. Alex D: [01:13:54] With, with a very interactive audience. Yeah, I've really taken away a lot from this conversation and liked what Graham, for example, said earlier with, with, um, you know, putting these things to good, to good use and maybe not looking at whether they're marketable first, but whether they're, you know, of benefit to, well, humanity, I guess. Uh, it made me think of things like universal basic income, but again, poking at yet another big, uh, topic we could get… Alex G: [01:14:19] Yeah. Oh god. Alex D: [01:14:20] ...into, but probably won't get into. So I guess for tonight, thank you so much, um, Graham and Henry, for coming on and talking to us about all these topics. We've kind of moved away from technology along the way. Alex G: [01:14:33] But it's been an... Alex D: [01:14:34] ...been... Alex G: [01:14:34] It's been... Alex D: [01:14:35] ...worthwhile. Sarah Hickey: [01:14:35] It's telling in itself: we've refocused on the human side and what we bring to the table. Alex D: [01:14:40] Exactly. And I'd be, I'd be very interested... Henry Liu: [01:14:42] Technology's a tool. It's how you use it. It's the same thing as all the, all the, all we're talking about. The theme today is that actually it's how to use the tools. Really, or how to, how to equip the user… Well, anyway. Okay. Alex D: [01:14:54] Actually. Yeah, exactly. And... Henry Liu: [01:14:56] Cut that out in the thing. Cut that out. Alex D: [01:14:58] I'm also very, very interested in, in hearing from our listeners, uh, and feel free to, you know, write in or, or send us voice messages. Um, and just give us your take on, on the whole topic of the future of the profession, I guess, to sound a bit grandiose, but once, once again... Alex G: [01:15:15] Just in a short post. Alex D: [01:15:16] Exactly. Very quick. Um, thanks for joining us. It's been a wonderful discussion and, uh, again, I really hope we can continue it at some point, uh, live. Uh, but for now, uh, we'll sign off and say, thank you for listening.