EPISODE 21 [INTRODUCTION] Do you enjoy listening to On The Ear but wish you could earn ASHA CEUs for it? Start today. SpeechTherapyPD.com has over 175 hours of audio courses on demand, with an average of 19 new audio courses released each month. Here’s the best part: each episode earns you ASHA continuing education credits. No, wait, this is the best part: as a listener of On The Ear, you can receive $20 off an annual subscription when you use code “ear21.” Just head to speechtherapypd.com to sign up and use code “ear21” for $20 off your annual subscription. You’re listening to On The Ear, an audiology podcast sponsored by speechtherapypd.com. I’m your host, Dr. Dakota Sharp, Au.D., CCC-A: audiologist, clinical professor and lifelong learner. While I primarily work with pediatric cochlear implants and hearing aids, I am absolutely intrigued by the many areas of audiology and communication in general. This podcast aims to explore the science of hearing, balance and communication with a variety of experts, in hopes of equipping you to better serve your patients, colleagues and students. Let’s go, we are live and On The Ear, brought to you by speechtherapypd.com. [INTRO] [0:01:36.7] DS: Because audiologists and speech-language pathologists typically start with the same undergraduate major, we see a lot of overlap between the disciplines. When asked, a lot of audiologists, myself included, will say one of the reasons they were drawn to audiology over speech was the rapidly advancing and life-changing technology involved in the field. I was always the family member who set up the gadgets, and audiology is full of gadgets. However, our SLP friends also work with some pretty incredible technology in the world of augmentative and alternative communication, or AAC. Today’s excellent guest is going to break down all things AAC for us and we are so excited to have him.
Stephen Kneece, MA, CCC-SLP, is a certified speech-language pathologist and AT/AAC consultant living and working in Aiken, South Carolina. He is currently an SLP for the Aiken County Public School District, the president of the South Carolina Speech-Language-Hearing Association, the founder and CEO of Speech and Language Songs, which I’m excited to talk about with him today, and an adjunct instructor at the University of South Carolina. He is also a graduate of the ASHA Leadership Development Program. We are going to have such a fun conversation, I’m so excited to learn more about AAC. [INTERVIEW] [0:02:41.3] DS: Thank you so much for joining me, Stephen. [0:02:42.7] SK: No problem, Dakota, thanks for having me. [0:02:44.9] DS: Yeah, okay. I kind of want to just get going out of the gate. I’ve got a lot of questions for you, I’ve got a lot to learn here. [0:02:50.4] SK: Okay, I hope I have some answers. [0:02:53.0] DS: I think you do. Something I think is kind of surprising: like I mentioned, we kind of started in the same undergraduate major of communication disorders, for the most part, in the CSD world. I don’t think I had a class on AAC or this realm of speech. So how did you learn about AAC? What drew you to that specific discipline in speech, and how’d you learn more about it? [0:03:14.4] SK: Sure, yeah. Same as you, I didn’t really have any background in it. Even after I graduated with my master’s, we didn’t have a course. I don’t even think we had a day. We didn’t talk about it, really, hardly at all. I think it was mentioned in one of our classes but we really didn’t talk about it much. I graduated about eight years ago, so at that point in time, we didn’t really have anything going on there. It really came slowly. I’ve been working in the schools for eight years, and as you were saying, you started off talking about technology and your just innate interest in it, and I think I kind of share that with you.
I was always kind of drawn to it, and so when I was voluntold to oversee the AT department a few years ago, I think that was really a blessing, because I found something that I was really interested in and that I continued to get excited about. It was something I kind of got pushed into at first, and then I immediately started to gravitate towards it and have really been focusing on it ever since. Then for the past three years, I’ve solely been an AT/AAC consultant. Prior to that, I was kind of overseeing the SLPs within the district and doing a little bit of overseeing the AT department as well, but now I get to focus fully on that, which I’m really glad I get to do. I really didn’t have any coursework, so it was really on the job, figuring it out. I went to ATIA – I’ve been to ATIA a couple of different times. That is the Assistive Technology Industry Association. They have a conference every year down in Orlando, Florida and it’s amazing. I definitely recommend it if anybody has an interest in that area. I went down there, have gone pretty much every year they’ve had it. Obviously, this past year, they did not have it in person due to COVID-19, but other than that, I’ve been there. Learning on the job, learning through coursework like that. I listen to podcasts like Talking With Tech with Chris Bugaj and Rachel Madel, a really good podcast that focuses on AAC. Just kind of through the years, figuring that out, reading, listening and things like that. [0:05:22.9] DS: Really cool. Okay, could you break down just a little bit – if you met a stranger on the street who is like, “You work in AAC, what in the world is AAC?” What’s your elevator pitch for what it is that you do? [0:05:34.7] SK: Sure, yeah. We really are trying to leverage technology that we have available to help people communicate.
Communication is such a big deal for us as human beings, and without communication, life is very different and a lot less enjoyable. We really want to look at how we can use the technology we have currently available to allow someone to communicate as efficiently as possible. I would say that would probably be my elevator pitch. I’ve never even thought about all that before, but I guess that’s it. I use technology to help people speak, help people communicate. [0:06:08.7] DS: Got you, cool. What does that look like? Can you break it down a little bit further than from the elevator to, I don’t know, I guess we’re going to lunch then? It’s not as quick as the elevator, I’ve got a little bit more time. [0:06:18.3] SK: Yes. This is one of the reasons why I really enjoy doing this, one of the reasons why I probably will be doing it for many years to come in this little niche of speech-language pathology: each of those situations is a problem-solving opportunity. Each of these individuals is very unique; it’s not a one-size-fits-all solution whatsoever. We really go into a comprehensive evaluation and have to look at all these different pieces. And something that obviously you and your listeners are familiar with – we team up with OTs, PTs, ATPs, but also teachers of the deaf and hard of hearing and audiologists. The audiologists a little more indirectly; we’re getting an evaluation from them, but we need to ensure that the client we’re working with is aided appropriately, that their hearing is addressed in a certain way so that we can move on from that. That’s one of those pieces of it. It’s a multidisciplinary team that I have to work with to figure out each of these problems, and obviously, you all are one of those team members. It’s not always the case, and the majority of my evaluations are – “It’s a checkbox: did they pass their hearing screening?
Yes, okay, we’re moving on.” When the answer is no, then we have to keep that in mind and match the features up based on that. That’s a really big deal. We go through a comprehensive evaluation, and one of the big things we have to look at is access methods. Looking at, how are they going to actually use this device? That’s going to be super important because efficiency has to come into play there. Obviously, speech is going to be the most efficient, then we kind of peel back from there. Are they literate? Because text to speech is going to be the next level of efficiency. “Okay, no, that’s not the case.” There may be a child – I work with children within the school district, so working with a four-year-old or something like that. Obviously, we can’t just slap them on Microsoft Word or something with text to speech and say we have solved the problem. Obviously not. We have to go deeper into it, so we’re going to use a device. Now we have to figure out, okay, what is going to be the access method for them? There are a bunch of different ones – how are they going to actually interact with that technology? And this could be low, mid or high tech, but typically, we’re looking at a robust vocabulary system, so it’s going to be high tech. That’s what we’re trying to get to. The whole reason for that, obviously, is so they can say what they want to say whenever they want to say it. We don’t want to do things like PECS or just a small communication board that has a handful of things on it. Obviously that limits what they are able to communicate, and communication is a human right, so we want to give them that opportunity to say whatever they want to say when they want to say it. That’s something we call, in our little tiny niche, SNUG: spontaneous, novel utterance generation.
They can do it whenever and they can make up new ones if they want to. They can put words together that I wouldn’t have thought that they wanted to say. Kids say things that I had no idea that they were going to say all of the time, which is great because they know how to use their device. Anyway. [0:09:33.9] DS: No, that’s great, yeah. That’s awesome. [0:09:36.2] SK: Wrap myself back in here. We’re looking at access methods, that’s where I kind of jumped off there, went on a tangent. The most efficient thing outside of what I said earlier, speech and then looking at text to speech using a keyboard, okay, we already ruled that out. We’re moving down the line here. Direct selection is going to be the most efficient next option and that is just using your fingers typically and just selecting icons on a device but doesn’t have to be your finger, direct selection could be a toe or something like that as well. There is a guy that I’ve seen speak a few times, his name is Chris Klein and he is an AAC user and he uses his big toes, I think he uses both big toes to select the icons because of his motor capabilities with his arms, he’s not able to do that so toes were the best access method for him. All right, we kind of went through a direct selection there. Let’s say we’re working with OT and the PT, and direct selection is really not going to be efficient, they fatigue very easily or they don’t have the motor control to do that. Okay, what’s next? We have a handful of other options that we look into, which is cool because not that many years ago, if they weren’t able to do that, it’s over, we don’t have the opportunity to – or the technology, especially readily available. Anyway, we have all these different access methods. So eye-gaze technology would be one. Infrared sensing or head tracking, and a little aside there is iPad access. Apple is doing a really good job of making things very accessible. 
Any iPad that you have has head tracking built in now, and that started probably a year ago or something around there, maybe a year or two is when it started rolling out. Makes it way more accessible. So infrared sensing or head tracking is another thing. Sometimes you’ll see people with a dot here on the forehead, or glasses with a dot on them, and that is helping the head tracking; the infrared sensor can then pick up on that. The way the iPad does it, it actually leverages Face ID, which a lot of people probably already use. They leverage that technology and it can actually just see where the face is, and then you can control a cursor based on your head movement, so you don’t have to have a dot or anything like that. Infrared sensing, head tracking. A joystick and alternative mouse would be another access method we would look into. Again, we have to look at motor skills and see. I’m working with a kiddo right now with cerebral palsy; direct selection was not an option, his spasticity was not allowing him to directly select, so we’re like, “Okay, let’s try a joystick.” Again, I’ll be a fanboy, I guess, of Apple here, because they have made it very easy to attach joysticks to iPads as well now. That came right around the same time as the head tracking thing. They’re really doing a good job with this. Joystick or alternative mouse, so we can start there. With this individual, we actually had a large foam ball – again, working with my OT and ATP friends to figure out what’s the best solution here. We added a big foam ball onto this joystick that he can manipulate and said, “Okay, this is the most efficient access method for him.” All right, joystick and alternative mouse, that’s another option. All right, then we keep going down here: single and multiple switch controls, and you may have seen people use switches to control devices as well. We always want to try to get multiple switches. Okay, let’s say we have ruled out all of these, we’re going switches.
If you can use multiple switches to control a device then you want to. If you can’t, then of course you will use single switches and that’s fine but multiple is going to be much more efficient. If you think about it, let’s just think about just regular switches, we can just, buttons, nothing fancy about those, just for now. If I had multiple switches, think about an AAC device. Basically a grid of icons, if I have multiple ones, I can use one as selecting, “Okay, I want this row and now I want this icon,” you can tap, tap, tap and get to what you want to do and then use the other button to actually select it. If you only had one single switch, then you’re going to have to use scanning. You’re going to have to use auto scanning, and then you have to talk about dwell time, it’s like, “How long is it going to stick on these columns and rows?” Then you hit, it’s like, “Okay, I want that column and then we’re going to go across,” hit the icons and okay, “I’m going to select that,” I missed it. Now we’re going to have to wait for it to reset and go back to the top. [0:14:11.3] DS: Yeah. [0:14:12.7] SK: That’s the reason why we would prefer multiple switches if it is possible because, just the efficiency. Again, we’re trying to figure out, as far as access method, what is the most efficient mode of using these devices? Because if this is what they’re going to use to communicate, we want them to be able to do it very quickly and without fatigue because, “Okay, I got a sentence done, it took me an hour and I’m going to lay down, I’m done communicating.” [0:14:38.9] DS: “I’m done communicating now,” yeah. [0:14:41.1] SK: Exactly. Of course, that’s not the case, we want to make sure it’s efficient and then you go into the weeds, and tell me if I get too deep into the weeds. [0:14:50.2] DS: No, we love the weeds here. [0:14:51.1] SK: You can reel me back in. Switches, you have all kind of different types of switches within there. 
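The efficiency gap described here can be made concrete with a toy model of row-column scanning. The dwell time and the example grid position below are illustrative assumptions, not specs from any real device or app:

```python
# Toy model of row-column scanning on an AAC grid. The dwell time and
# the example grid position are illustrative assumptions, not specs
# from any real device or app.

def auto_scan_time(target_row, target_col, dwell=1.5):
    """Single-switch auto-scanning: the highlight advances on a timer
    (the dwell time). The user presses once to pick the row as the rows
    sweep by, then once more to pick the icon as its columns sweep by."""
    highlights_passed = (target_row + 1) + (target_col + 1)
    return highlights_passed * dwell  # seconds spent waiting on the timer

def step_scan_presses(target_row, target_col):
    """Two-switch step scanning: one switch advances the highlight at the
    user's own pace, the other selects, so there is no timer to wait on."""
    return target_row + target_col + 2  # advance presses plus two selections

# Reaching the icon at row 4, column 5 (zero-indexed):
print(auto_scan_time(4, 5))     # 16.5 seconds of dwell at 1.5 s per step
print(step_scan_presses(4, 5))  # 11 presses, as fast as the user can tap
```

A miss with auto-scanning also costs a full cycle while the highlight wraps back around, which is the efficiency point being made here.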
Your typical buttons that you see, then you have things that are pneumatic or grip switches. Again, depending on their fine motor capabilities, you have to really problem-solve here too, which is a cool part of this. You’re really figuring out what’s the best thing for them. So a pneumatic or grip switch, where you just press or squeeze it and a device actually senses the air pushing through it – that’s their method of selection. There are a lot of things like pressure sensitivity, where you change the pressure. There are even switches that sense liquid – if you use your tongue and tap it, it can actually sense that, the smallest of movements there. [0:15:36.5] DS: Yeah, oh my goodness. [0:15:38.2] SK: Even eye twitches, or using your cheek muscles to activate it, you can do stuff like that. And even if you can use your arms but can’t actually press down, there are things that are called proximity switches – you just have to get close to it, so you’re moving your hand to it but you don’t actually have to depress it, you don’t actually have to touch it. And there are things like laser beams and all these different things you can use. [0:15:59.8] DS: My gosh. [0:16:00.7] SK: Electromyography, EMG, is another one, and you’re probably familiar with that. If you think about using your muscles, it’s sending electrical impulses to that muscle, and then it can sense that electrical impulse even if you’re not actually engaging that muscle. You can use EMGs for switches, and a combination of all of these. There are some situations where people are using eye-gaze technology but they’re also filtering through things with just a typical switch, and then they have an EMG on their other arm. So they have like three different access methods all in one, because that was what the team determined to be their most efficient way of selecting. Then there’s some stuff which is kind of a cool thing that you probably have heard of: brain-computer interfaces.
[0:16:46.3] DS: I was going to have to ask about that, I was wondering if we were going to get there. [0:16:50.4] SK: Yeah, we got there pretty quick. Like Neuralink and things like that. I mean, that’s obviously not something that we are functionally putting into place right now. There’s a lot of research, a lot of cool stuff outside of Neuralink, like EEG and things like that – so not as invasive, not cutting a hole into your skull – but EEG and things like that are used, though not commonly used in practice that I’ve seen. The technology isn’t quite there: very expensive, not reliable at this point in time, but it’s promising. Think of what 20 years could do. Look back 20 years and how far we’ve come. It’s super interesting to see. That’s really kind of the gist of it as far as access methods. That’s going to be a big part – we’re going to lean on those OTs, PTs, ATPs to figure out what makes the most sense – and then we start looking at individual applications, and then we start talking about linguistics and things like that. So it’s like, “What is this kiddo, what is this client, what is this adult ready for?” We have to look at all of the different features that these different applications have and try to match them to that individual. It’s a process called feature matching. And you can really talk about the hearing there too: does the device have an external speaker, does it have Bluetooth capabilities and things like that? That’s something that, again, I don’t really think about a ton because it’s typically not an issue that I have to deal with. If I’m working with a teacher of the deaf and hard of hearing, I kind of rely on them to chime in with that. But back to the feature matching, as far as the linguistics piece of it: there are really like three subsets that I think of when you talk about AAC apps. There’s really kind of a semantic group, as far as how they organize.
Semantic, syntactic, and then kind of a mixture of the two. I really like to look at – and tell me if you’ve heard of these or want more explanation – LAMP Words for Life would be more of a semantic one, and LAMP is an acronym that stands for language acquisition through motor planning. Everything’s grouped by what it means. They use this thing called semantic compaction: “This picture can mean a bunch of different things, so it’s under this picture,” and sometimes people think, “I don’t even understand.” It can be a little abstract. Again, it’s back to the efficiency thing. They have to organize all this in a certain way and they don’t want to duplicate, so they don’t want to have “there” in 17 different places within the app, they want it in one place. You remember that and you have that motor plan to get there. Again, language acquisition through motor planning, and that’s just muscle memory – that’s what motor planning is. You could think about your keyboard: if I took your keyboard and I just shuffled all the keys around, you would have, obviously, a pretty difficult time typing anything. That’s the whole idea. Motor planning. You have that semantic organization, and LAMP Words for Life is a really good example of that. On the other side of things, you have a whole subset of applications that are syntactically organized, which is really cool, because if you have a client that really has a lot of linguistic capabilities and they’re building long sentences, you probably want to lean towards one of these. TouchChat HD with WordPower is a really good example of that, because it is predictive. So I hit “I”, and then it knows what subset of words should come next. It’s kind of predicting what you’re going to say. So “I want,” and it’s like, okay, “He said ‘I want,’ so I’m going to show you these items,” or whatever.
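That predictive behavior can be sketched as a simple next-word lookup. This is a toy model with an invented vocabulary, not how TouchChat or any real app works internally; real apps use far richer language models:

```python
# Toy next-word prediction in the spirit of a syntactically organized
# AAC app. The vocabulary and transitions are invented for illustration.

NEXT_WORDS = {
    "I": ["want", "like", "need", "see"],
    "want": ["juice", "more", "that", "to go"],
    "like": ["this", "music", "you"],
}

def predict(sentence_so_far):
    """Offer candidate next words based on the last word selected."""
    if not sentence_so_far:
        return []
    return NEXT_WORDS.get(sentence_so_far[-1], [])

print(predict(["I"]))          # ['want', 'like', 'need', 'see']
print(predict(["I", "want"]))  # ['juice', 'more', 'that', 'to go']
```

After the user selects “I”, the app surfaces likely verbs; after “I want”, it surfaces likely objects, cutting down how many icons the user has to hunt through.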
It kind of can predict everything going on there and also has like a keyboard embedded in it and also, it has sentence prediction but also word prediction in there too. You start typing something out and it gives you a bunch of options that you can choose. Again, you have to kind of consider literacy in here too. [0:20:47.1] DS: Sure. [0:20:47.5] SK: How well can they read? Can they spell a little bit, enough to actually get some word prediction going? Again, this is another thing that you have to think about. All these different features, it’s really kind of complicated to get to the best one, the most efficient one for each of these individuals. We have semantic organization and on the other side, syntactic organization, and then you have a few that are kind of in between, something like Proloquo2Go is kind of a hybrid. It just feels like it’s in between, it does a little bit of syntactic, kind of prediction, not really prediction but it kind of sets you up to make the sentence a little easier than one that’s like LAMP words for life but it’s not really doing the prediction part of it and then they allow for things to be in multiple places to make the access a little easier so it’s kind of in between. That’s another piece of it, you’ve got to think about access methods, then you have to think about the individual app and think about all the different features it has and what’s the best one for that kid or – I keep saying kid but I mean, for the adult as well? It’s just the world that I live in. [0:21:51.8] DS: Sure. [0:21:53.8] SK: Is working with school aged kiddos. [0:21:55.2] DS: Is it a lot of trial and error to get to that point or how do you know? Because I’m sure new apps come out all the time and it’s like, if it’s going to – you have all of these different factors that could be impacting that. 
I mean, you probably have a pretty good catalog, like, “Okay, we have a sense of your linguistic capabilities, we have a sense of whatever your access method is going to be.” It kind of narrows it down a little bit, but still, there’s got to be some trial and error there, right? [0:22:20.2] SK: No doubt, yeah. We really start with a pretty comprehensive AT evaluation now. We make people fill out all this information, which is laborious on their end sometimes, but then we came – [0:22:32.0] DS: They don’t have their device yet, right? [0:22:34.3] SK: Right, the team – we’re asking the team to fill out all of this information and kind of give us all this information beforehand. [0:22:41.5] DS: Got you. [0:22:42.2] SK: Then we can, like you were saying, kind of get a gist of it, like, “Okay, we’re going to go this route,” but I’m really only making an initial guess based on that evaluation information. That is a starting point; we establish a baseline and we really figure out, “Okay, where are we going to go next?” And within our school district, we require at least three trials. We do three completely different trials, and that’s just a minimum, really. This is a dynamic assessment that is ongoing, so we’re taking a very long time to figure each of these things out. [0:23:12.4] DS: Yeah. [0:23:14.2] SK: We do a minimum of three trials, and we typically are looking at LAMP, TouchChat and Proloquo2Go. That’s kind of just our basic package, but then it’s like, “We did LAMP and that was not good, you know? That was not a good fit,” whatever. But we know the syntactic, the sentence prediction, that’s not going to fly either, and then we start taking some detours: “Okay, I know this app has that feature that we want, or maybe this one. He’s fairly literate, his literacy skills are going to continue to increase, let’s start with TouchChat.
Okay, he’s actually more literate than we thought, let’s actually move on to something like Proloquo4Text.” And Proloquo4Text is a really straight-up, very powerful text-to-speech app. You can store a bunch of phrases and words and things like that. It was like, “We really are going to be pushing literacy with this kiddo. By the time he learns how to use the icons in that app, he’s going to be way more literate, so let’s just hitch your wagon to Proloquo4Text,” or using something like [inaudible 0:24:15] – that’s where cost comes in. That’s a very expensive app, but Microsoft Word is very available, we could go there. “Okay, Microsoft Word is going to be a really easy thing,” and then we talk about access method again: “They can actually type fairly well, but their skills aren’t quite there.” Anyway, to answer your question, we do a ton of trialing. We make some guesses as far as where we’re going to start, because we can’t try all of these different things for every single kid – our evaluations would take five years on average, or something like that. We have to make some of our best guesses and then we learn from the trials that we do. But we always trial at least three things, because a lot of times, kids – and adults – can surprise you and you’re like, “I didn’t think that was going to be a great fit.” That has happened a ton. It’s kind of interesting, because even preference comes into play sometimes, like, “This four-year-old hated LAMP and they love Proloquo. I don’t know if it’s the symbol set, that they like the interface, they just like it better, but we’re having success with it. I thought it was going to be LAMP; it’s not, it’s Proloquo.” So sometimes that comes into play as well. [0:25:26.5] DS: I’m curious with some of these – a lot of these children are exposed to probably more text than another child would be, right?
Do you find that their literacy is kind of accelerated? Because, let’s say you start with one of the apps that’s a little bit more literacy-based, and you’re like, “They might do okay with this, but their literacy isn’t quite there, they’re pretty young.” Do you find that because they’re in this technology all the time and relying on it to communicate, their literacy kind of takes off a little bit because of that exposure? [0:25:52.5] SK: Sure, it really depends. That is the case a lot of the time. If they’re using it with fidelity, and if we’re comparing that same kiddo without it, I think that them using the device has definitely helped their literacy, because – [0:26:07.6] DS: Wow, yeah. [0:26:09.1] SK: The thing is, they’re being exposed to it, they’re reading these words. Because it’s not just icons, it’s icons paired with words typically, and some of these sight words, like “pan” and things like that. [0:26:18.4] DS: Sure. [0:26:19.0] SK: Most of them don’t really have symbols paired with those. So they end up learning those sight words, because they see these images, or just the words, and then they memorize those. You’re exactly right, but again, it has to be done with fidelity; they have to be actually using the application. If that is the case, then yes, the answer is definitely – it can be a way of increasing their literacy skills. [0:26:45.4] DS: Yeah, could you break down a little bit, who nowadays is a typical AAC candidate? Whether in the pediatric world or in the adult world, what kind of kid or adult are we seeing here? [0:26:56.6] SK: I’ll start with the kids and then kind of go up from there. Kiddos with autism – I work with a lot of kiddos with autism – intellectual disabilities, cerebral palsy, so those are kind of a big grouping right there. [0:27:12.1] DS: Yeah. [0:27:12.6] SK: And you have to think of each of those individuals differently.
With cerebral palsy, linguistically they may be within normal limits, and this is a motoric issue. Then we’re talking about, “Okay, what’s the best access method?” That’s when you really start digging into access methods, and we need to make sure they have it, because they have a lot to say; let’s make it most efficient for them. Those are three big groups there. A small group, but still prevalent, is selective mutism. That has been a subset that we have worked with, and we really try to pair that with a multi-faceted team, so working with counselors to maybe see what is underlying the selective mutism and then use AAC as a potential bridge to get to the other side of this. I mean, that’s what we would like to see happen here, so that they are actually speaking, but in between, there’s a time period, so use that as a bridge. That’s a little small grouping, but I mean, we see those kiddos every single year. They pop up, so we’re continuing to work with those kiddos. More so on the adult side, we are looking at more degenerative diseases: multiple sclerosis, ALS – amyotrophic lateral sclerosis – Parkinson’s, Huntington’s, so those are some big groups there. That’s definitely not the entire list. There are plenty of other degenerative diseases where motorically they may be deteriorating, but their brain is still there. Their linguistic skills are still there, and we just need some type of technology to be able to get that out for them. That’s kind of the group there that we work with a good bit. [0:28:49.0] DS: Interesting, and it makes a lot of sense to me too. Previously, I was in a children’s hospital setting and there was a fellow AT/AAC guy.
He was right next door to audiology and there was some overlap between patients – and I really want to get to where that overlap happens too – but I got to see a little bit of insight there, and I do think a lot of his caseload was children with autism and several children with cerebral palsy with, you know, kind of special motorized wheelchair set-ups. I really thought it was interesting – sometimes they had their motorized wheelchair but also a branched arm attached to the chair in a really interesting way to give good access, depending on, I guess, whatever their method of access was. That was really cool to see, but I could just imagine how creative you have to get sometimes, because the needs here are just so vastly different across anyone who might qualify for one of these devices. It’s really interesting. [0:29:40.3] SK: Yeah, absolutely. There’s a really interesting story there – you know, the crane, that arm, or the little goosenecks – because we could have set up a switch and they decided that next to their head was the best option for one of the switches, maybe the other side for the other. I’ve seen kiddos using switches on their legs, so they are pressing out with the side of their leg next to their knee for one switch and then in for the other switch, because that was their only reliable point of access, so it is really neat. Again, I have to rely on all of these different team members, because I wouldn’t have come up with that myself, you know? OT, PT, we need you to help us out – and then these guys that we work with every once in a while, ATPs, they know a lot about the technology. They know a lot about the switches and the mounts, so they can be really helpful to work with as well. [0:30:33.2] DS: That’s really cool. Okay, question for you. More of – this is me having a consultation with you. I had a kiddo this week.
He wears bilateral implants and he is six and he has some other developmental things going on, I’m not too sure. He’s not really my patient, I was just kind of stepping in for somebody, but this family speaks Spanish in the home and he doesn’t really have any kind of expressive language that I was aware of in the time that I saw him. I asked mom about that, I was like, “Are you guys learning sign or does he have an AAC device, like, how is he communicating with you at home?” and mom said – in Spanish, through an interpreter, so maybe there was something lost in translation, I’m not sure how much, there is a little bit of that – but she said, “He had one through school. He really liked it. It was working for a while and then he figured out how to get onto YouTube on his iPad for his AAC and then they had to take it away from him, and then they would give it back to him but he always got back in somehow.” He’s just one of those kids who knows how to navigate even when things are locked, he just knows. So I am curious, do you deal with that and what do you guys do? Because I know kids, it is just mind-boggling how much more they know about technology than I do, and I work with it all the time. It is just innate to them, so what do you do in these situations? [0:31:44.7] SK: Yeah, I know. That is constant, we constantly have issues with this, so yeah, we get calls like, “We don’t know what happened. The app is gone, the screen is green now and that’s all we can say about it.” Okay, so we go and pick it up. We try to hammer into the teams, like, guided access – keep guided access on all the time, and I know you have some young kids so you probably throw on guided access to make sure, “All right, you’re in this app, slap it on, you cannot get out of it.” Make sure that they don’t know the passcode to the guided access so they cannot get out of guided access, and it still happens. They can close out the iPad and just do a hard reset real quick and then they’re out of guided access.
And kids figure that out and so they do that. Screen time, using those features – we’re really trying to get this system called Jamf going within our school district. We’re almost there, we’re really working on it and hopefully next year we’ll have it really going. So Jamf is kind of a cool thing. It really comes in to help us after all that has gone down, so we try to use guided access and the screen time features to kind of restrict as much as possible. They have a lot of restrictions under the screen time option, so try to go there, so they really don’t have anything but they can do – maybe take the Wi-Fi completely off. We downloaded the app, it’s there, Wi-Fi is gone too, so we’ve done all of these different things, put it in airplane mode, things like that, and still things continue to happen regardless. I don’t know if there is a foolproof way of doing it. We have sat down and figured out all of the different steps, “All right, this is the most locked down possible,” and it still occurs sometimes, and so the Jamf thing is a really cool thing because I can, from a distance, on my computer right now, as long as the iPad is connected to Wi-Fi – so I guess I would have to get them to connect back to Wi-Fi – then I could do a reset for them. Sometimes kids change the passcode and then we can’t get in the iPad and the kid doesn’t even know what the passcode is. They didn’t do it on purpose, so no one knows the passcode, and previously, we had to send it back to Apple and that was like a six-month process of, “Okay, you know, get the passcode off of it.” With Jamf, what we can do is go in and just manually, boop, take the passcode off. I’m going to delete this app and I can do that from a distance. We’re almost there and I suggest any teams that are doing something similar, deploying iPads with AAC, to have Jamf or something similar so you have that backup. Have all of that guided access because that stops a lot. That does stop a ton.
Just have all of those things in place and then have a backup so that you can, from a distance, fix some things. [0:34:23.3] DS: I’m a huge supporter of guided access, we use an iPad. There are a few tests we do in audiology where the child just has to sit still for like a minute or so – sit still and quiet, I should say – and guided access on an iPad, and just pulling up a YouTube video or some kind of letter game, and then they can’t really tap anything, or you can select parts of the screen that they can tap just to progress through things. So they feel a little bit of control over it, but wow. Every time I pull that up with a parent it’s like mind-blowing, like, “Wait, they could use my phone and not mess with everything?” I’m like, “Yes, they can,” so yeah, huge proponent. Anyone listening out there who works with children, guided access is your friend. [0:35:01.4] SK: Yeah, no doubt. [0:35:03.7] DS: Let me ask you then a little bit about your experience, if you’ve had children with hearing loss on your caseload, what does that look like? What kind of condition are you usually seeing? I mean, I know I definitely have children on my caseload who have hearing loss and autism, hearing loss and CP. I have a young girl who has hearing loss secondary to meningitis and so she exhibits a lot of non-verbal abilities. I’m curious what your experience has been in that realm and kind of how you’ve navigated that? [0:35:28.8] SK: Sure, a lot of things that you just mentioned, paired with autism, CP, but also a list of syndromes – one kid I saw yesterday, Kabuki syndrome was the syndrome, and Rett syndrome. There’s a handful of other syndromes that we’ve seen throughout the years in kiddos that are non-verbal but also have hearing loss, and again, and I kind of repeat the same thing I said earlier, I’m leaning on those other team members, the teachers of the deaf and hard of hearing, to make sure we’re good. Did you read the audiogram correctly?
Is he aided appropriately? Do we have in the classroom what he or she needs? Sound field systems and things like that. Do we need anything as far as AAC to ensure that they can hear? Because, I mean, the auditory feedback is obviously extremely important for them as well, certainly, and I said that a little bit earlier about external speakers, because we have worked with companies where the AAC device comes with an external speaker, which can make it five times as loud as just the internal speaker on an iPad, because in a classroom when a lot of ambient noise is going on, kids, teacher, then it’s like, “Okay, you can’t hear this iPad whatsoever.” That is typically what we consider but again, we have good team members that we lean on to ensure that we are good to go in that kind of realm. [0:36:48.0] DS: Got you. I have a kiddo on my caseload who has a history of cancer and he is pretty immunocompromised and he did a year or two of school where he was remoted in – and this is pre everybody remoting into school – he was remoted into school, he was kind of on a tablet, on a stick on wheels. He was at home. I never got to see that play out but he talks about it so fondly, like he really thought it was the funniest thing, he loves it. I’m curious, is that kind of something that would fall into your scope? Is that a different person at the school? Because that is like an assistive technology but not necessarily just communication. I’m curious about that. [0:37:23.7] SK: Yeah, and so we’ve had that situation and I didn’t really have much to do with it. I am doing more AAC stuff at this point in time. We have an AT lead who kind of handles a little more of that kind of stuff and most of the time, my head is down doing AT evals, trying to serve kids and things like that. I know that occurred a few years ago, almost the exact same thing that you were referring to.
It was an immunocompromised student, way before COVID. I don’t know the details of the situation but they were running around. They were controlling it from home, yeah. [0:37:59.5] DS: Yeah, you could roll around the room. I mean yeah, he was fully in control of it. It was really cool. [0:38:03.8] SK: Yeah, it was the same deal with the iPad and wheels on it and I saw it, but technology – because we have Ed Tech, we have IT, instructional technology within our school district, which is a little bit different, which is kind of confusing sometimes, and then you have AT. We have three different settings and I think that was under IT, instructional technology. They kind of ran point on that because it wasn’t even – I don’t think the child had an IEP or anything like that, so you know, it is kind of one of those technical things. [0:38:33.0] DS: I see, yeah. [0:38:33.8] SK: Things like, “No, this is yours,” which I would have been glad to be a part of that. [0:38:37.7] DS: Yeah, that would be a cool gadget to check out. [0:38:40.1] SK: Right, yeah, that would have been fun but yeah, so I have heard of that situation. It is kind of interesting. [0:38:45.6] DS: Sure. Okay, another question I have for you, because I know you mentioned that the apps can be really expensive. I mean, iPads fortunately, tablets in general, are becoming more and more inexpensive, but they are still not very cheap. You know, they can be pretty cost prohibitive for a lot of people and I know that the technology, these apps that are being used, are probably extremely expensive, and then I guess the more advanced in terms of technology that you need, or I guess the more complex access methods in terms of joysticks and these kinds of things, sound pretty expensive too.
I am curious because I know you’re in the school system and one of the really frustrating things in audiology is a lot of children need things like a hearing aid and if they don’t have Medicaid, most private insurances, at least in the State of South Carolina, don’t cover hearing aids whatsoever. So I am curious, what has been – when I was in Georgia, some students, some children, their private insurance wouldn’t pay for their hearing aids, so they didn’t have any personal hearing aids, but when they went to school, it was in their IEP, so they had school hearing aids. Then they didn’t have anything at home and so it was like a mind-boggling situation and extremely frustrating. I’m wondering if it is a similar thing for you guys where, I mean, a hearing aid is an extremely expensive piece of technology and an AAC device is an extremely, I’m assuming, I’m just going to learn more about that, expensive piece of technology. How do you guys navigate that world and what are families doing right now? [0:40:00.6] SK: Sure. I mean, again, it’s a lot of problem solving. It’s very expensive. Yes, I mean, you’re absolutely right. I mean, all of these apps are at least $250, $300, and that’s only the software. You know, that’s on top of the hardware already, and if you have any type of warranty, a case, a tempered glass, a kickstand, a strap, you know, it’s like, “Oh okay, we just spent a thousand dollars very easily.” So we have funds within – so that’s one part of it – within the school district we have some funds, and it is based on student need. If it is in the IEP, you know, by law you have to provide that. That’s part of our team's purpose really, justifying equipment and saying, “Okay, we need this and this is the reason why we need it for this specific kid, blah-blah-blah-blah.” That is one part of it and we have a lot of kiddos where we have to determine, does this make sense to go home?
Like you’re talking about, these were the school’s hearing aids but they didn’t go home. Same deal, it doesn’t make sense for this device to go home and go back and forth, so we have to look at those individual situations, but then another thing we run into with that is broken iPads, lost iPads, the little brother grabbed it and threw it across the lawn or whatever. We have to really kind of determine what is appropriate and what isn’t appropriate and that’s hard, you know? In this, we talk about each of these individuals. I feel like I’m on repeat there but we have to look at each individual situation to determine what makes sense. Something that we started just really last school year – so we weren’t allowed, and I think this was kind of like a school district policy that we changed over time, so we weren’t really allowed to do any private funding. We couldn’t do Medicaid funding for devices or anything like that, so we were purchasing them. And so the AT team pitched, “Hey, well, I mean, these kiddos need this device,” and then it becomes their device, working with a company, tech support with that company, replacing devices. Then it can go back and forth, they can go home, it could be with them over the summer and then it can come to school if the parents agree to that, and then they can have it all of the time, and we really started pushing more of the older kids too. Because it’s like, what is going to happen when they graduate? They are turning 21, they’re graduating, so what are they going to do now? [0:42:21.8] DS: Yeah. [0:42:22.3] SK: You know, we provided it this entire time and this is all very complicated to figure out. Finding devices is not easy, so just, “Good luck, parents,” you know? It’s hard enough for me to figure it out working within the field. Sometimes it’s like, “Oh, you needed that piece of paperwork? I’m sorry, I’ll sign it and get it to you.
How am I still confused?” but anyway, I feel like it’s unrealistic to expect parents to be able to navigate that world. Anyway, we’re trying to help out with that and provide these Medicaid-funded devices. Private insurance sometimes covers it but that’s a battle in and of itself right there. At this point in time, we’re doing all of it. We’re trying to fund it with the school funding. We are trying to do private devices where it makes sense, and again, we’re looking at those individual situations to say, “Okay, what’s the best thing that we can do for them?” and then we start trying to do it but yeah, a big portion of my job is figuring all of that out and trying to get funding for all of these different things. If my other team members were here, they would be like, “Yep, yeah, we’re always asking for things.” We’re always trying to get things to work because funding is always going to be an issue. [0:43:33.7] DS: Got you. Wow, yeah, I mean, as someone who is also in the technology world on the other end, in the hearing spectrum, like, it’s awful. It’s so bad that oftentimes it’s someone who needs it the most that you’re having to fight the hardest to get it for them. Yeah, I definitely feel that struggle. [0:43:49.1] SK: There is something that just actually happened, I mean, this is timely that you’re asking about it, because Medicaid within South Carolina just decided to deny all DME AAC devices for no reason other than it is out of network. If it wasn’t produced within South – yeah, yeah. If it wasn’t produced within South Carolina, then it is out of network. We don’t produce any AAC, you know, speech-generating devices within our state borders, so all of it is going to be out of network. I’m not sure how that happened or why it happened but we’re getting it fixed.
You know, as kind of an aside, working with SCSLHA, the South Carolina Speech Language Hearing Association, you know, we have a VP of governmental affairs, we have a lobbyist that is on staff, so we work with them as well as just passionate AAC SLPs within the school – within the state, excuse me. Sharon Steed jumped in and was a big player in that, so we’re not done. We haven’t put a bow on it yet, but it’s been a few weeks of just outright denials for no reason other than out of network, so that kind of threw an extra wrench into it, which was completely unnecessary. But probably by the end of this week, or it may bleed into next week, we should be wrapping that up where they’re no longer doing that and hopefully fixing that for the long term. It’s like, “No, we cannot.” There are already too many barriers, let’s not add some more. [0:45:17.0] DS: Absolutely, you would think that, you know, Medicaid might be old reliable in most of these situations, they typically are there for these kids that really need it, so yeah, that’s extremely frustrating. Another thing I wanted to ask, so for audiologists like myself, honestly, since I’ve been in my current role, I don’t think I’ve had any kiddos come into an appointment with an AAC device. I have seen it elsewhere but I don’t think I have had anyone come in, and I’ve had a couple who mentioned that they are working with one but for whatever reason, it’s like not there when I see them. For any audiologists, or even SLPs who don’t primarily work with AAC, or audiology students, or whoever is in this realm who might brush up against these technologies but not really specialize in them, do you have any tips or advice or things to look out for, things you wish these other professionals knew who might have it misconstrued or misunderstood? [0:46:07.7] SK: Sure. If you are working with a kiddo that has an AAC device, my suggestion would just be to model.
You can model directly onto their device and it’s called, the fancy research term is like aided language input or aided language stimulation. Something to think about, and I know you have a young child at home, a very young child, as I do as well. [0:46:29.8] DS: I was going to mention that. I was going to say, anyone out there who thinks we sound tired or anything like that, there is a reason. We both have a little one. I think yours is maybe three or four months? [0:46:39.5] SK: Five months, she just turned five months. [0:46:41.6] DS: Yeah and I’ve got a two-month-old, so you and I, we’re barely scraping by but yeah, go ahead, you were talking about modeling. [0:46:48.2] SK: Yeah, so for our little kiddos, obviously we’re providing a lot of linguistic models for them. We’re talking to them, we’re reading books to them, we’re singing songs, we’re doing all of these different things. For AAC users, they’re really not getting those models for the way that they are communicating, right? They are trying to have output through this device, but have they had any input to that device? Probably not. You are really showing them how to do it and where to do it on their device. So, how do you get this thing done? As you are talking, we typically want to model just above where they are currently expressively. Let’s say that they are using one word at a time, that’s kind of their MLU, just one word. They say a handful of things, just one, so maybe you want to speak to them and then you say, I don’t know, “I want to eat a hotdog,” or something and then I would model on their device “want hotdog” or “eat hotdog” or “want eat” or something like that. You don’t have to model your entire complex sentence because that’s going to be like, “Okay, this is way too much,” but you are modeling just above where they are currently expressively with their device, so that would be the thing that I would suggest to all of these different team members that may not feel comfortable with AAC. 
Don’t be afraid to get in there and model, and don’t be afraid to mess up. Obviously, you don’t know where things are. You have not developed those motor plans that we were talking about earlier, that’s okay. They probably haven’t either, just don’t worry about looking stupid or anything like that. Just start messing around with it and I’m sure that kid will appreciate it, and then they’re like, “This is cool. They are using my device too.” So modeling is probably the biggest takeaway that I would say. [0:48:31.1] DS: Yeah, that’s a really great piece of insight that I wouldn’t have thought of. I feel like – [0:48:36.0] SK: It’s not, yeah. It’s not an innate thing, yeah. [0:48:38.8] DS: Yeah, what’s really funny is, I feel like it’s a great parallel because my thought would be, “I don’t think I should touch this thing, this is probably like an expensive tablet and it’s their thing and it’s set up for them and I don’t think I should be messing with this,” which I know – my wife is a teacher, or other people who, I have a friend who is an SLP – they have somebody who has a hearing aid and they’re like, “I don’t want to touch this thing. This thing is expensive, I can break it, I don’t know,” you know what I mean? It’s really funny but I would say, “No, touch it, like, listen to it, you’ve got to check on it,” you know what I mean? It’s the exact same – [0:49:07.5] SK: That’s funny. [0:49:08.5] DS: Mentality, where it helps normalize it for them and it helps them have a little bit more advocacy for themselves and be like, “Okay, other people know what this is and this isn’t so strange to anyone else.
They know what I’m going through.” I think that’s a really great insight and something that – [0:49:24.1] SK: I think I probably will steal that in the future because I would be the same way, like, “I am not touching your hearing aid, I don’t want to break anything or make anybody mad.” I guess with the caveat of, make sure, if they’re a young child and their parent is there, make sure it’s okay with the parent. You may anger the parent or something like that, so make sure you ask permission before – [0:49:43.6] DS: Always check. [0:49:44.9] SK: Always check, let me say that. [0:49:45.9] DS: It goes both ways. Yeah, always check. [0:49:47.7] SK: Yeah, always check, but if it’s okay, it is definitely a good thing to model. [0:49:52.1] DS: Perfect, any other things you want to share? I wanted to ask you a little bit about Speech and Language Songs. Just a really cool YouTube channel, a great resource for families and, I guess, children who are – and I am not sure if it was specifically set up with AAC in mind or if it was just more about developing spoken language skills. Some of the videos I’ve seen, I don’t really – I’m like, “I wonder, because I know he does AAC,” like, is this related to that or is this just, I’m thinking of, like, “he goes,” kind of that language modeling? Could you break down, I guess, how you started this, what it is and what your goals are? [0:50:25.2] SK: Yeah, the main piece to it is core vocabulary songs. So targets that make sense for language development, even if it is oral language. Obviously my main focus is AAC, so I was thinking about it within that lens. I was pushing into classrooms and that was kind of the initial impetus – I’m pushing into classrooms, I am creating these lessons, so I would go and actually play the songs with a guitar and sing, and then I would have like a footswitch. I was using a switch to rotate through the slideshows. [0:50:57.3] DS: Oh, cool.
[0:50:58.3] SK: The teacher – one teacher, this was a few years ago now – the teacher was like, “I wish I could do this when you weren’t here.” I was like, “Oh okay,” and so then I started recording those songs and sending them, and honestly, initially it was just easier to put it on YouTube. I can just share a link versus, “I’m going to send you an email with this file.” “I lost the file,” or the file didn’t go through, or, “Oh, I’m going to create a page,” and then the other – I was just sending it to those individual teachers and then other people started using it, but I didn’t know, and commenting and sending me emails. I’m like, “Oh, I didn’t realize that someone else would actually look at this,” or you know, it just didn’t really occur to me and I was like, “Oh okay,” so then I started doing more of those, but the focus is really AAC and core vocabulary for the majority of it, but then I have like basic concept songs and articulation songs. Whatever I am kind of interested in doing, I would jump out and do that, but my main focus in the future really is going to be AAC. I want to redo a lot of those songs that I have that are just text – and I am starting that process now. I’m working a ton over the summer to create a lot and then kind of drop them over the course of the next school year. But they are going to be with different symbol sets. Using Unity, you know, the PRC symbol set, with LAMP Words for Life. Do those same songs, but now I have those images there, so if a kiddo is using that system, they can see and recognize familiar symbols. [0:52:23.6] DS: Their own symbols, yeah. [0:52:24.7] SK: Yeah, and then it's kind of like core vocabulary karaoke, so the kids can play along with it. Or if we’re talking about modeling or aided language input, the paraprofessional, the teacher, the SLP can be in there modeling and showing them right along with the song, and it is meant to be kind of slow, redundant.
So we have a lot of practice going through these pretty simple sentence structures and I try to really focus on core vocabulary, words that are going to be used across contexts, versus a bunch of fringe words that won’t be used that often. [0:52:57.1] DS: Got it. It’s really interesting. The first time I listened to it I was like, “What is going on here?” because it had a lot of views on YouTube and I’m like, “Am I missing something here? The song is pretty good but it’s kind of simple in terms of the words here.” But now it makes more sense in terms of the set-up here and what you're modeling. I think the karaoke idea is really cool, letting them fill in the blanks, but they’re also singing along in their way. That is really cool, that is really exciting. [0:53:24.3] SK: Cool, thank you. [0:53:25.7] DS: Yeah. Well, thank you so much again for joining me. I mean, this is super eye-opening. This is something I’ve always been so interested in. I’m like, if I ever did the SLP track, I would have to do this because it’s all the technology and it’s fun, and the troubleshooting, it just sounds like definitely a lot of fun, and you break it down really well. [0:53:42.5] SK: I really appreciate you having me on. Yeah, thank you. [END OF INTERVIEW] [0:53:47.7] DS: That’s all for today. Thank you so much for listening, subscribing and rating. This podcast is part of an audio course offered for continuing education through Speech Therapy PD. Check out the website if you’d like to learn more about the CEU opportunities available for this episode as well as archived episodes. Just head to speechtherapypd.com/ear. That’s speechtherapypd.com/ear. [END] OTE 21 Transcript © 2021 On The Ear Podcast