
AI Avatars & Video-Based Learning: In Conversation with Kevin Alster


Welcome to CommLab India’s eLearning Champion video podcast featuring Kevin Alster, Head of Synthesia Academy. With over a decade in emerging tech, Kevin has crafted learning solutions for institutions like General Assembly, the School of the New York Times, and Sotheby's Institute of Art. He also shares innovative learning and technology experiences in his podcast ‘The Video Learning Lab’, and is “really obsessed with the power of using new forms and formats of technology to help people understand and make sense of the world.”

CommLab Podcast with Kevin Alster - Meeting Recording

0:01 – 1:12
Hey there, welcome to the eLearning Champion pod. I'm Shalini, your host for today, and I'm very excited to have with us our speaker for the day, Kevin Alster, who is the Head of Synthesia Academy. His mission is to help enterprises close the gap between understanding information and putting it to work. Kevin has over a decade of experience in education and in leveraging emerging tech for video communication, and he's crafted learning solutions for notable institutions like General Assembly, the School of The New York Times, and Sotheby's Institute of Art. Kevin also has his own podcast called The Video Learning Lab, where he talks to incredible guests about innovative learning and technology experiences. So, a very warm welcome to you, Kevin. We are thrilled to have you with us, and we'd like you to share a little bit about yourself to add to what I've shared.

1:13 – 2:25
Thank you, Shalini. It's always strange to hear your work repeated back to you. Whether it's in education or looking at learning through a technology lens, I think what's always stood out to me is how our relationship changes when we encounter different types of information. Throughout my entire career, that's really what it's been about, whether it was in the K-12 classroom, seeing how computer-based learning can help students read better, understand letters and sounds, and begin literacy. Or with the School of The New York Times, where it was: how do we leverage this expertise, and how can we use video and production and experiences to bring readers closer to the expertise of the Times and help them understand things like what a critic like A.O. Scott does with movies differently than I do, and how I can watch movies better? I'm really obsessed with the power of using new forms and formats of technology to help people understand and make sense of the world, and more realistically, follow their curiosity. So I'm excited to keep on doing that, and now I'm doing that with AI video and, so far, so good.

2:26 – 2:40
Great. So Kevin, can you share a little bit about the evolution of video-based learning?

2:41 – 3:10
So I think video-based learning is something we see in so many formats now, whether that's on mobile, or you're learning on YouTube or even through TikTok, or you're taking a masterclass; there are so many different forms and formats for video-based learning. Sometimes I like to go in my time machine back to 1990, which is where I first started interacting with e-learning, when the goal of e-learning was really to replicate lecture-based training. In the early 90s, it was: 'You couldn't be in the classroom, so how can we still get the information of the lecture out to you?' We could record that lecture, and you could watch it in the library on your own time. So video-based learning really started out as recorded lectures, and a lot of the early e-learning reflects that. A lot of it seems lecture-based, where every module is broken down into 'here's your objective, here's the knowledge you're going to learn, or the skills you're going to see being done, and we'll assess you with a knowledge test afterward to see if that video made the information stick in your head'.

3:11 – 6:56
Right.
And then where I think things get interesting is around 2010, because that's when we start to see a lot of video-based learning that's more on-demand and embedded in things like learning management systems and learning experience platforms. What's exciting about that is they're no longer recorded lectures or highly produced educational or training videos. Instead, they're made by everyday professionals with 4K cameras and webcams. So from about 2010 to 2018, we start to see more and more content becoming available, and we suddenly had access to different types of experts featured in video. Things really explode from 2018 into the present, because the cameras everybody is using now fit in our pockets and are embedded in our computers. So you start to see a lot more user-created content, whether that's internal within your company or external on YouTube. We see all these different use cases for video-based learning exploding, such as looking up skills on YouTube (if you need to learn how to make a pivot table in Excel, you can find somebody who does that on YouTube), or podcasts on Spotify, much like we're doing now, or knowledge management and capturing expert knowledge, or even webinars. And these are done by everyday professionals rather than super polished on-camera folks. So if we look at the evolution of video-based learning, it started out as really trying to replicate what we see in a classroom, lecture-style setting. And as the technology has gotten easier for us to use, with cameras in our pockets and on our phones, and as the platforms have gotten better at serving up this type of video-based learning, we suddenly have all these different types of video instruction, from entertainment and edutainment like TikTok and some microlearning, all the way to the other end where you have entire courses that will teach you generative AI.
We've got different skills and objectives, but also ways to practice, that have evolved from that lecture style we saw in the beginning, so it's been an interesting time. Where we are now is where I'm most excited, because what we're able to do with generative AI is say, hey, we're actually going to make it even easier for you to create video. We're going to take the camera out of it. So, Shalini, you and I don't have to be on camera, we don't have to get out our microphones. And for certain types of instructional media, we can use an AI avatar, which is a photorealistic person, to stand in for us as a presenter, or facilitator, or course guide, and build different types of video experiences. So we're just scratching the surface of what that means now. But that really takes us into 2024.

7:02 – 7:29
Right. And I think we all remember the early days when the avatars were more robot-like, and now you can hardly make out the difference; they're so realistic and so human. As you shared about how video-based learning really got started, the whole intent was to replicate the real classroom virtually in some sense and pass it on. And one of the biggest limitations of conventional e-learning is actually this absence of the live instructor.

7:30 – 7:38
Yes.
So how has avatar-based learning revolutionized the game and helped replicate the real classroom virtually?

7:39 – 9:54
Before getting into avatar-based learning, there's one thing I want to add about this legacy L&D has of building learning programs based on the lecture style. I think in this decade, we're actually catching up to the fact that giving somebody a knowledge test at the end of a learning experience is not an effective way to get them to apply knowledge concepts to work. We're not asking people to pull knowledge out of their heads. Instead, we're asking them to do a task for a particular job, and that's really what we're using to measure the effectiveness of our learning programs. And this is where I think we can talk about how avatar-based learning is changing the game. I think there are two main ways it's doing this. The first is that avatars provide a more personalized space and experience that makes you feel you're not alone. As we were talking about, up until 2010, a lot of the e-learning we see is very much just going through slides and delivering information in one direction, often with a recorded voice, so the delivery of the e-learning really doesn't differ that much from the in-person version, except that with e-learning you're often just watching. You don't even have the presence of the instructor, you don't have the engagement of seeing what they're crafting in real time; you're just watching something on your own time. But that's not to say there wasn't a really engaging experience before 2010. If you wanted a really premium experience, where you'd go instead was things like executive education, or conferences and keynotes with a high budget or production, where you really have these engaging performances that make people feel special. Now, why didn't we have those for everyone until now? Because we had a lot of obstacles, such as cost and editing skills, or the capacity to make these things.
So as we were saying, with AI avatars we're lowering the threshold for what it takes to make a polished video, so that video overall gets easier to create. And there's a standard of what that could look like, a better standard of what good looks like.

9:55 – 12:12
It's just like when we all had camcorders; I think about my mom recording my old cello recitals, where it's shaky and jittery because she doesn't have a tripod. But now we have video stabilizers in our cameras that make for smoother video. When we're talking about this idea of presence and what makes people not feel alone in e-learning, this is where we have an opportunity to add more avatars. What this might look like is, let's say you have 300 new employees going through a new onboarding experience. Rather than giving them a series of emails or documents to consider, you can now give them each a video with their name, in their home language, with the click of a button. We forget how powerful an experience it is to be welcomed and to hear your name, even if it's just in a presentation or in a conversation like this. I think this is where people are seeing the most potential for AI avatars right now. It's not that suddenly we're going to have AI tutors everywhere, or that you're going to clone somebody and they will train people for you. Rather, we're thinking about more of an engagement, marketing, and personalization approach to using video.

So those who weren't able to create these videos before now can. But where we're starting to see avatar-based learning really come to light is when we get to this idea of 'we now have something that didn't exist before'. In its current form, that is large language models and programs like ChatGPT, which allow people to actually converse with information, to dialogue with information in their own natural language. This, to me, is where AI avatars as a face for those conversations and discussions show a lot of potential, whether that's in video, or AI tutors that get skinned over with an AI avatar, or AI avatars serving in different pedagogical roles, which we can get into later in the conversation.

12:13 – 12:55
Thank you, Kevin. That was really interesting, and it's true that the use of avatars in videos has in a sense democratized the learning field a lot and made more people feel included, those who couldn't or can't attend conferences, for reasons of cost or timing, and so on. It's brought right to their doorstep in a very warm, welcoming way. And I think that plays a very big role in making sure they buy into whatever they're being presented with and eventually, as you said, apply it on the job.

12:56 – 14:06
And I think it's important to mention as well that avatars themselves are nothing new, if we're talking about an avatar as a representation of a person, character, or particular role. We've had avatars before in different pedagogical roles, but they've been cartoon characters, and in this day and age, if you work at a Fortune 1000 company and you have a cartoon avatar for compliance training, part of me thinks, I don't know if that's really appropriate for this context. So that's one thing. But also, industry-standard tools like Articulate Storyline and Rise, tools we use every day, have avatars too, but they're more static characters that almost look like 90s infomercials. It's not to say that the new kind of avatar replaces everything; it's still just a chest-up representation, but seeing it express itself is something much more powerful and engaging than a static picture of somebody looking confused, or an animated character.

14:07 – 14:37
The gestures, the movements, the expressions, they all collectively add up to a very engaging experience. There's no doubt about it. You've shared a little bit about generative AI and its role. So what role do you believe technology per se should play in modern learning and development strategies? And how do we leverage them effectively?

14:38 – 17:33
I'm very aware that this is a very big question. There are two parts of it that I want to answer.

The first is how we use the technology. And the second is what I call the learning dream.

So the first part is the role we're seeing tech play in modern learning and development strategies. I think it's always important to acknowledge that the learning and the learning theory come first; that's what we're paid for our expertise in. We're not there just to teach courses; we're supposed to be experts in learning theory, in how the brain takes in information and puts it into action. And it's important to look at technology and the different formats we use and understand how the format we're applying can help humans toward a particular outcome. The example I like to frame this around goes back to the 90s, because that's where I really started interacting with technology. If you wanted to go on a road trip from Harrisburg, Pennsylvania, where I was born, to Saint Louis, Missouri, my mom would be in the passenger seat, and she'd have to pull out a different paper map for each state and trace her finger along the route. If we crossed state lines or went to a new area, she'd pull out a different map. But I remember distinctly that in the 1990s there was this thing in the States called MapQuest, where you could look up your directions and print out a series of instructions that my mom could read out. Each instruction told you how far to go and what type of turn it was. And that changed how we got from point A to point B, because my mom could just refer to a single document. So the task was getting easier and easier. Then in the mid-to-late 2000s, we suddenly had Google Maps on the iPhone, trying to achieve that same task of getting from point A to point B. But now, as I'm driving down the road, I don't have to look down at my directions and say, oh, I'm on step 7 or step 8. I'm being given information at the right place at the right time. Again, it's the same task. But when a new technology comes along, it allows you to achieve a similar outcome in different ways.
And I think that is our role as learning and development professionals. When we look at a tech stack or a new technology coming out, it's not for us to suddenly think incredibly outside the box and build things that aren't in our skill set, but rather to look at the jobs and tasks employees are trying to get done and figure out what actually changes with this new piece of tech. We don't have to adopt everything that comes along, but what does change? Every time the formats change and new technology becomes available, the way we present information matters to the employees we're impacting.

17:34 – 18:28
So what does that mean for us now, with AI video democratizing who can use video to communicate information? Well, it's not going to change everything drastically overnight, but I think what we're seeing from a lot of SaaS companies and from news organizations is that we're living in a video-first world where we expect text or video depending on what we're trying to get done. If you look at The New York Times, something they do a lot now is serve up an article that takes about 10 minutes to read, and right next to it a 2-to-3-minute video with the exact same headline, designed for people who are on the go or on mobile, versus those who are able to read on their laptop or in another format. So that's the first part: when the technology changes, so does the way we deliver information.

18:29 – 21:11
The second piece, which I call the learning dream: we know that the ideal situation for humans to learn is to be trained one-on-one. This is what's typically referred to as Bloom's 2 Sigma problem, the finding that students who are tutored one-on-one outperform their peers in group instruction by almost two standard deviations. The huge technological shift we've seen is in making one-on-one instruction possible, and this is not just for K-12 but for professional learning as well, where ChatGPT and large language models allow us to rethink who the other people are that we're sharing learning with, how we can create more opportunities for them to engage with information, and also, as Guy Wallace puts it, how we institute more purposeful practice. When we're learning things on the job, we need opportunities to practice in a safe place, where it's safe to fail and get feedback on our performance, so that when we apply it to the job, we have some experience trying to use a new skill or complete a new task toward the outcome we're driving at. I think we're seeing a lot of learning and development folks who are really driving toward this idea of: how do I build a one-on-one tutor with the information and experiences I have available, and what are the different roles I can use an AI avatar to embody? Is it a coach? Is it a mentor? Is it a fellow employee who's performing worse than the person going through the training, so that you can give feedback? There's a really big paper by Ethan Mollick, who's been driving a lot of this conversation about how we educate with AI, called 'Instructors as Innovators', where he details different ways you can use ChatGPT to build custom GPTs, to build different types of simulations, or to put your students in different roles.
So I would definitely suggest folks check that out if you're interested in this learning dream of being able to scale ourselves and bring more of a one-on-one experience to our users and learners.

21:12 – 22:12
Thanks, Kevin. I think you touched upon very interesting aspects, and I really like what you shared about not getting carried away with every shiny new toy that comes along, but seeing how we can apply it in the here and now, given the environment we're in, the limitations, the opportunities. And you're right, the learning dream sounds more and more achievable by the day, thanks to these leaps of technology. I'm sure having a wide variety of avatars also helps in defining the exact kind of learning experience, because now it's not a cookie-cutter approach; you have so much choice, and you can use a variety of them. You're not stuck with just one, so you can make it a very interesting mix.

22:13 – 23:52
I think when it comes to the conversation around AI, the question is: what does AI allow us to do differently? Well, one of those things is scaling yourself, figuring out how to scale your different roles as an L&D professional across the organization. To some you're a technologist; to some you're there as comms, keeping people informed and aware; for some people you're an authority. Using AI avatars allows you to divide yourself, or even clone yourself. If you need more video, or different types of asynchronous video messaging, you can clone yourself into the role of somebody who can speak 160 languages if you're trying to connect with a global audience. That's what I think is new: these AI avatars allow you to scale your presence, your team, yourself. So it's definitely an exciting time.

Yeah, thank you for that, Kevin. So, where do you think this kind of avatar-based video learning is particularly impactful? Are there any specific areas or training topics where it's not very suitable, for instance, sales training? And is this format more popular with one kind of audience as opposed to another? Is there anything you can share about this?

23:53 – 26:18
Yeah, for sure. So I think there are a couple of considerations. Just like with any other format, there are different affordances, or high-level questions you always ask when you're trying to decide whether a format is a good fit for the information. With AI video, you look at the current state of the technology: how well the avatars can perform, how quickly a video can be made, and what framings are available. Right now, avatars are not going to gesture in any way that's effective if you want to do a game show or something like that; they're very much limited. It's a very expressive but also a very still performance, framed from the mid-stomach up to the top of the head, because that's what the tech is trained on. So what questions should you ask yourself about what's a fit for AI video? The first is: is this a message that needs to go out to more than 5 people? Five is a bit of an arbitrary number, but it's meant to capture the fact that if you're just dealing with one or three people, it's better to get somebody on the phone or on a Google or Teams call. Nothing beats that at the moment; that's going to be your best bet.

So, is it going to go out to more than 5 people? Because then we're talking about scaled messaging. Is it going to be high in volume, meaning you'll need to send out lots of videos over a period of time? For example, if you're doing data storage or data reports, you want to be able to report on that week over week, and maybe a 2-to-3-minute video captures that bit of your presentation better than sending an entire data dashboard out to your team. Another big question to ask is: should it be in multiple languages? This is a newer area, but we work with a lot of global teams who, with the click of a button, can send out the same content in up to 160 different languages. If you're at a Global 1000 company with more than one audience, you probably should be serving that content up in another language.

26:19 – 29:00
Right.
Also, what is the nature of the information you're trying to serve up? At this point, it should be instructional or informational in nature. We have a lot of folks who approach us about marketing; they want to develop marketing content, but the avatars are not yet able to express the whole range of emotions. We're also still trying to figure out what we actually want to allow people to produce realistic video about when it gets into marketing: how do we know whether the information they're saying is truthful or not? So we really stay in this area of instructional and informational content, anything from compliance training to airline safety videos to HR and internal marketing announcements. What fits into this category is a lot of sales enablement training, where you're trying to onboard folks very quickly and get them to understand different frameworks and approaches to engaging with customers. Another big use case is product updates, especially at a SaaS company. It's not like the iPhone, where you get an update every month; often there are changes to features and processes happening daily, and if a change requires an explanation, a screen recording, or some sort of visual to help people understand what's changing, that's where it's a good fit. Another good fit is data reports. We're seeing this a lot with large organizations where business leaders need to make data-informed decisions. You don't always have a data analyst on hand to whip up a report. Well, now you can use AI video to compile a report and have it narrated by an AI avatar to help facilitate understanding of the information. It's not going to replace an in-depth report, but sometimes you need just the right amount of information, and you need it on the go, in video. We're also seeing it used for internal comms.
One of the biggest problems in HR, and more specifically in learning and development, is that we're not the greatest at marketing our resources. We push resources out and hope people will look at them, or we're dying for folks to go to our learning management system to see the new course or resource we've built, but we know that probably isn't going to happen. So creating marketing videos for your internal resources to build engagement is a really great place to start.

29:01 – 32:20
The last thing I'll say is that AI videos are an especially great fit for employees who work without computers. This would be warehouse staff or frontline workers, who aren't on a desktop or a laptop but on their phones, where it's really hard to serve up text-based content to be read on a small screen. They're also typically a tech-phobic audience; they're not knowledge workers, they're frontline workers. And sometimes, if it's a global franchise, you're serving this content up to an English-as-a-second-language audience. So what we're finding is that there's actually a huge need for QR codes again: if you're working in a warehouse and there's a new process, you put up a QR code, workers scan it with their phones, and the resources are served up right there. Again, you need a high volume of video so they can stay up to date on what's happening, but it needs to be vertical. So that's a great place for video.

Right. Wow, that's really something I hadn't thought of, Kevin. Through this format, you've really widened the reach and made sure it's available in all kinds of learning environments, a busy shop floor, manufacturing units, and so on. That's really interesting. Another thing that surprised me was your mentioning that AI avatar-based videos are actually effective for data reports. Typically, when one thinks of data reports, one imagines a lot of detailed graphs and diagrams and a lot of analytical content. But what you mentioned about giving a very high-level overview of a report through video, I think that's really fantastic: a very engaging way to get people to know what they need to know in the quickest time possible.

Yes, and that to me is my mission brought to life: how do we get people the information they need in order to do their job now, or to tackle the obstacle ahead of them, so they can just get to work quickly? And again, there still needs to be a lot of skill in designing a high-level data report: what are the features of a high-level data report that somebody on a marketing team needs versus a business strategy team? But it is about thinking through how you're serving that information up. Nowadays we're very used to engaging with video, so this is what allows you to communicate that information in a useful way.

And also, the use of these videos for marketing upcoming learning experiences, I think that's terrific, because sneak previews or teasers build up anticipation and momentum. Yeah, that's a very good area. But Kevin, I was just wondering: are these AI avatar-based videos popular with a certain demographic? Is there any particular age group, Gen Z for instance, that takes to them more readily than another age group would, or is that just a myth?

32:21 – 35:26
So, from my experience working with clients, speaking with people, and showing this off at conferences and in the different talks I do, and this is the only time I'll put on my Synthesia hat: as of a few weeks ago, we have the latest model of expressive avatars, where you can see them go through three different emotions in one minute. You can see things like eyebrows raised, voices pausing, all the little details that make a human performance. That's where we've seen it really grow. Before, when I presented this, elder millennials and older would say: avatars, I don't like this, I think it's weird, I think it's going to replace me. There's a lot of scepticism there. And I'm not going to say that has changed with Gen Z. I think Gen Z is equally sceptical, but they've been using this tech much longer than the rest of us. AI voices have been around on TikTok for years, and the AI voices on TikTok aren't even that realistic; they're used to communicate a certain type of information sarcastically or ironically. So I think everybody is very sceptical of AI avatars, though people are impressed with the more recent advancements. But I think Gen Z is much better equipped to figure out what we can actually do differently with this tech, how we can use it to communicate different types of messages, than previous audiences, even other digital natives. I'm a digital native, and I'm looking for new ways to immerse people who are my age and older in this idea of an avatar-led experience. I really think it comes down to making sure we use these tools in the right ways, as opposed to just saying, hey, I had a human here before, and now I have an AI avatar. For the audience listening today, here's what AI avatars are not for: if you're giving a presentation in person and you have a slide deck,

the worst thing in the world you can do is put an AI avatar on that slide deck and have it go through your slides. It's not a rewarding or engaging experience for anyone, so if that's your takeaway from today, that's what not to use avatars for.

Thank you for that word of caution; I think listeners will hang on to it. Alright, so Kevin, this is more out of curiosity: how do you stay abreast of industry trends and best practices in L&D, and how do you actively integrate them into your work? I mean, there's so much happening there, it's an explosion.

35:27 – 39:58
So this might be controversial, but I do not get my trends from other learning and development folks. The one person I do follow for thinking about how to use these tools correctly and in smart ways is a gentleman by the name of Ross Stevenson, who has the 'Steal These Thoughts' newsletter, where he offers very practical, engaging tips on prompts you can use and how to fold this into your work. But I'm really finding that learning and development is a bit behind the curve when it comes to using generative AI technology. One reason is that learning and development is kind of stuck: stuck in what we're expected to deliver to the business, and in the expectations the business has for learning and development, which is 'I need training courses and resources.' And it's really hard to break out of that cycle or use this tech in new and different ways if that's the expectation for us.

So I typically like to look at what they are doing in journalism, because I think companies like the New York Times have always been a little ahead of the curve in thinking about how to deliver journalism differently. We saw that with virtual reality back in 2017, although it didn't take off; they were really thinking about VR storytelling and how to bring people to news stories in different ways. Or back in 2017 as well, that's when podcasts were just taking off as a medium, and now they're everywhere. But back then, they had to rethink how to use this technology to provide journalism in a more engaging or different way. What's different about this format? So I often look at journalism to see what they are doing with AI, what formats they're using, and what types of stories they're telling. I also like to stay abreast of trends by looking at what those who have a ton of money are doing. Meaning I like to follow what EY is doing in the AI space and what PwC is doing in the AI space, because as an individual user of AI technology, I only really have access to the products that are available to everyone else. I got ChatGPT at the exact same time as everyone else. I was experimenting with AI avatars maybe a bit earlier because I work at Synthesia, but I'm also learning and growing and figuring out how to use AI avatars along with everybody else. But to really find novel use cases, do prototypes and experiments, and put them to use takes a lot of money, both to power the technology itself and to have the resources to serve up the products of that research in a meaningful way for the rest of us.
So I'm always following what McKinsey, EY, and less recently PwC are doing on the question of: how can we take a lot of information and scrape it to create something that again shortens that gap between ‘we have all this information, how do we give people access to it, and how can we use large language models to help them discuss and dialogue with it?’ Another person I follow is Allie K. Miller on LinkedIn, to see what companies are doing and what to expect down the line. But this is all to say I really follow those who have the resources, because everywhere you look, everybody seems like an expert or an AI strategist, but I personally did not learn machine learning last year or become a data scientist. I know that everybody is just learning and using this technology; we're all doing it at the same pace. So who do I trust? I trust people with money and resources to figure out how to productize this, because I think the technology is getting farther and farther from what I can actually wrap my head around, understand, and put to work. That was a very long-winded answer.

39.59 – 42.22
But it was very fascinating. I think there were a lot of actionable insights and next steps for our listeners as well, because every L&D professional wanting to grow their career would want to know how quickly and efficiently they can make this happen.

And that brings me to one more point. Typically I like to wrap up conversations around generative AI, especially if folks are exploring ChatGPT, by having them participate in what is called the great unbundling. We're going through a great unbundling event. We have all these roles and jobs we've had previously, where if I'm an instructional designer, I must be designing instruction or redoing systems, and if I work in support, I'm helping customers find solutions to problems. But what's happening now is that a lot of future-of-work, L&D, and higher-level CLO folks are looking at the workforce and saying, ‘Actually, these roles don't make sense any more.’ So what jobs do we need humans to do? What tasks do they need to complete to get those jobs done? And where can we plug large language models, ChatGPT, or different AI-enabled products into somebody's flow of work? So what I suggest people do now is make a list of projects you've worked on in the past 3 months. Go back through those projects and look at the actual tasks you did. What did you start with, and what was the output or general outcome? Then start to mesh those tasks with your understanding of ChatGPT and what it can and can't do. While it is raising the foundation of what's expected of our work in areas like copywriting and creative thinking, with business strategy and problem solving, not yet so much. So figure out which tasks can be handled with the tools you have, so that you're prepared for this big unbundling of roles and projects that we're going to see.

42.23 – 44.29
That's really very practical advice, Kevin. Thank you so much for sharing that. And since you mentioned ChatGPT and how we can utilize it better, as we near the end of this podcast, I want to share with our listeners that we're very excited to have Kevin with us once again at our L&D event, Learn Flux, a virtual event for the L&D community of practice. This year, the theme is AI-enabled learning, and Kevin will be joining us as one of our speakers. Kevin, is there anything you would like to share about your upcoming webinar at this 10th gala edition?

Yes. So the title to look out for, which I hope you'll find engaging and interesting, is ‘How AI (and Avatars) Will Change the Way We Learn and Work’. And if the printed-out directions to Google Maps story captured your attention, it's the same task, but enabled in a different way that makes it safer, easier, or quicker for us to put information to work. That's largely what I'll be presenting: the spectrum of AI video and avatars you can expect, which is less a learning problem to solve and more a communications problem to solve. I'll go through the different use cases that say, hey, this is where we are today, and this is how you can make use of AI video. I'll also give you a preview of some of the projects and products we are working on at Synthesia with our strategic customers, to give you a sense of what tools you might see coming down the way as we continue to build out these prototypes with the folks who have all the resources, which, to take it back there, is where the change will come from. I really hope to see you there. There will be some actionable stuff for you if you're dipping your toes into AI video and avatars, and a little bit of future thought as well.

44.30 – 47.26
Yeah, thank you so much, Kevin. We're really looking forward to that event. And not to brag, but at CommLab India, we have been using ChatGPT extensively to see how best we can put it to use, because we're very agile as a company: hey, if something can help us get our job done faster, just go for it. That's one of our driving mottos. And for our dear listeners, it'll be great if you can join us for our workshop on using ChatGPT, where we take you through how you can use it to make your life as an instructional designer easier.

And thank you, Kevin, once again for this very interesting discussion and all the insights you shared; we can't wait to have you back as our speaker at Learn Flux. Thank you once again, and thank you, listeners. Do continue to follow the eLearning Champion podcast, and also Kevin's podcast, The Video Learning Lab, where I'm sure he's going to share insights based on his research into those companies that have the resources to explore and share their learnings.

For sure.

Thank you, Kevin once again.

Here are some gleanings from the interview.

Can you share a little about the evolution of video-based learning?

We have many forms and formats of video-based learning now, but it started out as recordings of classroom lectures for those who couldn't be in the classroom.

Around 2010, video-based learning began to be embedded in LMSs and LXPs. These were not recorded lectures or high-production training videos, but videos made by everyday professionals with 4K cameras and webcams. From 2018, cameras that fit in our pockets or came embedded in our computers led to more user-generated content, both internal and external. So, video-based learning evolved from trying to replicate the classroom to different types of instruction. And now, you don't even have to be on camera or need a microphone to create a video.

How has avatar-based learning helped replicate the classroom virtually?

Until 2010, a lot of eLearning was just going through slides, often with a recorded voice, and without an instructor. Creating good video-based eLearning was not easy due to the cost, lack of editing skills, and the difficulty involved.

With AI avatars, personalized videos have become easier to create. For instance, instead of going through a series of emails or documents for onboarding training, new hires can now be welcomed by name with a video in their own language. That’s where avatar-based learning is changing the game – by providing a more personalized space.

In the past, avatars were more like cartoon characters; even those in Articulate Storyline and Rise were static. The new generation of AI avatars, though still only a chest-up representation, is much more expressive and engaging.

What role should technology play in modern L&D strategies?

I’ll answer this in 2 parts. The first is how we use the technology. And the second is what I call the ‘learning dream’.

How we use technology: It's important to prioritize the learning and understand how different formats of technology can be applied to help learners towards a particular outcome. A new technology is not going to change everything overnight, it simply allows us to do the same task in different ways. That’s our role with new technology as L&D professionals. It's not about building things that we don't have the skills for, but figuring out how the new tech can help employees perform their tasks better. So, when the technology changes, so should the way we deliver information.

The learning dream: The ideal situation for learning is one-on-one tutoring. Bloom’s 2 Sigma problem found that students tutored one-on-one outperformed their peers in group instruction by almost 2 standard deviations. So, we need to consider:

  • How to build a one-on-one tutor with the available information and experiences
  • The different roles for which the AI avatar can be used (coach/ mentor/ employee performing worse than the one going through the training)

AI also allows you to scale yourself across the different roles of an L&D professional – technologist, comms person, or authority on the subject. Using AI avatars, you can clone yourself for asynchronous video messaging, for example as someone who can speak 160 languages when connecting with a global audience.

Where is AI avatar-based video learning particularly impactful? And where is it not?

Just like with any other format, ask yourself if this format is a good fit for the information. Right now, avatars are very limited. Though they are expressive, they cannot gesture effectively. So, you need to consider if the content is a good fit for an AI video.

If communicating with fewer than 5 people, instead of an AI video it's better to talk to them on the phone or on a Google Meet or Teams call. If the message is going out to more than 5 people, you need to consider:

  • Is it going to be high volume?
  • Will you need to send out lots of videos over time?

For example, if you need to send out weekly reports for data storage, a 2-3-minute video will capture that information better than sending an entire data dashboard to your team.

If you need the message in multiple languages, an AI video is a good fit to send out the same content in up to 160 different languages for global teams.
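The rules of thumb above (audience size, volume over time, and language needs) can be sketched as a tiny decision helper. This is purely illustrative; the function name and thresholds are my own encoding of the heuristics from the interview, not any real Synthesia tool:

```python
# Illustrative sketch of the "is an AI avatar video a good fit?" heuristics.
# Names and thresholds are hypothetical, chosen to mirror the advice above.

def suggest_format(audience_size: int, languages_needed: int = 1,
                   recurring: bool = False) -> str:
    """Suggest a delivery format for a message.

    audience_size: number of recipients
    languages_needed: how many languages the message must reach
    recurring: whether similar messages go out repeatedly (high volume)
    """
    if audience_size < 5:
        # Small audience: a phone, Google Meet, or Teams call beats a video.
        return "live call"
    if recurring or languages_needed > 1:
        # High volume over time or multilingual reach is where avatars scale.
        return "ai video"
    # Otherwise, judge by whether the content itself suits video.
    return "either"

print(suggest_format(3))                          # small team: live call
print(suggest_format(50, recurring=True))         # weekly report: ai video
print(suggest_format(200, languages_needed=12))   # global comms: ai video
```

The point of the sketch is simply that format choice is a communications decision driven by audience and volume, not a default to video.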

Also, informational content is a good fit for avatar-based videos – from compliance training to airline safety videos to HR and internal marketing announcements. It can also be used for sales enablement training, to onboard folks quickly and get them to understand different approaches to engaging with customers.

AI videos can be used with good effect for product updates, for data reports for business leaders who need to make informed decisions, and for marketing upcoming learning experiences.

AI videos are also a great fit for tech-phobic employees who don’t work with computers, such as warehouse staff or frontline workers. Warehouse workers can scan the QR code on a product with their phones and get the resources served up right there.

What an AI avatar is not good for is when giving an in-person presentation with a slide deck. The worst thing you can do is to put an AI avatar on that slide deck and have it go through your slides. That experience will be neither rewarding nor engaging for anyone.

How do you stay abreast of industry trends and best practices in L&D?

L&D is a bit behind when it comes to generative AI technology, because of what they’re expected to deliver. It's hard to break out of the ‘I need training courses and resources’ cycle or use AI tech in new and different ways.

Companies like the New York Times have always been a little ahead, thinking, “How can we deliver journalism differently?” They were considering VR storytelling in 2017, about using technology to provide news in a more engaging way. So, it’s good to look at what they’re doing with AI, what formats they're using, and what types of stories they're telling.

You can also stay abreast of trends by looking at what people with money are doing in the AI space, because it takes money and resources to find novel use cases, do prototypes and experiments, and serve those research products in a meaningful way for the rest.

Any advice for L&D professionals wanting to grow their career?

We're going through a great unbundling event where all our previous roles and jobs suddenly don't make sense. We need to ask:

  • What jobs do we need humans to do?
  • What tasks do they need to complete to get those jobs done?
  • Where can we plug large language models, ChatGPT, or different AI enabled products into somebody's flow of work?

Go through the projects you've worked on in the past 3 months, and look at:

  • What were the actual tasks that you did?
  • What did you start with?
  • What was the output or general outcome?

Look at those tasks and mesh them with your understanding of ChatGPT and what it can and can't do. While ChatGPT can raise the overall foundation of, for example, copywriting and creative thinking, it’s not yet so useful for business strategy and problem solving. So figure out what tasks can be handled with the tools you have, and prepare yourself for the unbundling of roles and projects that is going to happen.
