The Promise And Barriers Of A.I. For Learning
Featuring Stella Lee, Ph.D. of Paradox Learning
Larry Durham: Welcome. At the end of each HIVE podcast, we ask our listeners to send in topics they would like us to cover, and one topic we received a number of requests for is artificial intelligence, more specifically its use in the field of learning. On this episode of The Hive, we will tackle the topic of AI. In years past, the mention of AI may have conjured up visions of robots and supercomputers, but with the advancement of technology, AI is now often used to refer to automation, machine learning, and similar topics. So, with the limited time we have in this episode, we wanted to narrow AI down specifically to AI for learning and, to be even more focused, really discuss the promise and barriers of AI related to learning.
We’re honored to have Dr. Stella Lee, a recognized thought leader in the AI for learning space. By way of background, Stella earned her Ph.D. in computer science with a focus on human-computer interaction, adaptive learning, self-regulated learning, and the like. She was also a postdoctoral research fellow, where her research focused on advanced techniques, technologies, concepts, and theories related to learning analytics. She’s also a technology columnist who has authored numerous articles on the efficacy of future-focused learning. So with that background, we thought it would be great to have her as a guest on this topic. Stella, welcome to The Hive podcast.
Stella Lee: Thanks, Larry. Good to be here.
Getting Involved With Artificial Intelligence
Larry Durham: Well, let’s get started. I know I shared some of your credentials, but why don’t you share with our listeners a little bit more about your background and what got you interested in learning and, more specifically, AI?
Stella Lee: I started as a painter, in the visual arts. That was my first degree, and then I worked my way through different disciplines. I have a master’s in communications, a postgraduate credential in teaching and learning, and eventually I received a Ph.D. in computer science. So, I’ve always been very interested in the intersection of art and design, technology, and education. Along with that, I’ve had 20 years of experience in eLearning and blended learning, internationally. Because of my interest across disciplines, I was slowly discovering various applications of AI in learning, particularly over the past few years. As we live our lives more and more in the digital domain, including education, we’re creating more data whenever we consume learning content on our phones, watch a video, or comment on discussion forums. When I’m working with my clients in various organizations, they often say, we have a lot of data; what is it for? How are we going to organize it? How are we going to understand, analyze, and use it, perhaps for predicting learning behaviors and giving feedback? I’m curious about how we can make use of this data and how we can gain insight into the learning process, for learners, instructors, and course designers, to inform learning design decisions.
Larry Durham: Sure, sure. Well, like so many of our guests, you didn’t start in learning, you started with a different background, and interestingly enough, all of those disciplines work together to bring you to this point. That’s a great background. So, as I stated before, AI is used these days to reference a variety of things. Stella, maybe you could help our listeners understand what constitutes artificial intelligence.
What Constitutes Artificial Intelligence?
Stella Lee: Well, I’m glad you asked. That’s a great question. I’m also really glad you pointed out at the beginning of the podcast that when it comes to AI, people think about robots; that notion still holds true. I think even what exactly constitutes AI is a topic for debate in itself. I was recently looking at an article on definitions of just intelligence, and there are over 70 of them, never mind artificial intelligence. So, I think it’s worth spending a bit of time on that. I like to keep definitions simple, so here’s how I define it: AI is essentially technologies with the ability to perform tasks that would otherwise require human intelligence and capability.
Take human capabilities such as our ability to recognize people’s faces, whether we know them or not; that’s visual or facial recognition. Speech recognition is another example: you can pick up the phone and know this is my mom, or this is a friend, or this is somebody I’ve never heard. Language translation: if you’re bilingual or multilingual, you have the ability to look at a street sign and translate it mentally. The key emphasis I want to point out is that the machine or the technology appears to mimic what humans do. It’s not going through the same thought process that we do, but the end outcome is that it looks intelligent. I think that’s an important point about AI.
What we can all agree on is that if you’re in computer science, none of this is new; it’s been around for over 60 years, starting with Alan Turing and his paper “Computing Machinery and Intelligence,” where he conceptualized the idea of an intelligent machine. Within AI, there are different approaches and techniques. You mentioned machine learning. The misconception is that people think AI and machine learning are interchangeable, and I want to clarify that they’re actually two different things. They’re related: machine learning is one way of achieving AI. When people talk about AI, they often think of it as just a machine learning algorithm, but robotics is AI, and so is natural language processing, and these are not machine learning approaches.
What Does AI Within Learning Look Like?
Larry Durham: With most of our podcast listeners being learning practitioners and learning professionals, maybe let’s get more specific and talk about what AI within learning looks like and how it is used. I think classifying it and talking through those categories may really bring it home for most of our listeners.
Stella Lee: It’s been a hot topic over the past couple of years, right? Pretty much anywhere you look within the learning and development field, you increasingly see more and more platforms and technologies that are AI-enabled or built with AI. So, there’s a little bit of confusion, and also excitement, because we’re seeing so many different applications out there. In general, I classify them into three main areas where AI is being used in the corporate learning field.
The first one is knowledge management and knowledge sharing. This is usually in the form of some sort of conversation, along the lines of Socratic tutoring principles, such as chatbots. As you know, chatbots are nothing new; they’ve been around for ages, but in the old days, chatbots had a predefined set of answers. You had to do hard coding to pattern-match the answers against what people typed in. Nowadays, things are getting a lot more sophisticated, and chatbots are being integrated as part of a larger corporate learning strategy, especially in the area of knowledge management, where they can find answers from all over the organization. Chatbots are being used in onboarding, or by teams working together looking for answers in an FAQ. So, there’s a huge application there as a quick reference guide.
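The old-style “predefined set of answers” chatbot Stella describes can be sketched in a few lines. This is a minimal illustration, not any particular product; the FAQ entries and patterns are hypothetical:

```python
import re

# Hypothetical FAQ rules: each regular expression maps to a canned answer,
# mimicking the hard-coded pattern-matching approach of early chatbots.
FAQ_RULES = [
    (re.compile(r"\b(vacation|time off|pto)\b", re.IGNORECASE),
     "You can request time off through the HR portal."),
    (re.compile(r"\b(onboard|first day|orientation)\b", re.IGNORECASE),
     "Orientation starts at 9 a.m. in the main conference room."),
]

def answer(question: str) -> str:
    """Return the first canned answer whose pattern matches, else a fallback."""
    for pattern, reply in FAQ_RULES:
        if pattern.search(question):
            return reply
    return "Sorry, I don't know that one; let me connect you with a person."

print(answer("How do I request vacation days?"))
```

Modern chatbots replace the regex table with language models and retrieval over an organization’s knowledge base, but the fallback-to-a-human path Stella mentions is still a common design.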
Larry Durham: A good example of that: I just wrote an article on the power of voice. KFC restaurants here in the United States are using voice and AI so that line cooks can ask, “At what temperature do I fry this type of chicken?” and get that information. I think that’s a great illustration of real-time performance support using voice.
Stella Lee: Yeah, that’s a great example. We naturally use that in our daily lives anyway with Google Home, and even on our phones; I ask my phone to set a timer for me. So, it’s a natural extension when it comes to performance support for learning. I also see it used a lot for coaching, especially on sensitive topics, when people might not feel comfortable, or might feel judged, talking to a human coach. My emphasis is not that chatbots are used to replace humans. They’re used as a first level of knowledge that can be pushed out, so you can build a more basic understanding of a topic; then, when things get a little bit more complicated, or when you need to unpack the topic in a more specific context, that’s when you introduce the human coach, right?
So, it takes away that first level of work for a human in the coaching conversation, and it perhaps helps people feel at ease when they talk about a topic. There’s an e-learning module that integrates chatbots into workplace harassment training. The idea is that people may feel more comfortable asking chatbots questions rather than feeling embarrassed. So, I think there’s a huge application there, and it’s developing now.
The second application is personalized learning or exploratory learning. An example of that is a recommender system; this is where you hear a lot of talk about the “Netflix of learning” or the “Spotify of learning.” I don’t think that framing does it justice, but it gives you an idea of what it is, right? It’s a playlist that recommends content based on your explicitly stated areas of interest, or based on an algorithm fed by your past performance or your test scores.
There are different ways of adapting and personalizing your learning path, sometimes by sequencing your learning components at a more micro level, and sometimes at a macro level. At a curriculum level, for example, a system could run some sort of pre-assessment and say, these are the competencies we think you need to develop. It could be based on test scores, on your interaction with previous content, or on your demographic information; there are a number of other attributes you can take into consideration. Some systems also do behavior modeling and content analysis for prediction. Recommender systems are also used for content curation, when there’s a large body of content that you want to make sense of for your learners. Content curation through recommender systems is something I’ve seen a lot of lately in the learning space.
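A content-based recommender of the kind Stella describes can be sketched very simply. The course catalogue, topic tags, and weights below are made up for illustration; a real system would derive the learner profile from assessments and past interactions:

```python
import math

# Hypothetical course catalogue, each course tagged with weighted topics.
COURSES = {
    "Intro to Python":    {"programming": 1.0, "data": 0.5},
    "Data Visualization": {"data": 1.0, "design": 0.7},
    "Negotiation Skills": {"communication": 1.0, "leadership": 0.6},
}

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse tag-weight vectors."""
    dot = sum(a[k] * b.get(k, 0.0) for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(learner_profile: dict, k: int = 2) -> list:
    """Rank courses by similarity to the learner's interest profile."""
    scored = [(cosine(learner_profile, tags), name) for name, tags in COURSES.items()]
    return [name for score, name in sorted(scored, reverse=True)[:k] if score > 0]

# A learner who has shown strong interest in data topics:
print(recommend({"data": 1.0, "programming": 0.3}))
```

This is the “if you liked this, you may like that” layer Larry calls the Amazon effect; the more sophisticated systems Stella mentions add behavior modeling and prediction on top of it.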
Larry Durham: We’re seeing that same thing. We’re starting to see a lot more around curation, learning experience platforms, and recommendations. I will say we see a lot of the early stages, what I call the Amazon effect: if you liked this course, you may also like this course. So it’s a recommender, but at a fairly high level. I think you’re absolutely right that with curation and other data and information, the AI will get much better at making personalized learning paths and recommendations. I think you’re spot on with that.
Stella Lee: And of course, that comes with a lot of challenges and a lot of things we need to consider. We can talk about the challenges, the threats, and what could be done better later on; I have lots of thoughts on that. I just want to get to the third application, which is closely linked to the other two.
It’s learning analytics: performance indicators and prediction. Based on your current and past interactions with your learning content, the system identifies how likely you are to do well over the rest of your learning trajectory. It works both ways, right? A lot of these tools have some sort of visual dashboard to give you an overview of each individual learner based on attributes: the number of times you log into your LMS, whether you click on certain things, the duration of your logins, your engagement in general. Do you post anything? Did you ask a question? Those are some of the indicators they take into account. They don’t measure the quality of your interaction; they mostly take in quantitative data. I’ve seen some LMS systems building in this kind of learning analytics and making predictions based on it. The risk is that the system doesn’t know who your people are and doesn’t take other contextual information into account. Perhaps the learner is going through some tough times, so their performance is not up to par, but it’s a temporary situation that the systems are not good at accounting for. They will just make a general analysis and prediction: you’re likely not to finish this course, or you’re likely to do well. The nice thing, though, is that it gives you some idea of where you’re at as a learner, and, as a course designer or instructor, it gives you some ideas: is there an intervention I need to make? Do I need to provide these people more support? Is the learning content not sufficient, not interesting, not engaging? On the plus side, it does give you insight into how you can intervene in learning, but of course, you could very easily over-generalize and put people into categories you shouldn’t have.
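The kind of completion prediction Stella describes, built purely on quantitative engagement indicators, can be sketched as a logistic score. The feature names and weights here are hypothetical and hand-set; a real system would fit them to historical completion data:

```python
import math

# Hypothetical hand-set weights for quantitative engagement indicators
# (LMS logins, minutes of content viewed, forum posts). A real system
# would learn these from historical data, e.g. via logistic regression.
WEIGHTS = {"logins": 0.15, "minutes": 0.01, "posts": 0.4}
BIAS = -2.0

def completion_probability(indicators: dict) -> float:
    """Logistic score: estimated probability the learner finishes the course."""
    z = BIAS + sum(WEIGHTS[k] * indicators.get(k, 0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

engaged = {"logins": 12, "minutes": 180, "posts": 3}  # frequent, active learner
at_risk = {"logins": 2, "minutes": 15, "posts": 0}    # rarely logs in

print(round(completion_probability(engaged), 2))
print(round(completion_probability(at_risk), 2))
```

Note what the model cannot see, which is exactly Stella’s caution: a learner going through a temporarily rough patch looks identical, in these features, to one who has disengaged for good.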
Promising Applications For The Use Of AI
Larry Durham: I think those are three really good areas: performance support, personalized learning, and learning analytics. Three really good categories at a broad level. As we think about this, I know when we were putting this podcast episode together and I was thinking about the interview questions, I had talked about the promises and the obstacles for AI and learning. What do you see as the promising applications for the use of AI? I know you’ve mentioned a couple, but what do you see as the promising applications, and maybe some of the obstacles? Just briefly.
Stella Lee: There are lots of points here. The most exciting thing for me is really just getting learning insights we didn’t use to get as instructors or facilitators. In a face-to-face classroom, you can see how well your learners are doing from the verbal feedback you get; you can read their body language, look at your room of people, and adapt your learning and your content on the fly. What the technology is giving us is the opportunity to do that online now, with more nuanced observations, and then to do something about those observations. You can take it a step further: not just understand and gain those insights, but also use them to enhance the learning.
There are sort of two sides to this. On one hand, there are a lot of learner-facing tools we can use to enhance and improve the learning experience. On the other hand, there are a lot of back-end, instructor- or course-designer-facing tools that help us understand and improve how we create learning as learning designers. The other exciting thing is that it takes away the more mundane or tedious work we don’t like doing so much. I see this a lot in the academic world for assessment, for example: there are a lot of tools now to help you assess test results or essays. It’s coming to corporate learning now, taking on some tasks for you so you can focus on the more creative aspects of creating learning.
So, I think that’s really exciting, and again, it’s not 100% automated. The key is still human-machine collaboration; human input is still needed, especially when you’re reviewing qualitative information. The other exciting thing for course designers and instructors is sentiment analysis: information that helps you understand your learners. How do they feel about the learning? Are they confused by it? Do they like it? Are they enjoying it? You could consider your users’ emotional state when you’re adapting and making a recommendation. I think that’s also very powerful. From a learning perspective, the excitement and the promise are essentially about putting learners in control and giving them more support in their learning.
The idea behind recommender systems and adaptive learning is really about giving you options, giving you choices. The better recommender systems have a more open algorithm, whereby as a learner you can adjust or reject things, right? Getting back to the Amazon example, you can update the platform’s assumptions about the type of books you like to read. But sometimes you buy a book as a gift for someone else, and it forever recommends that genre to you, a genre you don’t like; that’s a wrong assumption. I once bought somebody a book about Italian cooking, and now I get all the Italian cookbooks in my recommender system, and I was like, no, no, no, I don’t need that.
Larry Durham: Maybe they’re trying to change your behavior. You never know.
Stella Lee: Exactly, so that’s a promise: the notion of nudging, right? Learning is much more complex; it’s not just about giving you things that you like. As you know, a good coach is not just going to give you things; a good coach is going to push you. So think about how, from a nudging perspective, AI can nudge you and say, well, you’re doing really well, but I’m going to give you something a little bit harder now. Or: you’ve done really well in this one area, and I know you didn’t mention this other area you wanted to work on, but we’re going to give it to you now. I think that’s exciting.
What Are Some Of The Obstacles Around AI and Learning?
Larry Durham: Obviously, we could go on for days about the promises of AI and learning. Maybe just highlight a couple of the obstacles. I know there are a number of those as well, but for our listeners, what might be some of the obstacles to consider?
Stella Lee: Oh boy, that list is even longer. I’ll highlight a few that we struggle with and have been struggling with for some time. In terms of recommender systems, adaptive learning, or personalized learning, I think the challenge is that it’s really hard to evaluate as an instructor or designer, because you don’t know: if people had gone down this other path, or used this content out of sequence, would that have worked out better? So, it’s challenging from an evaluation perspective. As a learner, it’s also very difficult, because you don’t know what you don’t know; you don’t know what you missed out on. It’s hard to ask for things when you don’t know they exist. It’s like Netflix having a database of a million movies. I’m exaggerating, but there are a lot of movies, and if it recommends a handful to you, it saves you the time of browsing through the others.
But at the same time, you wonder: if this is not quite spot on, how do I know whether there’s something I’d want to watch that doesn’t fit the recommender system’s profile of me? Then there’s the problem we call the algorithmic black box: some of the algorithms, well, most of them, are not transparent. And to be fair, for some of these methods it’s very difficult to be transparent. Look at deep learning, or what’s known as neural networks: there are many layers involved, and the decision process is not clear. It’s really hard to go in and dissect it. Even the people developing the algorithm don’t know, so they can’t explain it to you. I think that’s very challenging, because, as you know, everywhere we look now there’s a lot of talk about this, with coverage in the media about biases in AI, for example in facial recognition systems.
They can be really good for certain people. The error rate for recognizing a white male’s face can be around one percent, but for darker-skinned women, the error rate can be as high as 35%. That’s a huge discrepancy in how faces are recognized, and it happens because the systems just aren’t trained on enough data for darker-skinned women, for example. So the training data is a problem. Sometimes we simply don’t have enough of it. Sometimes, depending on who the developers are, they might not be aware of their own biases; they put in the training data they have access to, or what they think is a fair representation, but it is not.
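One way to surface the kind of discrepancy Stella describes is a simple per-group error-rate audit over a system’s predictions. The records below are fabricated purely to illustrate the computation, not real benchmark results:

```python
def error_rates_by_group(records):
    """records: list of (group, predicted, actual) tuples.
    Returns the fraction of wrong predictions per group."""
    totals, errors = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if predicted != actual:
            errors[group] = errors.get(group, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}

# Made-up evaluation records: the model is right 3 of 4 times for one
# group but only 1 of 4 times for the other.
results = [
    ("group_a", "match", "match"), ("group_a", "match", "match"),
    ("group_a", "match", "match"), ("group_a", "no_match", "match"),
    ("group_b", "no_match", "match"), ("group_b", "no_match", "match"),
    ("group_b", "match", "match"), ("group_b", "no_match", "match"),
]
print(error_rates_by_group(results))
```

Disaggregating accuracy this way, rather than reporting one overall number, is what reveals that a system with a respectable average can still fail badly for a specific subgroup.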
Barriers To Effectively Utilizing AI for Learning Within an Organization
Larry Durham: So, obviously there are a number of obstacles with AI. And obviously, you can’t just flip a switch and begin using AI within learning in organizations; there’s a lot of effort involved. What do you see the barriers being to effectively utilizing AI for learning within an organization? What might someone who’s just now getting into this expect to come upon when implementing new AI tools?
Stella Lee: I think, in general, we need to educate ourselves and our field a little better about AI. There’s a lot of confusion about the varieties of AI. The main confusion is between the two broad types: general and narrow. General AI is when a machine can do everything, everything a human can do; essentially, it’s no different from a human, and of course, we’re light years away from achieving that. Narrow AI, which is task-specific, is what we’re focusing on. By and large, people are confused about that. I’d always be wary of the hype and of companies’ sales strategies, inventing fancy words for what are really just old products. We need to educate ourselves and ask these companies probing questions: Is this really AI? What does it do?
Whose intellectual property is this? That sort of thing. So that’s a barrier. Cost is a barrier. The other barrier is that in the learning world, we just don’t have that much data, and AI is only interesting when you have a lot of data. As learning professionals, we’ve not been very good at collecting data, and there’s not enough diversity in the data. So there is that limitation.
Larry Durham: I think that’s an excellent point. Learning and development within organizations has historically been challenged when it comes to collecting and utilizing data. And I think if someone is seriously considering AI and how to use it with learning, and they’re not committed to collecting and sourcing the data, both historical data and the future data that will come along, the value of the AI is going to be significantly reduced. You know, we’ve covered so many things; I’m convinced we’re going to have to have you come back for a second episode, no doubt, because there’s a lot to cover here. Maybe one last question before we let you go: what advice would you share with our listeners and other learning leaders who might be considering utilizing AI for learning in their organization?
What Advice Would You Share With Our Listeners That Might Be Considering Utilizing AI for Learning in Their Organization?
Stella Lee: I would say ask questions. Think about whether this is really AI, and whether AI is genuinely adding value to your learning problem. I think we get caught up in the hype, but you know, it’s expensive. It also involves a lot of human input, along with configuring the system. So, think about the opportunity cost too. It’s not just the setup and purchase cost; there are operating costs as well. Think about whether this is genuinely bringing you enough value to be worth spending your time and resources on, because sometimes having a human design certain learning may add more value. You have to really understand that, and really understand whether this is even solving a learning problem.
Larry Durham: I think that’s a great point. We’ve talked on prior episodes about when e-learning came onto the scene in earnest, probably 20-25 years ago, and the goal was to convert much of the classroom into eLearning because it would save many, many dollars in budgets and things like that. The reality is, I think with AI we go through that same cycle. AI has great promise, and a number of barriers, as you’ve mentioned, but it is not the cure-all for all these problems. And it does take a significant amount of resources, whether that be money, time, or effort, to make it work. So I think you hit the nail on the head with that.
Stella Lee: Can I add another point, though? This is a big, big point, and we don’t have time to go into all the details, but do think about how you are going to deal with the privacy issues around data collection, prediction, and these applications. How are you going to communicate with your organization about privacy, potential biases, and ethical considerations? Have a policy in place and communicate it openly with your people. I think that’s critical before you implement anything.
Larry Durham: That’s a great point. In fact, that may very well be the title of the next episode we do with you: bias and privacy related to AI and its implementation. I think that’s a great prompt for our next episode.
How to Reach Out
Larry Durham: Well Stella, thank you very much for sharing your insights with us today. I know I enjoyed it. I’m sure our listeners have benefited from your expertise as well. If our guests would like to connect with you, what’s the best way for them to do that?
Stella Lee: I am most active on LinkedIn, so feel free to reach out to me and have a conversation. I’m also on Twitter as @Stellal, and of course, there’s always my own website, paradoxlearning.com.
Larry Durham: Great. If you as a listener have any topics you’d like us to cover, similar to AI today, send me an email. That will do it for this episode of The Hive. Thanks for listening, and we’ll catch you next time.