The Learner Focused Learning Blog
I tried AI in my classroom and it bombed; here's what I'll do differently next time.
November 14, 2024
I'm always banging on about "failing forward" to my students; last week I had to remember that lesson myself.
It started, as always, with what seemed like a great idea. Codecademy, a "one-stop shop" for learning to program online, have some fantastically sequenced learning materials for programming. What's more, they're self-marking, so, double win, I get to give individualised feedback without having to actually mark anything. I'd been wanting to try out one of these online programming platforms in the classroom for years and was excited to finally get stuck in once I'd found one that met my budget (aka free, thanks to Clever). My Year 9s were going to step into the future of learning to program and we'd never look back; needless to say, I was excited.
You don't need to know or teach programming to work out the appeal here. As you can see in the screenshot below, you've got the knowledge, the instructions, the place to write your code, and the place to see if it's working all in one view. Student reads instructions, completes code, checks it works, moves on. Add some scaffolding, modelling, AfL, and all that other teachery goodness and the job's a good'un.
That button at the top, however, that cheeky little button that says "Ask the AI Learning Assistant", is where it all ultimately came undone.
The Successes
To be clear on a few things:
1) I do believe that if we can develop the right learning skills with our students, particularly those around parsing, critiquing, and checking new information, then Large Language Models like ChatGPT can function as what Vygotsky would have called a "more knowledgeable other". This is particularly true when using LLMs not to generate answers, but to help break down and understand trusted, but difficult to access, sources. I'm talking about usage techniques like asking ChatGPT to re-word or break down a difficult-to-parse text by prompting it to "explain like I'm five". Doing so mitigates the risk of hallucinations: ChatGPT isn't coming up with the answer, it's just re-wording and scaffolding an answer you already know is correct.
2) It wasn't all bad. Some students really took to this and made great progress by having a dialogue with the AI assistant. It wasn't just the high-fliers either. By the end of the week I'd worked out a few stock prompts that supported students who otherwise would have really struggled with programming to access the tasks. One in particular was "Can you create a step-by-step checklist for the instructions for this activity?". Off the back of that response, we then looked at asking follow-up questions. If the checklist had something like "Update this variable to be X", students would then ask "How do I update a variable?" and apply the knowledge to the challenge at hand (a minimal sketch of that kind of exchange follows below). On more than one occasion, I watched these interactions prompt the "eureka" moment that anyone who's ever tried to program knows can be oh-so-elusive. Long story short, LLMs can act as MKOs - this can work, I've seen it work.
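For anyone who doesn't teach programming, here's a minimal sketch of the kind of task behind that exchange. The variable name and values are hypothetical, not taken from Codecademy's actual activities:

```python
# Hypothetical Codecademy-style activity: the instructions might read
# "Update the variable score to be 10".
score = 0   # the starting value the activity provides

# After asking "How do I update a variable?", the student learns that
# assigning again replaces the old value with a new one:
score = 10

print(score)  # prints 10 - the self-marking check passes
```

The learning isn't in the two lines of code; it's in the student asking the follow-up question and applying the answer themselves, rather than pasting in a solution.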
What was the issue then?
In short? Motivation is built upon more than just a sense of Competence. In non-self-determination-theory speak: "just because you know how to do it, doesn't mean you'll want to do it".
If you haven't come across self-determination theory before, it's worth a google. I've always found its model to be quite a compelling explanation of motivation as I've seen it embodied in the classroom as engagement, and a useful heuristic for working out what support students might need. To summarise the idea: motivation ranges on a continuum from external ("I'm doing this to avoid negative consequences") through to internal ("I'm doing this because I enjoy it"), with a few distinct categories along the way.
This is relevant because, I think, it helps to identify the type of student (a significant minority of the year group) for whom this approach to learning programming categorically did not work. Generally speaking, I'm always trying to get students to at least the "Identified Regulation" category: "I understand the utility of this, even if it's not my favourite thing to be doing." The issue I had last week was that the approach did not provide enough structure for those students who fall into only the External or Introjected Regulation categories - the students who are only doing-the-thing to avoid a negative consequence.
These are the students who immediately worked out that they could trick the AI assistant into giving them the answer, and no amount of scaffolding, talk about independent learning skills, examples of wider context and relevance, or anything else I tried would get them to stop. Programming was hard and they "weren't going to need this in the future" (cue arrogant teacher eyeroll, because I've forgotten what it's like to not know what the world is like), so they'd just get the answers instead. I can't complain; I had that mentality for most of my degree (annoyingly, before ChatGPT could make my life easier - maybe this is all just karma?). Either way, the end result was that a good proportion of able-but-externally-motivated students weren't accessing the learning; they were just completing tasks.
I don't think that's entirely down to the AI. The other mediating factor, as every teacher knows, is that getting students to read the "fantastic" question is a never-ending battle. Where previously I had created my own videos to scaffold tasks and instructions, the move to purely text-based instructions seemed to be a barrier for some of those students as well. The oft-repeated refrain of the week was "What do the instructions say? [student reads instructions]" "Ohhhhhh". That goes back to the routines mentioned earlier and the first takeaway for next time: teach them to use AI for assistance on something they definitely already know, not something new. This tripped me up because, whilst they had done programming last year, I underestimated how much they would have forgotten in the time since. It would be akin to asking a language learner to recall one term's worth of content after a year: do they have some knowledge? Sure, but we need to do a lot of work to bring it back to the forefront. Likewise, this wasn't their first use of AI. We had covered that before as well (we run a Generative AI unit in Year 8, and ran it for both Year 8s and Year 9s at the beginning of last year to make sure everyone was up to speed) but, as using AI for learning is not yet a widely embedded practice, those skills were clearly rusty too.
The second takeaway is one that I had already planned for but, unfortunately, not planned for well enough: "Is my accountability system set up to handle students using AI for answers rather than learning?". I'd argue that on this point I actually did do well - it's not like I didn't know they were just using the AI for answers. I've been experimenting with flipped and asynchronous classroom formats for a couple of years now and, as such, have established a few routines for "passive accountability" (i.e., working out whether you learned the thing properly or just looked up the answers). My favourite for programming is the "mini-viva-voce", where I pick an activity, line of code, or key idea in their code and ask them to explain it and why it's there. If they can't explain what it does or how they solved it, it prompts more research until they can. The problem I encountered here was that the speed with which they'd complete tasks meant I couldn't effectively run the viva-voce system, because everyone needed their work marking at the same time. Point to the students on that one; I'd love to see the game-theory analysis of the optimal strategies in that particular scenario.
Next question then:
Why, in the face of the failure here, are you still confident LLMs can be a force for good in learning?
As I said earlier: I saw those eureka moments. I saw the moments where students who otherwise would have been left behind by the pace of a traditional synchronous programming lesson engaged in dialogue with the AI and found answers that they actually understood. That alone warrants further exploration. If we can replicate that - if we can turn LLMs into self-scaffolding tools - the possibilities for supporting students who otherwise find themselves without that support could be game-changing for young people.
The other reason, however, is that before I used it with the Year 9s, I used the same platform with my Year 10s - and they absolutely smashed it. Computer Science is an option at GCSE and I'm fortunate to have a really engaged class, all of whom would fit into the "Identified Regulation" category or somewhere further to the right. I didn't have the same issues of performativity or optimising for the path of least resistance. The students wanted to learn programming, and they used the AI assistant dialogically to support that. It's not a ground-breaking conclusion - it's always been true - but it is yet another place where motivation mediates engagement in authentic learning experiences. The question it raises brings me to my final, and I think most important, takeaway from the week:
If we're to unlock the potential of AI as a learning tool, are we as schools and educators truly ready to confront the challenge of building internal motivation?
It might seem like a niche question, but it has huge ramifications. If we look across our industry (and, in particular, at the current "behaviour" discourse), the focus is on external motivation. We have our accountability systems in the form of our behaviour policies, but they seem to be generally designed to inculcate external forms of motivation. I'm not saying that's not necessary, but do they do enough in an age where the "path of least resistance" or the "path of performativity" in learning is easier than ever to follow? Do we truly know how to build the more internal forms of motivation? To move students from the left-hand side of the motivation taxonomy to the right? To get students to value learning and knowledge, and not just the completion of tasks and the ticking of boxes? And can we do that with the same reliability with which we teach knowledge in English, Maths, and Science? Perhaps most difficult and terrifying of all, what concessions are we going to have to make in the flexibility of our curriculum and our systems in order to maximise more internal forms of motivation? How are we going to plan for that freedom?
The answers to those questions are complex and, in some cases, age-old and resurfacing with the emergence of AI. My concern is that we charge headlong into the AI revolution without actually changing the way we as educators and schools operate and, when it all comes crashing down because we've not effectively managed the transition, we swear off AI forever. For what it's worth, I don't feel I'm ready to tackle the challenge of motivation. I don't think I can reliably move students toward internal motivation. I have succeeded at it, of course, but I have definitely failed at it as well.
So as I fail-forward into the next endeavour, and lick my wounds as I go back to my traditional lesson planning, that'll be the question I take forward with me: "How am I going to more effectively build more internal forms of motivation in my classroom? And how will I use that motivation to support students to support themselves?"