
S08 E04 – Lenses of Literacies

In this episode, Helen Beetham, an author, consultant and researcher in the field of digital education, talks to Doug, Laura and guest host Ian O’Byrne about AI in relation to the future of media and information literacy.

Books

  • The Volcano Lover by Susan Sontag

Find all of our guests’ reading recommendations at The Tao of WAO book club.

Transcript

Important note: this is a lightly-edited AI transcription of the conversation. If you require verbatim quotations, please double-check against the audio!

Doug Belshaw: [00:00:21] Welcome to the Tao of WAO, a podcast about the intersection of technology, society, and internet culture with a dash of philosophy and art for good measure. I’m Doug Belshaw.

Laura Hilliger: [00:00:30] And I’m Laura Hilliger. This podcast season is currently partially unfunded. You can support this podcast and other We Are Open projects and products at opencollective.com/weareopen.

Ian O’Byrne: [00:00:42] And I’m Ian O’Byrne, guest hosting this season of the podcast where we’re looking at the future of new literacies as part of a submission to the Winter 2023 edition of the Journal for Media Literacy. Or, as we might refer to it, our essential question for this podcast series is how might we define the future of media and information literacy in theory and practice?

Doug Belshaw: [00:01:05] So today’s guest is Helen Beetham, author of Rethinking Pedagogy for a Digital Age and consultant and researcher in the field of digital education. So welcome, Helen.

Helen Beetham: [00:01:16] Hi, Doug. Hi Laura and Ian. It’s great to be on the podcast with you.

Doug Belshaw: [00:01:21] Our first question, as always, is what is your favourite book?

Helen Beetham: [00:01:27] Well, this is a difficult question, isn’t it? But I was mentioning Susan Sontag in a recent Substack post of mine as a cultural commentator, and I think it’s the only novel Susan Sontag wrote, in fact. But there’s an amazing novel from the early 90s called The Volcano Lover, which is kind of about the lives of William and Emma Hamilton, but seen through this incredibly contemporary lens. And I feel like, for me, until we had the Wolf Hall trilogy, which I also love, this was the novel I’d go back to for a view of the past that feels unbelievably fresh and exciting. It’s full of gender politics and other politics. And yeah, that’s the novel I go back to, I think, very regularly.

Ian O’Byrne: [00:02:15] Uh, Helen. So, as we said before, our essential question that we’re teasing out throughout this season is how might we define the future of media and information literacy in theory and practice? Could you please explain some of the work that you’ve done or are currently doing in this area to help give some context for listeners?

Helen Beetham: [00:02:33] Thanks, Ian. Yeah, I’m kind of nervous of the idea of future thinking and futurology, because I guess my tendency, as that novel might suggest, is to look back to the past and think, what can we learn from how things might have looked, you know, in the past of some of these terms that are defining our present, like artificial intelligence? So I guess in the most recent writing I’ve been doing on this Substack, Imperfect Offerings, I’ve been trying to look back at the recent history of artificial intelligence in terms of how the large language models, the large image models that we’re using have actually been constructed. You know, where they came from. And I’ve started now to really look further back. Some of the work that people like Meredith Whittaker are doing around the origins of the term artificial intelligence, going back to the 50s, I think is really important for us all to think about in terms of how we become more literate about what the project is, you know, what the interests are at work in it, what the possibilities of it are. And actually, in my most recent post, I’m going to be going really far back to Charles Babbage and the origins of computing, and how that was all tied up with a new way of thinking about intelligence as well. So I guess it’s a way of saying that I feel my contribution to thinking about the future always takes me back to the past of what we’re dealing with in the present, as a way of mapping, you know, possible paths forward. I don’t know how you guys feel about this, but we tend to get presented with technology as an inevitable story about the future. And I’m always interested to think about how it could have been different, which gives us the hope that it could be different again.

Doug Belshaw: [00:04:16] Yeah, I remember Audrey Watters saying that the best way of inventing the future is to issue a press release, which seems to be about right.

Helen Beetham: [00:04:26] Well, artificial intelligence has been an infernal and eternal press release as far as I know.

Laura Hilliger: [00:04:32] Yeah. It’s interesting. I feel like the past is actually one of the themes that has been coming up during this season and the series. We’ve talked to a few other people, and we’ve gotten sort of into the weeds about, you know, the history of media going all the way back to iconography and the beginning of writing, and how, you know, societies and cultures change even with, you know, the invention of a pencil. And so I absolutely agree and completely understand why it’s interesting to look at the past, because we can learn just so much from, you know, how society unexpectedly changed when people were offered books or when the radio came. And I’ve always been really interested in sort of the fact that we think that the internet is such a massive change. And it is, don’t get me wrong. But, you know, books were also a massive change. Printing was also a massive change that literally changed everything about the way human beings were interacting, so.

Helen Beetham: [00:05:33] And writing was a massive change. I mean, when I wrote the sort of first long piece for my current Substack, it was about language and language models. But I very much got, as you say, into the weeds about why we write, and why we write at university is a subset of the question of why we write. Because I guess I’ve been involved in the angst in the higher education community about what this new set of technologies means for student writing. But the question about why we write at university, as I say, is a subset of the question of why we write. Why do we write at all? You know, how does that help us to develop as people and as communities? And then I included some quite, I’d have to say, quite speculative anthropology in that post. But I do think speculative anthropology is not completely alien to these questions of what we’re doing with technology and what technology is doing with us. Given that, I think, for as long as we’ve ever been human, we’ve always been technical beings, and mediated, therefore, you know.

Laura Hilliger: [00:06:41] Yeah. So in this series, we have defined a series of lenses that we’re trying to think about literacy through, just to sort of unpick how literacy or literacies shift and move through these different themes. And the themes that we’ve chosen are race, gender, AI, and geography. And we’re taking an intersectional approach, of course, because we can’t break these things up. But you’re based in the UK, and so we’d love to hear a little bit from you just about how location actually impacts literacy, if you have any thoughts there.

Helen Beetham: [00:07:23] Thanks, Laura. Well, I guess in the contemporary world, you know, the advent of large language models keeps reminding us of how hegemonic English is as the language of the internet, doesn’t it? So, you know, one may wish to no longer be sitting at something that looks like the colonial centre, but, you know, that continues to be the reality: that we are sitting in the heart of where power lies in terms of what is being gathered up into these language models. It’s, you know, very much predominantly based on English language, despite the new versions, and it’s very much based on a European and North American centric view of what knowledge is, what knowledge matters, you know, logic and problem solving as kind of core parts of the curriculum. So I guess I feel the responsibility of sitting, despite the decline of the country, England, in which I’m sitting, at that kind of imperial heart of the AI project, and I’m very interested in people like Meredith Whittaker and Simone Browne who are tracing the racial and colonial origins of computation, factory and labour management, divisions of labour, all of which come together, I think, in thinking of artificial intelligence as new ways of managing labour. I think that’s kind of at the core of my interest in it. So I think there’s a real intersection there, and the intersection, I guess, is colonialism: between issues of race and gender, racialized and gendered labour, and the idea of intelligence and the technologies of intelligence, including measuring intelligence, which are profoundly racial and gendered technologies, and which is where the idea of intelligence as one thing actually comes from. It actually comes from the statistical technology of measuring that Spearman came up with and applied to this thing he called general intelligence, actually in the face of a lot of opposition from other statisticians who said there’s no evidence it’s one thing. And so, yeah, I guess, being a materialist, I’m seeing some of those intersectional issues that you’ve just raised through a frame of thinking about how these technologies are actually materially structured, how they come about, what forces are at work in them, drawing on the work of lots of people who’ve done that. And also, I mean, I can’t help noticing as a feminist how these new mediating technologies are so dominated by men. I mean, we’re on audio, but I shared with Doug what I thought was an extraordinary pie chart when we discussed doing this podcast, which shows that about 3%, I think, of designers and engineers in the AI industry are women, which is quite incredible. I mean, I can’t think of another industry where you would say that that was the case. So yeah, I think, in thinking about the material construction of these technologies and how they then shape our mediated reality, these issues are very much to the fore in my mind. Yeah.

Laura Hilliger: [00:10:38] That’s interesting. 3% is exactly the percentage of women in open source. That study is a few years old, I think, but Mozilla, Creative Commons, Wikipedia, they all got together and they did this massive survey of open source projects, maintainers, coders, designers, all of it. And 3% was the statistic. So I wonder if it’s just, you know, women in tech. 3%.

Doug Belshaw: [00:11:06] Yeah. We should not accept that in any way. We first got to know each other a bit through the work that you were leading, and I was supporting, at Jisc around their digital literacy programme. I know you’ve continued to do that for quite a long time. We probably haven’t got time to go into the depths of that, but you talked about being in higher education, and I wondered to what extent having that view of all of higher education, really, not just being in one institution, affects your views on some of this as well.

Helen Beetham: [00:11:38] Thanks, Doug. Well, I am very grateful, you know, that Jisc funded that work. I think it represented, as Mozilla’s work also did at the same time, a shift in emphasis away from ‘let’s get this technology out there’, you know, let’s just pile technology into schools, into colleges, towards thinking really about people as users of technology and thinking about practice as being mediated by technology. So this idea of digital literacy or digital capability for me was a really powerful way of shifting the discussion, and I know that you were excited about that as well: shifting the discussion away from what does the tech do, to what do the people do, and away from, you know, empowering people with tech towards empowering people with know-how and awareness. And ideally, you know, with critical awareness, which has always been key for me in thinking about digital and media literacy. Much more interesting than the Russian doll debate, you may remember, about whether digital literacy went inside media literacy or the other way around. And I think that debate about criticality is really evident in the present moment, when suddenly a new kind of surge of technical development, or at least technical rollout, comes along, and we really have to fall back on resources of criticality, of thinking, of awareness, don’t we? So, thinking about looking across the piece: I’ve been very lucky to work in a lot of different institutions of higher education on that issue of digital literacy, also in health care, actually in health education, and in librarianship.

Helen Beetham: [00:13:22] And I think I always relish the opportunity to hear from people about practice, and to discover just how critical people are, in fact, in their understanding of technology, understanding of mediated practice, understanding of digital media. But at the same time, I worry that any attempt to kind of bring that awareness in a more systematic way into institutions tends to then become reified. You know, you have your framework, you have your little boxes people have to tick: can you do this? Do you think about that? And for me, the real requirement is for very detailed conversations within communities of practice. The time to engage, the time to think about the history, the time to imagine alternative futures, to test them out with people who have different perspectives, who come from different disciplines, you know, students and academics thinking together. And something that the new media does, in its emphasis on speed, acceleration, efficiency, is tend to remove those slow moments for conversation. So we’ll always take the digital capability framework, because it looks like a quick solution to a problem we have. But will we provide the slow spaces, the resourced spaces, the open spaces, in every sense of open, for a really deep discussion and encounter with technology, with each other, with and through technology, to understand the possibilities of it?

Doug Belshaw: [00:15:00] It’s interesting what you said there. I want to bring Ian in here. Ian, you work in an academic institution. And I remember as a teacher, you know, there being initiative after initiative. And so every academic year, you get out your scheme of work and your curriculum, and then there’s another box to add on the end, like the latest government thing, the latest framework, whatever you have to do. Can you go through and tick it? Can you drop any of those columns because they’re no longer on the agenda? And as Helen says, it’s not a discussion. It’s an implementation. And that’s a very different way of looking at it. And I wondered what you thought about this in terms of the way that affects your work, and also any questions that you’ve got for Helen.

Ian O’Byrne: [00:15:43] Yeah, I’m thankful for Helen’s comments about, you know, looking at the future by thinking about the past. And, you know, one of the things that we keep seeing come up again is, with AI, there’s a lot of hyperbole and hysteria right now. In my institution, and in a lot of institutions in higher ed, but also K-12, there is this belief that we should just ban ChatGPT. And I’m like, okay, well, it’s more than just ChatGPT, but there’s no desire to have discussion or thought, or to think about power or money. Um, and the point I try to make is that these technologies have been in our lives for some time now. So your Netflix queue or your Amazon queue or, you know, your Spotify playlist, these things have been paying attention to you for some time. And so now, to a certain extent, it’s empowering. But we need to have that discussion and that thought process. Throughout waves of new technologies entering our lives and entering our systems, we’ve had opportunities to have these discussions, and we have not had them. You know, we had these opportunities most recently with Covid, you know, and the move to emergency remote teaching. And now Zoom is global infrastructure, where none of us used it previously. Um, and so we routinely have the need to have these discussions about the role and place of these new and novel technologies in our lives, but we have not. And now with AI, we have another opportunity. And as the next large proportion of people that are going to enter the internet are non-native English speakers, we have another opportunity to have a discussion and deep thought. Um, and once again, I have a feeling we’re not going to have those discussions. So my question for you is, for those of us that are listening, for people that would think about this research on an individual level, what critical lenses should we employ? So I’m an educator, I’m an academic with a lot of student loan debt and fancy papers on my wall. For my sister, who is a stay-at-home mom, or the K-12 teacher at the end of the street, as we hear all these things about these new apps or these new AI things that can change our lives on a day-to-day basis, what lenses should we employ as we try to make sense of what’s important and what isn’t worth our time?

Helen Beetham: [00:18:28] Wow. Well, I think calling them lenses in itself is really useful. And the topics you’ve selected are really great lenses, aren’t they, for thinking about some of this? Um, I think I want to say two things in response, and one is about the lens of timing. So it feels like, although you’re right, we’ve had these waves, wave after wave. I mean, I compare the kind of natural language interface that we suddenly have on lots of search and database operations to the graphical user interface. I’m old enough to remember how exciting and compelling that was in terms of changing your relationship to technology. We’ve had these waves of change. Um, but it feels like the window in which they’re visible, before they’ve got integrated, is getting narrower and narrower. So, you know, we’re already seeing as we go back to the new academic year that generative AI models are embedded into most of the learning platforms. They’re deeply embedded into search. Microsoft only funded OpenAI on the basis of embedding it into their entire office suite and their entire workflow. That’s the business model. That’s where they’re going to leverage profit from, mainly, not from individual users. So already there are the ways in which using ChatGPT, for example, changes student writing, which is a conversation many of us are having. I have to be cautious because I do teach for a couple of universities, but I’m sure many of us this summer have been looking at student submissions and going, how are they different? What does this look like? I mean, banning it is obviously ridiculous, because we know that every student is using it; it’s just a question of how intelligently they’re using it, how well resourced they are. So we had this summer, which suddenly feels like a precious moment when those conversations with our colleagues actually happen. Like, what’s changed? In fact, the differences are quite subtle. How is it changing ourselves? How is it changing how we write? And I think timing is really critical, and that time window we’re in. And just like you said, in the time window for that Covid shift, I was lucky enough to interview more than 50 academics for my own research, just in the aftermath of that shift, as people were beginning to come back onto campus. And it was so visible to those teachers how the online relational space was different to the embodied classroom space. Not necessarily better or worse; better for some students, worse for some students, different for everybody. It was so amazing to have those conversations about teaching, and about the relationship of teaching, in the aftermath of an experience where embodied cues had been massively reduced. People had had new ways of participating, you know, chat windows, emoticons, whatever it was. The whole of teaching was potentially being seen through a different lens. You know, the technology was the lens there. And I guess the other thing, beyond the timing window, would be one of the things that came out of those interviews I did with academic staff, which were mainly about critical media literacy, what that meant to them, how they thought about it: expanding the context. So the narrow context is the user experience, isn’t it? Is this technology working for me? Does it do what I want it to do? And that’s the basis on which a lot of digital and media literacy functions.
And then if you expand that context a little bit, you might start to think about myself as a producer. You know, how could I participate in this media creatively, in an empowered way, with agency, collaboratively? And then (I’m doing things with my hands that show onion skins moving out) I think there’s another layer where you start to ask, okay, if I look at this from somewhere else, maybe from the perspective of a non-user, or from the perspective of the person who’s building this technology, or from the perspective of the people who are trying to make money out of this technology, how does this space look? You know, all the way out to thinking very systemically about it. And I think those last few moves are the ones we all need to be making. But of course, they’re the most difficult moves, aren’t they? They’re the most challenging. If I think about the kinds of expertise we need to really understand what’s happening in the interface between technology, media and learning, which is the interface I care about, you know, there are so many intersecting knowledges, perspectives, lenses that we might need to apply. It becomes quite daunting.

Ian O’Byrne: [00:23:02] Well, no, I mean, one of the things we always talk about is power and privilege in literacy. And your point about Covid, you know, I agree. I had a lot of discussions with colleagues and friends, and one of the points that I would make is that Covid provided us an opportunity to strip bare the systems that we thought were in place to support all individuals. But, you know, what was working previously wasn’t working for all. And then all of a sudden, when we moved back home and we moved to virtual, we realised a lot of people don’t have access to the internet, or they don’t have rose gold MacBook Airs, and they don’t have opportunities to sit in a private office and have their own space to work. And now we’re seeing it again with AI and with machine learning technologies, you know, where we privilege standard academic English in all of our spaces, for better or worse. You know, my modicum of intelligence is reflected by my GPA, my ability to speak and have standard academic English come out of my mouth, my ability to write lots of words on paper. But now we have tools that can replicate a lot of that. So for non-native English speakers, for people that didn’t grow up in a certain part of the world, for people that might need modifications or accommodations, there’s a way to, you know, level the playing field. And I know that causes some challenges for people, but there’s an opportunity to level the playing field and perhaps empower a lot of other individuals.

Helen Beetham: [00:24:39] Yes, it’s very tricky, isn’t it? Because I agree that accessibility is the first requirement. You know, access to technology is the first requirement to have anything to say about it. And I guess my question would be whether the playing field is the right one that we’re levelling to, you know. So, um, one of the problems with that approach, I think, is that it implies that everybody needs to have access to the same kind of words coming out of their mouth, the same kind of text being written on the page. Would it not be better, perhaps, to reframe what an academic contribution looks like in a mixed group of students, so that it might look very various? It might come from a whole number of different poles of perspective. There might be students with some abilities in language and not others. Students who are operating not in their first language, for example, are a massive resource, including a cultural resource, because of the barriers they’ve had to overcome, because of the different cultural perspectives they bring. Now, that’s not to say, you know, that I don’t want people to have access to the ability to write, to pass. And I’m interested in the phrase cosplay.

Helen Beetham: [00:25:49] You know, we ask our students to pass as competent students by performing certain kinds of linguistic tasks, which can now be performed for them, to some extent, by these models. The question then becomes, well, are we asking them to pass in the right ways? Are we creating the right playing field to level? You know, could there be other ways of assessing what students can do and can say? Because a probabilistic model essentially sees the world as a series of variations around a norm: not as a series of qualitatively different cultures or perspectives, but literally a series of variations around a norm. If we, you know, outsource writing and thinking to that model, what are we losing? We’re certainly gaining a level playing field, because that is the entire purpose of the model: to be a level playing field, and to see everything that’s not on the level as somehow aberrant and, you know, capable of being brought back to the norm. But that suggests to me that perhaps the things we’re valuing, and the ways we’re assessing them, could be made more open, more various, as an alternative.
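
To make the ‘variations around a norm’ point concrete, here is a minimal, hypothetical sketch of how a probabilistic language model chooses its next token. Everything in it (the toy vocabulary, the scores) is invented for illustration, not taken from any real model.

```python
# A toy illustration of "variation around a norm": a language model
# scores possible next tokens, turns the scores into probabilities
# (softmax), and samples. Common ("normative") continuations dominate;
# unusual ones are almost never produced.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical next-token scores (logits) for four continuations.
vocab = ["the", "a", "quixotic", "sesquipedalian"]
logits = np.array([4.0, 3.5, 0.5, 0.1])

def sample_next_token(logits, temperature=1.0):
    """Softmax over logits, then sample one token index. Lower
    temperature sharpens the distribution further toward the norm."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs), probs

idx, probs = sample_next_token(logits)
print("sampled:", vocab[idx])
for token, p in zip(vocab, probs):
    print(f"{token!r}: {p:.3f}")
# 'the' and 'a' take ~97% of the probability mass between them; the
# rarer words are effectively suppressed.
```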

Doug Belshaw: [00:26:54] I find it fascinating, what you’ve just said there about going to a norm, and I hadn’t really thought about it exactly in that way. But it reminded me of an article I saw which asked Midjourney, the generative AI model, to create a group selfie of people in the past, or different groups around the world or whatever. And this article was saying that they’ve all got an American smile. They’ve all got the amazing white teeth, smiling in the same way, kind of thing. But people around the world sometimes don’t smile for photographs, first of all, and second of all, don’t smile in the same way. And all of those cultural differences are being homogenised into a particular way of looking at the world. And just this morning I saw that, and this is why it’s terrifying, all this kind of stuff, if you search Google, apparently, for Tank Man, you know, the guy who went up against the tanks in Tiananmen Square, the top photograph at the moment is an AI-generated selfie of Tank Man in front of a tank. Which blows my mind a little bit. Anyway, yeah, I just thought there’s a visual version of what you’re talking about there with the homogenisation too.

Helen Beetham: [00:28:15] And, just very quickly, there’s a learning technologist in the UK called Dustin Hosseini, who did some really interesting work asking for images of different cultures, of different countries. And the stereotypes that came up from Midjourney were shocking. And I guess because they’re visual, they’re more shocking. But some of the textual stereotypes, you know, are also problematic.

Ian O’Byrne: [00:28:39] One of the things that you brought up reminded me of another interview in this series where we were talking about the use of language and the focus on English, and, you know, some of our participants said that they valued the use of English, and the opportunity to learn English, because that was what they needed to communicate and interact online. And from my privilege and my perspective, I pushed back and I said, but couldn’t we imagine a future where you can just go online and speak your native language, where we can have a diversity of languages online? And there really was some disconnect. And then I spent time afterwards talking to Doug and Laura and saying, well, is it something about my worldview or my privilege and my perspective that I’m getting wrong? But I would rather have a very diverse internet. Um, and so one of the challenges that we’re having in this research, or possibly it’s part of the findings, is, you know, we go into this and we say we have these buckets, these constructs of race, gender, AI, or international translanguaging. We have these buckets that we want to explore, and a lot of that’s informed by our own privilege and perspective and identity. Um, but then as we talk to individuals, it’s interesting that many don’t want to subscribe to those buckets or speak for other buckets. So they might say, okay, yes, I am an international scholar, but I don’t really have a lot to say about that. Or, yes, I may present as female, but I don’t have a lot to say about gender and technology. Or, I’d rather say I see some confluence between AI and these other spaces. So, you know, as we’re thinking about the future, as we’re learning from the past, and we talked earlier about critical lenses, how do we individually sort of frame that when there are so many competing streams? You know, Laura referenced the intersectional approach before. How do we make sense of what’s happening by thinking about our own identity in that mix?

Helen Beetham: [00:31:01] I think I should respond to that by saying, first of all, that I know many people in the kind of world of open who are experimenting with small and mid-range language models to serve the needs of particular communities. You know, I think there are really interesting possibilities around indigenous knowledge, for example, preserving indigenous knowledge in particular models in particular ways. So I don’t think I am opposed to the technology of language. I mean, you know, the technology of language processing really came from the digital humanities. What these large language models are doing is not that different to what has been done with text density. It’s just that the scale, the intensity of compute, the, you know, the capital required to train these massive models is of a different order. So if we think about the future of media literacy in a positive future, I think it would be wonderful to imagine these interconnected walled gardens of communities of interest. You know, maybe it’s a very small community, a fanfiction community around a particular writer that they absolutely adore, and they just want to mine the possibilities of that one writer’s complete oeuvre inside a model that they train and refine, and they own that. They own that little space, that little model. And it’s possible to think of minority languages doing similar things. It’s possible to think of, you know, for good and bad, kind of radical political groups deciding that they want to train a model on their particular way of thinking about the world. And I guess in a positive vision of the future, these walled gardens would be small communities where what I’ve described as the kind of massive divisions of labour within the actual technical model itself could be undone a bit. You know, the librarians, the curators, the users, the model designers could be of a community. They could share their expertise with each other. These walled gardens, in my positive vision, would be interconnected. So, you know, you wouldn’t just sit in your own bubble. You’d be able, maybe a bit like Mastodon instances, to kind of range and venture from your walled garden. I think there are a couple of really good reasons why that is unlikely to happen. And I guess as a critic, it’s really useful to put that positive vision forward and then think in the real world about why that isn’t happening. I mean, we could look at that. But I don’t know if that answers your question about, you know, what’s positive.
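
As a rough illustration of what a community-owned ‘walled garden’ model might look like in practice, here is a hedged sketch of fine-tuning a small open language model on a community’s own corpus. The model name, file path and hyperparameters are illustrative assumptions, not a recipe from the conversation.

```python
# A sketch of a community fine-tuning a small causal language model on
# its own texts (e.g. a fanfiction archive), so the community owns the
# resulting weights. Assumes the Hugging Face datasets/transformers
# libraries; "community_corpus.txt" is a hypothetical file.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # any small open model the community can run locally
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# The community's own corpus, one text per line (hypothetical path).
dataset = load_dataset("text", data_files={"train": "community_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="community-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the fine-tuned weights stay with the community
```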

Ian O’Byrne: [00:33:32] Why don’t you think it will happen? Because, you know, when we talk about these things, there is sometimes a tendency for people to get on the hype cycle and get excited, but then a lot of us will say, well, this is not the way it used to be. Especially with AI, there’s a belief that this is just going to be the end of society. And so lately I’ve been trying to be a little bit more positive and a little bit more hopeful for the future of technology and society and empowering individuals. And maybe it’s just me telling myself lies to keep myself sane. But why do you see that? How do you see that tension, and how this might not ultimately turn out the way that we want it to? Because we’ve seen this happen. When new literacies research began, my advisor famously, you know, started off the foreword to this entire tome saying, this will change everything. This will empower everybody. And then we looked back a decade later and said, how foolish were we? You know, the more things change, the more they stay the same. So what tension do you see, and why might we not really see difference occurring?

Helen Beetham: [00:34:48] Well, off the top of my head, and I think it’s really important to be hopeful and to be positive about new technology, I’d say number one: the nature of open, and you guys know as much about this as or more than I do. I mean, Laura, you started out by observing the gender bias of, you know, open source developers. That’s not because women aren’t interested in open source development. That’s because you actually need some residual resource to give to community, to give to open. You know, if you are insecure, you haven’t got a surplus to give to a community-based open project. So the people who have security, tenure as that might look like in academia, who are working perhaps in a dominant language, so there are lots of resources, who are highly educated: these are the people who are going to be able to step into the spaces created by new technologies and make them their own. We know this in the open community. We create structures and policies and ethical frameworks to make that not happen as much as it might. But, you know, the world is the world. So I guess the nature of open, again looking back to look forward, would give us all some concerns about the shape of that. Secondly, I think the nature of the models themselves gives me some concern, because they’ve moved on from symbolic to probabilistic AI. You know, I studied AI in the 1980s, when it was much more of a symbolic effort, and that had its own problems, who counted as an expert being the main problem with designing expert systems. But we’ve moved into an era when scale is literally everything. I mean, the amount of compute you have determines what kind of model you can build. And then prime mover status is very significant. So we know that these models are being trained and retrained on user conversations. You know, you kind of release these models, frankly, before they were really usable, but you get people locked into them, into how amazing they are. You get people using them. You build up your capacity to retrain your own model. So I think the nature of the models, and how they have emerged at this particular point in the history of tech with a few very, very dominant companies, you know, the amount of compute that Microsoft and Nvidia were able to throw at this on behalf of OpenAI, will never happen again. And all these open models are being built on those structural foundations, which embed inequality, inequity, forms of colonialism. So I think, you know, you could say that of almost any technology, but this is happening right in front of our eyes. So, you know, maybe we need to worry about the nature of the models themselves.

Doug Belshaw: [00:37:30] It’s interesting you talk about the colonial aspect. So I’m reading a book recently which I’ve been meaning to read for ages, called The Mushroom at the End of the World, which is about mushrooms, but also about capitalism and colonialism and stuff. And, you know, I used to be a history teacher. I used to teach this kind of stuff. But the way that it was presented in terms of alienation: you know, you’re taking the sugar cane, which isn’t naturally from that area, and the people who are planting it do not know how it works and are cultivating this monocrop on an island in the middle of the sea, and then bringing in the labour from Africa, and there’s alienation all the way down. And that is explicitly outlined in the book as being the start of the factory-based system. And, as we talked about when we were planning this podcast episode, there are the ways in which managing and disciplining labour is built into all of these technologies. And that underpins kind of the literacies which sit on top of the technical systems that we’re talking about here.

Helen Beetham: [00:38:37] Absolutely. And as well as that book, I mean, I’m inspired by Simone Browne’s work on race and surveillance, and, as I’ve said several times, Meredith Whittaker’s. I think what’s really fascinating is the way that the idea of intelligence itself can be taken back to the idea of disciplining and dividing labour. So, exactly: once slavery was over, or legally over, nominally over, there was the need to discipline labour on the plantation, and the need to discipline labour in the 1830s in the colonial heartland, which was being driven by Luddism and, you know, factory strikes and so on. Without making an equivalence between those two locations, I would never want to do that, there was a common need to find ways of disciplining labour. And what Babbage wrote about mainly was not computation. It was about the ways that you divide up the task into smaller and smaller and smaller unskilled units, and then you take the intelligence that used to be in that task, whether it’s farming or weaving, the native intelligence of how to look at the weather, how to think about crops, how to create a pattern that was culturally specific in your weaving, that intelligence you take out. And you put it in something like the Jacquard loom punch card, which is the first ever computer program, and you put it in the heads of the managers, and you leave the workers without the intelligence, you know, without the knowledge of their own work. And this is one of the key ways you discipline labour. Now, the idea of intelligence arises at that time. Previously people talked about mental faculties, mental values, sensations. You know, the whole idea of phrenology, the Victorians going around measuring your head bumps, came from the idea that intelligence wasn’t one thing; it was lots of things in different regions of the brain. The idea of intelligence as one thing, a kind of executive function, came out of the time when factory labour and plantation labour were being disciplined through division of labour. Babbage’s idea of the difference and analytical engines was about dividing up mathematical functions in the same way, and it wasn’t just a mathematical kind of analogue. The first labourers he intended to put out of work with his difference engine were mainly women and girls, who were called computers, who did the basic maths to create nautical almanacs. You know, every year there had to be a new nautical almanac for the colonial vessels to keep sailing to the colonies. And it was a massive amount of mathematical labour, and most of it was outsourced. And it wasn’t done very accurately, and it could certainly, you know, be done more cheaply by these engines. So right there you have the origins of computation, the divisions of labour, the idea of intelligence. And then you look at the contemporary foundation models and how they absolutely divide, so that they will never meet: the kind of skilled, intelligent labour of the designers, the coders, you know, the computational geniuses who create them; and the vastly underpaid and often quite traumatic labour of what I call the middle layer, of all the annotators, reviewers. You know, there’s lots of evidence that OpenAI drew on people in Kenya, paid about $2 an hour, to annotate all the outputs of the early models to make them human-usable, and that annotation goes on and on and on and on.
You know, if you look at the share prices of those annotation companies in the middle, they’re going through the roof, because AI depends on badly paid labour of human beings making judgements about data, constantly. And then there’s the labour of the user, which is all of us, which is getting sucked into these models and then used to retrain them, and used to tie us in even more to the things they can do for us. Now, you can’t look at that history, with any degree of awareness, and feel at all happy. These models are not just a kind of toy to play with. They’re getting embedded into every single part of our lives. They are a new interface, basically, on every computational function we might want, whether it’s search, data management, coding: these models are coming between us and those functions, just as the graphical user interface came between us and data functions back in the 1980s. And if you think what the graphical user interface did for Apple, which was create the most profitable company on the planet, this is what these models are going to do for OpenAI and Microsoft as prime movers. You know, that is something we need to know about while we’re going, hey, my computer can talk at me in new ways, that’s so fun. Of course we’re going to think that; we just shouldn’t be ignorant of the other stuff.

Laura Hilliger: [00:43:15] I feel like we could just continue talking for hours. You’ve touched on so many interesting points. I feel like there’s so much to unpick and sort of delve deeper into. But in the interest of time, I would love to ask you just a final question: was there something that you wanted to discuss during our time together today that you haven’t mentioned, that you didn’t get to?

Helen Beetham: [00:43:43] I think I would like to bring us back to digital literacy and media literacy. Because I feel like it sounds as though I’m saying, you know, that digital and media literacy kind of doesn’t matter, because there are these monolithic technologies just sitting there, and what can we do about it? And I really don’t want to leave with that impression. So I guess I feel like the kind of conversation we’re having now is creating opportunities for critical digital literacy and digital media literacy. It’s not peripheral to that project. It’s very much part of it. And I guess I would be really interested to hear from all three of you, not what you think the content of a kind of AI literacy framework might be, but what are the opportunities for AI literacy that you see in the spaces that you’re in? And by the time you get back to me, I might have thought of what I see the opportunities are.

Doug Belshaw: [00:44:48] Well, I wrote my thesis on digital literacy, and part of the kind of meta-analysis was trying to create an anti-framework of eight different elements of what a digital literacy framework should consist in. And so I actually asked ChatGPT to apply my model to AI literacy, and what it came out with was actually pretty useful. I mean, that was just a bit of a parlour trick to put on a blog post, but the point of doing that is to have a conversation. The point is to say, okay, there are these different elements: what is that going to look like for our institution? What is that going to look like for our community, or whatever? And as you’ve said, Helen, the point is the conversation, to bring in lots of different, diverse perspectives and to think about what it means, rather than glossing it all with some kind of monocultural approach which doesn’t take into account any kind of diversity or nuance within the entire system. So yeah, that’s kind of how I’m coming into this: trying to think about, well, what are the different bits that we really need to show people, and ask them what they think about those things.
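
For the curious, the kind of ‘parlour trick’ Doug describes can be reproduced in a few lines. This is a hedged sketch, not Doug’s actual code: it assumes the OpenAI Python client, an API key in the environment, and an illustrative model name, and it uses the eight ‘C’ elements from Doug’s published thesis as the prompt scaffold.

```python
# Asking a chat model to apply a digital literacies framework to
# "AI literacy", useful only as a conversation starter, as Doug says.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The eight elements from Doug Belshaw's thesis on digital literacies.
elements = ["Cultural", "Cognitive", "Constructive", "Communicative",
            "Confident", "Creative", "Critical", "Civic"]

prompt = (
    "Apply these eight elements of digital literacies to 'AI literacy'. "
    "For each element, suggest one question a community could discuss:\n- "
    + "\n- ".join(elements)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```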

Laura Hilliger: [00:46:06] I think, for me, something that I have been quite interested in over the last few years, maybe it started with Covid, maybe it just has to do with getting older, I’m not sure, is the things that we are losing from technology. The stories that we lose, the perspectives that we lose. I think that Covid reconnected a lot of people to each other, to hobbies that were not tech-based. And I’ve been reading a lot of stories from oral cultures and trying to sort of unpick the way that I feel about existing in the world. And I think, for me, it’s the ability to use technology to get to know ourselves a little bit better, and then connect with ourselves, each other and the world, maybe not always via tech, but really, like, in meatspace. And I think there’s something there about literacy, about helping people to use these tools to be able to discover bits of our world as humans that maybe they hadn’t thought of before. So I’m quite interested in the resurgence of offline relationship building, and not just between different humans, but also, you know, what it looks like to build relationships in your neighbourhood, what it looks like to build relationships with plants, with animals, with our environment, with city structures. Like, there’s so much happening in our world where we can have impact. And I’m interested in using those tools, or finding ways to help people have that kind of impact, to make the actual world a better place as opposed to just their own piece in it, if that makes sense. So more of a collective literacy, perhaps.

Ian O’Byrne: [00:47:59] I’ve been a literacy and technology vagabond for a number of years. You know, I studied in the New Literacies research lab. I worked with these incredible individuals on the web literacy work. I’ve done stuff with digital literacy and multiliteracies. So I have that background, and I’ve seen some fluidity across those spaces, and I try to urge others to have that same fluidity. Um, but then the last couple of years, I’ve been doing more work in computer science and computational thinking, and trying to make sense of, how do we teach this to teachers? So looking at PRADA as a lens: pattern recognition, abstraction, decomposition and algorithms. And so it’s interesting that my background in cognition and instruction, and thinking about CS and computational thinking, is helping. You know, for the last couple of years I’ve been thinking about how do I teach teachers how to talk about computational thinking or CS in their classroom to tiny tykes, and now, as of the last six, seven, eight months, I’m like, okay, how do I teach people how to talk to their computer? Because a lot of this is around thinking algorithmically. How do we think in a way that the computer can understand, because computers are relatively dumb? So now there’s this interesting confluence, you know, that’s going on.
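
To ground the four moves Ian lists, here is a toy example, invented for this transcript rather than taken from his teaching, that walks through decomposition, pattern recognition, abstraction and algorithm on one small task.

```python
# Task: find the most common word in a piece of text, worked through
# the four PRADA moves Ian mentions.

def normalise(word):
    # Abstraction: ignore details that don't matter (case, punctuation).
    return word.lower().strip(".,!?")

def count_words(text):
    # Decomposition: counting is one small, separable sub-task.
    counts = {}
    for word in text.split():
        # Pattern recognition: every word is handled the same way.
        key = normalise(word)
        counts[key] = counts.get(key, 0) + 1
    return counts

def most_common(text):
    # Algorithm: precise, ordered steps a "relatively dumb" computer
    # can follow without judgement.
    counts = count_words(text)
    return max(counts, key=counts.get)

print(most_common("The cat sat on the mat. The mat was warm."))  # 'the'
```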

Helen Beetham: [00:49:34] So, Laura, I was really interested, and kind of moved, to hear your description, because I think that’s where I have gone in the last few years since lockdown. I’ve gone from being really a sort of political and community activist, and a lot of that energy for me has gone into community growing projects, because post-pandemic, that kind of foundational economy seemed really important at local level. I guess, as with open, you know, it depends on communities having the resources to put into that kind of thing. So now I’m looking at how to take that interest out to communities that don’t innately have some of those resources. And, you know, sharing how we can be more in touch with our own conditions of being, and nourishing, you know, the things that we need from the world. And of course, technology’s always been great for connecting activists and connecting up networks in those ways. So starting with the foundational economy, and using the technology laid on top of that. But I guess, also to respond to you, Ian, I think that idea of algorithmic thinking is really useful. The one layer of labour I didn’t mention, which I’m going to now mention properly, is that all these models are built on the creative work of writers, human writers, human artists, music makers. The whole actors’ strike and screenwriters’ strike in Hollywood is literally about that fact. You know, that there’s the possibility of that creative work being taken away from the people who made it. So I think we need to encourage these literacies with an awareness of those divisions of labour I talked about. On the one hand, you know, there is this new opportunity to value the uniquely human, and I tend to cringe a bit when I hear that, because I think, one, it’s a sort of naive idea of the human as being not already technical, technological. But I also really value that instinct, that we need to talk about what human skills might be. While defenestrating humanities departments, you know, universities are still talking very much about what are human skills, what are soft skills. And I think there’s a real opportunity there: what is it you can uniquely create that might be of value in this world where knowledge does get recycled and recirculated and made use of? And I think that kind of algorithmic thinking fits with the aspiration of my little fantasy future, where everyone is part of a number of communities that have their own access to knowledge, their own ways of modelling knowledge, their own ways of enriching and annotating knowledge.

Helen Beetham: [00:52:08] You know, if everyone is going to be part of such a community, then they need to be those algorithmic thinkers. There also need to be the curators, the librarians of knowledge, the users, you know, who are finding value in knowledge. And that’s a great aspiration to have. I think the problem with those two aspirations is that they pretend that nobody’s going to end up being the workers in the middle there. Nobody’s going to end up being the ignored, the underpaid annotators, the data enrichers whose labour literally gets disguised as a data engine, which is literally what these annotation corporations offer to the AI developers. So we just need to be really careful that, as we aspire for our learners to have human creative capacity and algorithmic understanding, we’re not ignoring the fact that we don’t want to put them out into a world where people can have and get value from those wonderful skills only on the backs of other people being made part of a machine. We don’t want that to be part of the world. So I guess that value, that understanding of how divisions of labour work in this algorithmic world, is really important as well.

Doug Belshaw: [00:53:20] Well, Helen, that’s been a wide-ranging and fascinating conversation. You have the benefit of having the kind of name where, if someone types it into their favourite search engine, all of the different things that you’ve published over the years will pop up. But you’ve mentioned your Substack a couple of times. Do you want to direct people towards that, so they can find out more?

Helen Beetham: [00:53:39] I’m very happy to, and hopefully the link will be alongside this publication. Thanks, Laura, for nodding. And yes, it’s called Imperfect Offerings. I really began it as a distraction from a book I’m supposed to be writing, but it’s become quite a lengthy distraction, and one that I’ve been really gratified to be able to use to connect with other people like yourselves, and other Substack writers and podcasters who are thinking about AI, you know, in a critical way. Which doesn’t necessarily mean as negatively as I have perhaps come across; critical also in the way that Ian’s outlined, of, you know, actually, how can I really create value for my students from these opportunities, value for users from these opportunities, but with that awareness of the wider context that I think is so important. So yeah, it’s called Imperfect Offerings, and it would be great to see some more readers.

Laura Hilliger: [00:54:31] Thank you so much!

Doug Belshaw: [00:54:32] Thank you very much!

"