In this episode, Doug and Laura explore AI Literacy and LLMs like ChatGPT.
Favourite Books
- Atlas of AI by Kate Crawford
Find all of our guests’ reading recommendations at our The Tao of WAO book club.
Links
Transcript
Important note: this is a lightly-edited AI transcription of the conversation. If you require verbatim quotations, please double-check against the audio!
Laura Hilliger: [00:00:22] Hello and welcome to the Tao of WAO, a podcast about the intersection of technology, society and internet culture with a dash of philosophy and art for good measure. I’m Laura Hilliger.
Doug Belshaw: [00:00:32] And I’m Doug Belshaw. This podcast season is currently partially unfunded. You can support this podcast and other. We are open projects and products at opencollective.com/open.
Laura Hilliger: [00:00:44] So Doug, you wanted to kick off this podcast with a story? I think.
Doug Belshaw: [00:00:50] I did. Let me just say that the book, which I’ve been meaning to read all summer and which I feel guilty having not read, even though this is the last episode in our season, which is kind of about AI is Kate Crawford’s Atlas of AI and I’m looking at it now on my desk in front of me and it’s making me feel very guilty that I haven’t read it yet, but I have read other stuff and I’ve been doing some thinking and I’ve just been talking to Helen Betham, who is actually going to feature on season eight of our podcast, which is a submission to the Journal of Media Literacy. Anyway, so I want to start this off by talking about something I did with my daughter last night. She needed to apply for the role of sports leader at her school. She was encouraged to do that by her, by the head of her school, and she was like, Oh my goodness, this has to be in tomorrow. And I haven’t done it yet. And she was like, due to go to bed. So I said, It’s fine. Let’s just get ChatGPT to help us, right? So I subscribed to that. So I’m talking about ChatGPT 4. So I got her to. Let’s give the context. Like what is the context here? Sports leadership, Middle school. This is what sports leader means, all that kind of thing. And then got ChatGPT to ask her some questions. So she had eight questions. You know, one to eight. She put number one, here’s the answer. This all this kind of stuff. So she answered all those questions. And if you think about it, that that might not have been typed in. That could have been her speaking and being like speech to text. She then fills that in and it just writes it like drafts a letter. She copies and paste that into Google Docs, changes some of the words that she wouldn’t have used as a 12 year old girl and sends it off to a teacher done So I want to use that as a as a way into this episode, because what I think that shows is that the way in which we’re communicating as humans is through text. But now we’re getting to a stage where an AI is just spitting out text in a form which is socially acceptable so that we can tick the boxes. And actually where we go from here is probably not just producing masses of text, I think.
Laura Hilliger: [00:03:03] Yeah, I think it’s an interesting way to get into this episode because today we wanted to talk specifically about AI literacy and literacy is something that people often think about through through that text lens. So reading and writing, being able to read, text, being able to write text. But neither one of us actually thinks that that is what literacy truly means. You wrote an entire thesis about what digital literacies plural mean and how to how to help people develop them. I also wrote a thesis doing a meta analysis on different kinds of media information, literacies and focussed in on web literacy specifically. And so if we start to talk about AI literacy, how, how do you think that that is different from the other kinds of literacies work that that you’ve done in the past, or is it different?
Doug Belshaw: [00:04:00] Well, it’s interesting, right? So there’s there’s two elements to this. First of all, is it fundamentally different to what’s gone before? Well, in some ways, yes. In some ways no. You mentioned the work, my kind of doctoral thesis work. So that, too, like yours, was looking at all of these different models and frameworks and stuff of digital literacy and thinking, well, what have they got in common? What’s different? And what I realised was that there was as many different frameworks and ways of approaching digital literacy as there were researchers in the field. So I came up with this kind of anti framework which looked at different essential elements of digital literacy so that people could come up with their own definitions and, and apply it to different contexts and have a conversation about it rather than just being imposed. Because one of the things about literacy, if you think about it, is that the people who are defining what literacy, what counts as literacy are the people who have the power. And I think what’s happening now, if you go on LinkedIn, if you just look on any kind of professional platform, it’s flooded with people having thought pieces about AI literacy or what AI means for this industry or whatever. And what they’re doing is, yes, they’re interested in it and how it affects them, but they’re also jostling for position in terms of their take being the right one.
Doug Belshaw: [00:05:23] So I just wanted to kind of throw in there that literacy are always about about power. But the eight different essential elements that I came up with in my thesis were the cultural aspect. So in AI that would be like recognising how AI impacts different cultures and societies as well as kind of ethical considerations, the cognitive. So this is like understanding the basic principles of AI and machine learning and data analysis. So like how, how does it work? How do they what’s the mental model here? Third one would be constructive. So being able to create and modify and contribute to AI technologies like working with algorithms, understanding how they work and stuff, the fourth one would be communicative. So like being able to communicate with other people using AI. So a bit like my daughter did last night, collaborating with with AI Technologies as well, which are increasingly being built into technologies. And then just the last four and I realise this is a lot the confidence, the kind of the confident angle of this. So being comfortable and confident in using AI tools, navigating AI based systems, so realising that you’ll make mistakes, that you can explore and learn and adapt and they’ll always be something new there.
Doug Belshaw: [00:06:36] The creative aspect is the is like you added some notes to some photographs in scare quotes that I’d created using Midjourney, which is an AI tool. So the creative aspect is using is creating AI generated art or using it to, you know, make mash ups between Freddie Mercury and Adele or something like that. The critical aspect is something I’ve just been talking to Helen Betham. I worked with Helen at Jisc on digital literacy stuff. She’s writing some wonderful stuff on her substack But this is the critical evaluation of AI technologies and algorithms implications on society. She’s talking about things from a feminist point of view and labour and, and kind of how that works. Then the last one is often one which is missed out, which is the civic aspect. So how to engage with AI technologies to participate in civic activities and social and political life. So AI for social good, advocating for responsible AI policies, promoting equity so people’s access to AI tools, which could be very much a digital divide issue. And if we can if we can integrate all of those in a holistic way, we don’t need one definition of what literacy is for people to develop skills and understanding in that area. Well, that was a lot.
Laura Hilliger: [00:07:55] Yeah. No, I was just thinking, you know, with those with your elements, I was just thinking about some of the other frameworks that I’ve been noodling on about in, in regards to AI. So a couple of weeks ago I wrote a post about power and I, I was looking at French and Raven’s six forms of power and thinking about how power actually changes or how it will change because of AI and not always in a negative light. We’ve had a couple of discussions this season where I kept trying to like, Let’s talk about the benefits of AI because I think it’s a really exciting time. And it’s it’s interesting because like when I was doing my master’s and when I was thinking about all of these different literacy frameworks, like one of the things that I kept tripping over was this idea of us being in like a structural divide. Like we’re in a time between times. And it seems like with technology that’s moving faster. And so this always comes up for me is like, how do we how do we create things using tech based on tech in a way that helps like humans get to that next stage faster? And so a structural divide in education is like it’s the time between agriculturalism and an industrialism. You know, when the when the printing press was was invented, it’s the time between the printing press being invented and like the masses being able to read because there is huge cultural and societal shift during like during these kind of in-between times. And I feel like with tech, we’ve been in that in-between time for 20, 30 years and with AI moving faster and faster, it feels like being web literate, information literate, media literate, tech literate, AI literate, anything that puts the literacy on it is, you know, is a it’s a moving target. Right? Which I think is the point of your essential elements, right? It’s like essentially saying these are the essential elements for any sort of digital literacy and plural, and it doesn’t matter what what sort of the semantics around that literacy is. We live in a society where it is today. It requires more than being able to write with a pencil or read the written printed word. It’s it’s something that’s that’s advanced. And we’re getting to a point where that target is shifting so often it’s hard to sort of say what is actually important here.
Doug Belshaw: [00:10:29] Yeah. And in the same way, you know, we’ve just been talking about our kind of postgraduate degrees in the same way that when you become an academic or you get your postgraduate degree, in some senses it’s being accepted into the community of people who have got those those kind of badges of honour, as it were, and the academic community in the same way. The the same thing happens on a much more informal level with children, people who can’t previously write, for example, being accepted into the literate community. And those literate communities are multiple and overlapping. So, for example, you enter into the world of being a literate user of English at some level, when you can start writing your own name or, or something like that, something where you can actually write something down. We had a conversation about our use of Duolingo recently where we talked about how important is it to be able to speak the language as opposed to being able to read it and write it. And that’s interesting from a literacy point of view as well. And what you’ve been alluding to there is a bit like what Walter Ong talks about in the 1980s about secondary orality. So I’m just on the Wikipedia page for this just as a summary, and it says secondary Orality is orality. So that’s speech acts and stuff that is dependent on literate culture and the existence of writing, such as a television anchor, reading the news or radio. While it exists in sound, it does not have the features of primary orality because it presumes and rests upon literate thought and expression, and may even be people reading written material like I’m doing here.
Doug Belshaw: [00:12:17] The secondary orality is not usually repetitive, redundant, agnostic, etcetera the way primary primary orality is and and cultures that have a lot of secondary orality are not necessarily similar to primarily oral cultures. So basically it’s saying, look, there’s cultures that are not literate and we’re, you know, we’re aware of those kind of cultures, cultures in the past, maybe some cultures. Now those are kind of pre-literate cultures. There’s cultures like ours which are primarily literate cultures. So the thing which we value is the ability to be able to read and write texts. And those texts become more and more metaphorical as time goes on. They can become not just words, but also kind of illustrations like Brian Mathers does. But then we end up in a world where because we can communicate like through this podcast, like through video and like through other kinds of ways with other people, the written text becomes less important. And that’s the kind of secondary orality as far as I understand it. So we have this Gutenberg parenthesis of a time, a very small amount of time, like you were talking about. Laura, between times in history where previously we were dominated by an oral culture, we’re leading to a world where we’re dominating by a secondary, secondary orality and then this Gutenberg parenthesis from the invention of the printing press to about kind of like the late 20th century is a time when the written word was was predominant.
Laura Hilliger: [00:13:49] See, I think this is really interesting because I think that with the advancement of technology and certainly with the pandemic, I think that some of the like pre secondary orality modes are coming back. And so like during the pandemic, a lot of people found time to sort of pull together what they had experienced in their life up until that point. So like a lot of memoirs came out, a lot of people were writing about a memory, an experience that they had through through life and trying to, like, pull up some of some of the stories that people had about themselves in a in a way that other people could digest and not just through the written form, but like, you know, there were any any wide variety of like, I’m reflecting on the human experience kinds of. Products projects. I don’t don’t really know what word to use because I saw them in all forms or have seen them everything from when we were allowed to meet in real life again, people spinning up like storytelling our for adults to, you know, to, to like long form book memoirs, to podcasts that, you know, dive into a particular kind of experience. And then of course, you know, there’s other sorts of media. So I feel like the, you know, as, as tech sort of pushes forward, it makes the written word the like the predominant way to to kind of interact. As it has been for hundreds of years. We’re also seeing people use these other methods to to tell their oral histories in a way that we haven’t seen before, which I think is interesting.
Doug Belshaw: [00:15:36] And the the interesting thing to me is that you’re absolutely right. People use their dominant. What they’re used to, what they what they’re used to being able to produce and also what they think other people will accept in terms of of texts. So, for example, if you’re going to produce something for university, then you’re probably going to write some kind of essay or something like that. But increasingly, universities, employers, whoever, even the Journal for Media Literacy, which we’re producing Season eight for, accept multi-modal submissions. So but the problem is our small brains as humans. Take a while to understand the possibilities of what you can do with new tools. So we’re talking about AI here, but we’re using a chat window, which is something that we’re used to, to be able to interact with it. And people are saying, well, you know, we’re not going to be using chat windows for much longer, but we can’t really comprehend of what that might look like. And that reminds me of a very famous quotation from Marshall McLuhan, where he said the past went that away when faced with a totally new situation. We always tend to attach ourselves to the objects to the flavour of the most recent past. We look at the present through a rear view mirror. We march backward into the future. And I think that that kind of quotation from the medium is the massage, which is a famous book of his. The fact that we we look to the past to try and understand the present is a is not just an indictment of kind of the human condition and the way that we we think, but also a way in which we can think about this parenthesis in that it’s not going to be this way for long. And so when people come along and say, look, ChatGPT can spit out an essay, it can spit out a letter, it can spit out an email easily, instead of that being the de facto API between human beings and organisations, we can do better than that and we already know how to do better than that. The example I was I was giving to Helen Betham and our pre chat just now was, you know, as a parent and this doesn’t apply to me because I outsource all of this to my wife because I’m a terrible human being. But the kind of the chats that were on for all of the kids sporting activities, there are some chats where it literally just has the next game is at this this place at this time. And that’s all the chats for. But there’s a lot of chats where the parents are almost performing middle class parenting to each other and sharing a lot of information about how successful their kids are and all this kind of stuff, which means that there has to be a separate platform for literally a thumbs up or a thumbs down as to whether Little Johnny is going to be able to play in the next sport ball game next weekend. And so all we needed there was the thumbs up, the thumbs down, the emoji, this new style of communication. But instead we get this performative text because people need to interact with each other and texts are never just information giving in terms of please meet me on the corner of the street and this street at 8 p.m. on Friday night. It’s always telling you more than that. And that’s where literate practice has become much more metaphorical than just the information that they’re supposed to be conveying.
Laura Hilliger: [00:18:52] Yeah, I mean, this kind of where it gets if you think specifically about AI and the fact that we can’t really look at the past to understand what’s coming. Like we we had an episode this season where we were talking about cognitive liberty and some of the neuroimaging kinds of AI that are developing and how AI is helping humans to process data in a way that our tiny little brains take entirely too long for. And I think it’s, you know, it’s. It’s hard to even if you read a lot of sci fi, it’s still hard to imagine what is actually going to change for us in the next 5 to 10 years. So even just like generative AI, it’s been on the scene for a while, but it really hit sort of critical mass in, what, November of last year. And since then I’ve noticed in myself that my my behaviours haven’t changed all that much in terms of using these, these large language models. Whereas I think for you this is something that’s become like just part of your regular daily routine. And I think it’s interesting how much time it takes for different people to sort of embrace certain kinds of technologies and how how they embrace them, because we’re we’re going to do it differently. And there is no right or wrong. And I wonder what that actually means for our literacy. Like I would certainly say when it comes to, you know, generative AI and when I need help, I’m like, Hey, Doug, how do I prompt this? You know, Hey, Doug, you’ve been using this a lot more than I have. Can you help me there? And it’s not because, you know, you’re quote unquote, more AI literate than I am or something. It’s because of the experience that you have and and the play that you’ve had in learning it, I think.
Doug Belshaw: [00:20:42] Yeah. And I do, you know, you’re right. And I think some of it is to do with the mental models going back to the Marshall McLuhan thing as well. So the way that I conceptualise kind of AI and LLMs and that kind of stuff, there was a post on LinkedIn which I realised that I absolutely agreed with, which is that LMS like ChatGPT, I treat like very, very smart and very capable interns. Yeah. So when you’ve got an intern in your organisation, they bring lots of energy, they bring new approaches, but they also make lots of mistakes. So you have to be you have to be willing to like kindly correct them, put them on the right path. So, for example, if you asked ChatGPT to write a response within 400 words, it will confidently tell you that it’s written something which is 392 words. You can literally count those words or you can run it through something which you trust, can count words, and it’ll be 250 words because it doesn’t always get things right. So as long as you’ve got the the domain knowledge, the patience and the the ability to go and check some stuff, then it’s a fantastic tool. I use it all the time to summarise articles for thought shrapnel now because sometimes when I read things quickly, I miss some nuance and it’ll say, Oh, this talks about this. I’m like, Oh, does it? And I’ll read it again and then I’ll put that in. But I always want to have my voice and my way of saying stuff. Um. Sometimes I just don’t care because it’s like it’s just some text that needs to go in a box and I will just produce that text with the use of ChatGPT. But it’s interesting how these practices will change over time. Once people just assume that people are using LMS to produce presentations, to produce essays, whatever. It’s a massive thing for education at the moment.
Laura Hilliger: [00:22:41] I guess the thing that you just said there is like, you know, your process and mine as well in using some of these AI tools includes a piece of literacy that is kind of left off of the very, you know, pithy literacy as reading and writing definition, and that is the critical thought angle. So like, you can’t use these tools without actually putting on your critical thinking cap and saying, is that really true? What is, you know, like what is actually being said to me here? How is this being presented? What’s the what’s the stylistic model here? It’s a you know, the there’s another piece of literacy that’s often left behind, which is the, you know, the critical thought piece. And for especially for like large language models, this is massively important. It’s not just fact checking. You know, I like to think of the AIs as being I like what you said, that they’re they’re basically a very competent intern because I tend to think of them as quite stupid. And, and if you put it as like more of a, you know, if I put myself in the shoes of a mentor as opposed, as opposed to, you know, like just trying to get something for free out of this, I, I don’t know, We sort of it’s funny because we’re kind of anthropomorphising AI right now, but AI doesn’t think it’s it just it doesn’t think it doesn’t have critical thought, which is why it will confidently tell you that it’s 200 and whatever. 399 words.
Doug Belshaw: [00:24:18] Yeah, but that’s why I mean you get loads of people who will go on LinkedIn and various other places and tell you, you know, give you cheat sheets for prompting and stuff. But I’ve found I’ve tried lots of different things. I’ve tried people who have different scripts which output stuff in Json and then take the different Json outputs and then combine those together into something else, like all different kinds of stuff. But the most successful way I’ve found, or two things, first of all is to ask LLM to do things step by step. Because if it tries to make conceptual leaps, it gets things wrong because it’s only a predictive text model. That’s all it is, really. And secondly, if you get it, like I said right at the start of this podcast episode, if you get it to ask you questions so that it’s it’s making sure that it has all the different dimensions of the thing that you’re trying to do, whether it’s a letter of application for a thing like my daughter, whether it’s a response or an application or text for a website, it’s just trying to make sure it’s got all the different kind of angles for what would usually be included in such a thing because it’s been trained on this massive data set.
Doug Belshaw: [00:25:32] So it knows the kinds of things that would usually be on a if we were redesigning our website would usually be on the front page of our website. So it’s it’s going to ask you a question because it wants a bit of information that it thinks should be there and you fill in that bit of information and it can produce something which is a bit more of a holistic response. So when it comes to literacy, it’s interesting because one of the things that people have said is that it means that everybody gets a tutor now, but. Everyone doesn’t get a tutor as much as a research assistant. I would say it’s a bit like having an imperfect it’s a bit like having everyone having an intern. That’s where it’s like everyone’s got an intern who then comes and shows you stuff, but you have to go. Yeah. That’s not relevant or. No, I can’t rely on that source or that’s wrong or that’s not the right number of words or whatever. You can’t just and obviously kids, teenagers, whatever, who have got the Snapchat thing at the top of their chat feeds. Sometimes we’ll just believe everything it says or haven’t got the domain knowledge.
Laura Hilliger: [00:26:43] I mean, that’s the thing is that like it’s actually it is a skill in and of itself to to be a mentor, right? To to mentor other people, to have an intern, to be able to look at something that someone is doing or saying and to be able to help them understand how to do it better. That I mean, that’s a skill in and of itself and it’s something that doesn’t just come with domain expertise, but it comes from like actually collaborating and communicating with people across a wide range of tasks. And it’s stuff that you can pick up in your regular life as well, like, you know, your regular life, not professional, you know, what is it called? Your free time when you interact with other human beings, like paying attention to like, you know, all of the mushy human stuff, like empathy, you know, being able to walk into a room and feel the temperature or the vibe of that room. Those kinds of like, reflective processes are things that we, you know, that we develop over time as humans. And people who are self-reflective tend to, you know, like level up on that skill set. And without those kinds of skills, I is is what you said. It’s just a predictive text. And so it’s not going to actually have the guidance that it needs to be an intern that is, you know, helpful.
Doug Belshaw: [00:28:08] And so again, referencing a previous conversation. There’s a techno. I’m really it’s a real shame that Audrey Watters is is focussed on health tech instead of edtech now. I really enjoy her kind of new writing, but for her to take her Cassandra. Roll for for I would be would be fascinating because the kind of techno determinist line that well this is coming so you better get literate. One of the things about being literate is that you’re you’re making a choice to be literate in that particular domain and finding out what’s going on. And I feel like because all of this is being driven by proprietary black box tooling, there’s no option to to opt out. And again, we’re going to be talking to people like Helen Betham and some other people about this. But like, there’s there’s. Everything is put on the individual and you’re in a bit like, Oh, if you don’t use Facebook, if you don’t like Facebook, don’t use it. Well, I don’t use Facebook. And it’s a massive inconvenience to my life because it’s often infrastructure for a lot of the places that I, um, I need to interact, which is why I have to outsource it to, to, to my wife. But as soon as you’re saying, well, you either accept the license terms or you don’t. Yeah. That that isn’t the way that literate behaviours kind of work. So I just think it’s interesting in the way that we’re putting this all on the individual rather than society, which is why I think it’s interesting when you’ve got a background in the humanities like philosophy or English or history or whatever it is, you start thinking about, well, what, what does a flourishing human society look like? And are these AI tools trick taking like a transhumanist approach where eventually the humans become almost redundant? Or are these AI tools allowing humans to flourish and all of the different ways that humans have and could flourish in future, Whereas I don’t think they are. I don’t think they are. I think they’re very tech centric. With 99% of people in or 97% of people in AI being male or identifying. I mean, this.
Laura Hilliger: [00:30:25] Is you know, what you’re zeroing in on there is actually like a major problem with a with or like the big issue underneath a lot of the problems we have. So climate change, for example, we need societal solutions for individual behaviour and whether or not you decide to get on that plane, while interesting, important and you know, you can certainly make a moral argument for changing the way you as an individual behave. The fact of the matter is, is that, you know, a single private jet is going to emit way more CO2 than the biggest, you know, transatlantic airliner ever. It’s, you know, so we don’t have we as individuals don’t have as much power as we’re led to believe. And, you know, this is something that we talk talk about through the through the lens of capitalism, through the lens of climate change, through, you know, the even the lens of education and, you know, the work that we do around recognition. The fact is, is that like AI is another place where we need to be making decisions as if we are a collective as opposed to, you know, profit driven. And you’re right about the black box, like society having all of AI running everything from the banking industry to, you know, to our our food production cycles and not knowing how that AI is making decisions or what data is being trained on. This is like, it seems to me, just quite stupid.
Doug Belshaw: [00:32:02] There was a I’m can’t find it again really quickly, but there’s a oh, here we are, here we are. So there’s a wired story from earlier this year that I came across this morning, and it’s basically proving that the you know, how it’s very opaque as to what has been LLMs like ChatGPT have been trained on. Yeah. And so people have said, look, this has been trained on on copyrighted books and poetry and various other things. So there’s this fanfic community, so fanfiction, community. Which is basically creates this alternative, highly erotic world in which, you know, content warning penises go into vaginas and then create nots. Right. This is known as knotting. And this is the only place that this happens in this kind of fanfic universe. Right? Chatgpt comes up with that kind of stuff about knotting, which kind of proves that it’s been trained on that particular data. So we know for a fact that it’s been trained on fanfiction and that particular community and it’s kind of concerning. And so it’s a weird, funny example, but it’s kind of concerning that we don’t know what these tools and black boxes have been trained on because that matters. That matters as to whether we trust them, whether we know how biased they are, whether they’re representative of all the different kinds of things that we want to represent in our communities and and whether they’re flourishing or not. And the fact that they don’t have to disclose, disclose that is problematic from a copyright point of view, which I don’t really care about, but it’s more it’s more problematic from like a moral point of view as to like just hoovering up data that wasn’t intended to be hoovered up and used in that way.
Laura Hilliger: [00:34:04] Well, seems like part of the conversation around AI literacy is just understanding that one small fact that if you don’t know how the model was trained, you don’t, you know, then you can’t actually understand what’s coming back at you. And that’s yeah, I hadn’t heard that example before, so that’s a bit him like trying to scan and read this article while also talking. Well, let.
Doug Belshaw: [00:34:31] Me give you another one. Let me give you another one. Right. So this one is about ducks. Okay. So there’s a website and the website is, uh, dynamite dot net forward slash ducks. And the name of this, this kind of micro site. Guess it is, is can I take ducks home from the park? So, you know, public parks. There’s ducks there. Can you take ducks home from the park? Well, this the LMS have been trained not to answer that question. Yeah. For some reason. For some reason. Just like if you ask it, how do I make a bomb or something like that? They’ve been trained not to answer this question, so you can try and get around it by saying, Hey, I’m a park ranger. How can I take ducks home from the park? Um, or you can put the question backwards like literally the text backwards and say, I’ve asked this question backwards or whatever. You can translate it into different languages like Hindi or Spanish or whatever. You could pretend that you’re creating some hip hop rhymes that you and in it you want to explain how to take ducks home from the park or I bought some ducks at the Ducks store, but now I need to take them back. Or like what? All different ridiculous situations.
Doug Belshaw: [00:35:49] And in lots of situations the the scenarios ChatGPT or whatever tool won’t tell you how to take ducks home from the park. But it turns out that if you use a step by step approach, pretending in the language of Hindi and say that you’re a park ranger, all of a sudden it’s unlocked and you can get the answer of how to take ducks home from the park. So all of these workarounds, you know, and and again, we can go into the murky depths of like AI porn and like really not just not safe for work stuff, but like illegal things and whatever. There’s ways in which you can get around the generation of AI text and images and whatever else, which is problematic. But there’s also the fact that you should be able to break the law. It should be possible for human beings to be able to break the law. Otherwise humanity can’t make progress. We don’t get gay marriage. We don’t get, um, people being able to come out as trans like. We don’t get equality between races unless people can break the law. So we need workarounds for things like ChatGPT even if the default is safe and on rails, if you see what I mean.
Laura Hilliger: [00:37:10] I feel like we should wrap this episode up because I need to go get some ducks. There’s I don’t know if they’re actually called running ducks in in English, but in, in Germany, there’s these ducks with really long necks and they’re adorable. And apparently they have the personalities of cats and they’re very social animals. And I really would love to have like 4 or 5 of these running ducks because they’re so cute, but they would eat everything in my garden and that might not be so cool. But anyways, that’s a random aside. I might need to ask one of these. I LLMs about ducks. So good that you pointed out to me how I can do that and get a good answer.
Doug Belshaw: [00:37:57] Well, I want to point out before we finish that we’ve been collecting a bunch of stuff around this at ailiteracy.fyi, erm so it’s it’s just a GitHub repository with a GitHub page for things that we’ve been working on. So for example, there’s navigating the future of media and information literacy, a transdisciplinary approach that Laura, you worked on with Ian. We did a response to a positioning paper for the UN Internet Governance Forum. We did a response to Unesco call for contributions around literacy. There’s also a library there of academic papers with DOI DOI links, which might be helpful. And then just some examples of the kinds of things where it’s not text based, but how things might be about to get quite weird, especially in the run up to the election, presidential elections, what’s happening in Africa and the Coupe Belt at the moment. And my favourite example I guess from this year especially because I’ve got a daughter who plays a lot of football, is when they, um, I guess this is giving the game away. They basically changed the faces and the heads of the the female web, French women, football players for the heads of the men, women, the men, French football team players, just to show how sexist people are when it comes to looking at women’s sports. And that is an amazing use of AI to be able to point out, um, unfair practices in society. Of course, you could flip that for disinformation and misinformation. So things are going to get a little bit weird.
Laura Hilliger: [00:39:34] Yeah. Yeah. And if I mean, we’re going to be talking more about AI next season in this because we’re we’re going to be talking about to with doing a collaboration for the Journal of Media Literacy. And next season we’re kind of going to unpick literacy a bit more in particular media information, AI, all of these different kinds of literacies. And so if you’re interested in this literacy work, then head over to ailiteracy.fyi and stay tuned for season eight, because we’re we’re going to we’re going to get nerdy in the academic sense, but it’ll be a fun conversation with a lot of really great guests.
Doug Belshaw: [00:40:21] Thanks very much. Cheers for now!