In this episode, we talk about ‘Cognitive Liberty’ in relation to AI, based on Nita Farahany’s interview on the podcast Your Undivided Attention.
- Support We Are Open Projects!
- Inspiration for this episode from Nita Farahany on Your Undivided Attention, The Battle for Your Brain.
- Deconstructing Power in the Age of AI
Laura Hilliger: [00:00:22] Hello and welcome to the Tao of WAO, a podcast about the intersection of technology, society and internet culture with a dash of philosophy and art for good measure. I am Laura Hilliger.
Doug Belshaw: [00:00:34] And I’m Doug Belshaw. This podcast season is currently partially unfunded, and you can support this podcast and other We Are Open projects and products at opencollective.com/weareopen.
Laura Hilliger: [00:00:47] So today we were thinking about talking about cognitive liberty and AI. And this subject came up because we’re kind of focusing season seven on AI anyway. But Doug, how did you stumble upon the podcast that spurred the idea for this episode?
Doug Belshaw: [00:01:08] Well, two things, really. So this is a podcast from the Centre for Humane Technology called Your Undivided Attention. I think we’ve been following their work ever since one of their co-founders left Google and started spreading some warnings about dystopia. And also Aza Raskin used to work at Mozilla at about the time we were there, maybe slightly before. Aza Raskin interviewed Nita Farahany, who is the author of a book called The Battle for Your Brain. And this particular podcast episode, which I listened to while I was on holiday and then sent to you immediately, was talking about brain activity and this idea of cognitive liberty. And we wanted to talk about it not just as a report on another podcast episode, but to kind of think about it in our context, and maybe take it down a notch in terms of how worried we should be about this. So, yeah.
Laura Hilliger: [00:02:10] So when you suggested that I listen to this podcast, I listened to it and I felt like I learned a lot. I thought it was a great episode of Your Undivided Attention. And, you know, we do follow the work of the Centre for Humane Technology, but I recalled that a few months ago, in the spring sometime, I shared a video from the Centre for Humane Technology about AI, where AI is going, and the risks of it for society. And you were quite critical about that video, and we’ve had non-podcast-recorded conversations about the AI hype and the negativity surrounding what’s going on in AI. And I think it would be interesting to... I mean, we have to talk about the risks of AI, especially if we’re going to focus on cognitive liberty, then defining that and talking about the risks. But I’m also hoping that in this episode we talk about some of the benefits, because we’ve been playing with AI quite a bit. And I feel like every day there’s a new tool to play around with. I wouldn’t say that it’s fundamentally shifted the way that I work or anything, but I’m certainly finding benefits to some of the stuff coming out.
Doug Belshaw: [00:03:35] Well, let’s dig into the cognitive liberty stuff and then we’ll talk about some of the ways that we’ve used AI. For example, this morning we were talking about how it was really useful when we were at Badge Summit to summarise some stuff and brainstorm ideas and that kind of thing. I’ve been using it to quickly summarise things on my blog; I shared something about how we decided not to buy a house based on it. There’s lots of ways in which we could go here, and it’ll be interesting to get your main takeaways from that podcast episode. But one of the things that I thought was really interesting was about a very real-world situation that you can imagine happening. So lots of people have things like Apple AirPods, which they put in their ears and which have sensors. For example, if you take one out, it knows that you’ve taken one out and it’ll pause things or switch to a different mode or whatever. You can also imagine AirPods taking some biometric data, for example your heart rate.
Doug Belshaw: [00:04:37] But what Nita Farahany is talking about here is the ability for the technology involved to be so small that it can basically read your brainwaves and therefore infer your mental state, potentially even understand emotions and what you’re thinking about. And as the technology gets better, if it captures this data, then further down the line, maybe months or years after its capture, it’ll be able to data mine it. Now, as you’ve already pointed out, there’ll be benefits to this. If there weren’t benefits, then companies like Apple wouldn’t be able to sell it as a feature. But there’ll also be massive downsides, because you won’t necessarily know what they’re doing with that data. Which is why Nita Farahany is talking about the need to have something called cognitive liberty in things like the Charter for Human Rights, so that you are the owner and protector of your thoughts, and they’re protected rather than just leaking out into the world for anyone to see and use.
Laura Hilliger: [00:05:44] Yeah. So she says in the episode that what we need to do is recognise cognitive liberty as a human right and codify it into law. And it’s worth listening to the episode, because they talk about how we haven’t actually needed to codify things into law until after we realised they were a problem. The example they always give is the right to be forgotten: we didn’t need that as a law until the Internet could remember everything. At this point I’m sort of parroting the actual podcast, but I think it’s interesting that these advancements are coming. And what I find really fascinating is that it’s not just interesting from the perspective of what they are going to do with the data, and what it means to have something in your ear that can read your thoughts, but that it can also visualise them. You know, AI is getting to the point where it can actually visualise all kinds of signals. I find that fascinating, and I am trying to remain optimistic and positive about how it could be used for good, and whether or not we as a society can actually prove ourselves wrong and fix some of the problems that we haven’t yet fixed with technology.
Doug Belshaw: [00:07:06] So some people might be listening to this thinking, well, hang on a minute, what possible benefits could there be from something like Apple AirPods tracking my brainwaves? What could Apple possibly suggest that would make that a good thing? Well, again from the podcast episode: tracking your stress in a different way to what’s done on your smartwatch; your cognitive fitness, including things like the early stages of depression; or, something I suffer from, migraines; dementia; Parkinson’s disease; all those kinds of things. Now, one way of looking at this is that we’re on a slippery slope. In 2005, I think it was, Gmail first started scanning your inbox and serving ads against it, and there was a massive outcry at the time. People wouldn’t even blink now. So one way of telling this story is the massive invasion of big tech into our lives. But the other side of that is, I would say that our generation is a lot fitter than our parents’ generation on average. Yes, there’s lots of obesity, but the majority of middle-class people, I would say, seem to be more fit than my parents’ generation. And a lot of that is to do with being able to track your fitness, knowing what’s good and what’s bad and all that kind of stuff. So there are pros and cons, for sure.
Laura Hilliger: [00:08:39] Well, that’s interesting, because if you think about when our parents’ generation were roughly our age, it was the 80s and 90s, and there was a lot of regulation in the 80s and 90s that was specifically targeted at health. It didn’t have to do with technology per se; it had to do with research, and with things like smoking. In the 80s everybody was smoking all over the place, inside, in office meeting rooms. Nowadays you can’t smoke a cigarette within 25 metres of a public building or whatever in most of the Western world. So I think it’s interesting, when we think about technology, to wonder how much technology has influenced the state of society versus just the general evolution of what we know and understand, especially around health. I don’t know.
Doug Belshaw: [00:09:39] Yeah, I guess so. But again, the reason that we know things are good or bad is to do with improved technology and research and stuff. In terms of the AI side of this, there’s some really interesting stuff you’ve written this week about deconstructing power in the age of AI and machine learning. And in that podcast episode we keep coming back to, there’s a lot of discussion about what constitutes coercion and what is persuasion. So if something’s got access to your brain state and can generate an infinite number of things on the fly using AI, then it can know which buttons to press and in which order to make you do stuff. That would be the scary version of it. But I don’t think we’re just programmable machines in that way. The example that I gave you recently, Laura, was when I was driving back from Devon. For those who aren’t in the UK, that’s right in the south west of England, and I was driving all the way to the north east, so almost as far as you can go in England. It’s about a 7.5-hour drive, which for some people in countries that are a lot larger is nothing, but for us it’s a long way.
Doug Belshaw: [00:10:57] So I switched from Google Maps to Waze, W-A-Z-E. Waze is owned by Google anyway, and lots of the features from Waze have made their way into Google Maps. But one of the things which is different about Waze is that if you’re stationary, it will suggest that you stop off at a location which is kind of on the way, and give you a discount to stop off at that place, because it knows where you started, where you’re going, that you’re stationary, and that taking this slight detour will only add five minutes onto your journey. Now, that’s a benefit to the advertiser. It’s a benefit to the person taking a cut of the advertising, aka Google. And it’s a benefit to me, potentially. But it’s a bit weird when you see it for the first time, because you realise that this is straight up trying to persuade you to do something which you were not intending to do. As opposed to adverts on social networks, which you kind of scroll past, this is right there on the screen in front of you, instead of the map that you asked for. So imagine that, but a little bit more insidious.
Google Autonomous AI: [00:12:04] I don’t know, but I found these results on search.
Doug Belshaw: [00:12:06] Don’t mention Google when you’ve got a Google display next to you.
Laura Hilliger: [00:12:10] I hope that your microphone actually picked that up. We are also being recorded by Google right now.
Doug Belshaw: [00:12:15] But again, that’s something which I wouldn’t have done ten years ago. I wouldn’t have just had a device next to me listening to everything I say. But because nothing ostensibly bad has happened in my life, I forget to turn the microphone off and therefore, you know, it responds to stuff. And I literally only use it for my lights in my office, asking it about the weather, and my back doorbell, things which I could do entirely manually. But you sort of get used to it just being there and handy.
Laura Hilliger: [00:12:48] Yeah, you kind of get used to certain kinds of invasion of your privacy. And certainly over the last 15 or 20 years working in tech, I know a lot of people who 15 years ago would definitely not have had a recording device in their house, no smart technology. I know engineers who today are driven by that fear of smart technology and will not put certain, or any, smart technologies in their houses. And I think it’s interesting, because ten or 15 years ago, people working in tech knew what that could be used for. And today we still know it, but we also know that even if you don’t have smart technology in your house, bad actors can mess with you if they want to. So we kind of look at the way that risk has changed a little bit, and are maybe a little... yeah, I wonder if saying that on a public podcast is a good idea. Like, oh, you know, Doug has Google in his office, what can you do with it, kind of thing?
Doug Belshaw: [00:13:52] Well, it’s not just my office, it’s every single room in my house, which is again not something I thought I’d do. But it’s more useful in a family situation, I guess. If I was by myself, I don’t think I’d have it, but it’s just useful in a family setting. It just makes family life easier in some way, in a way that I wouldn’t be able to put into useful words in a sentence.
Laura Hilliger: [00:14:22] Yeah. Let’s go back to thinking about how power works and how AI is changing power. Because when I think about what is happening with AI in society and cognitive liberty, I think that, to take it a step further than what was on the podcast we keep referencing, people’s brains are being messed with even without neurological tech, because of the advancements in AI. And I’ll give a specific example. Nowadays your elderly parents could very well get a phone call from you that isn’t you: AI voice modulation. Or, I recently read about a guy in Germany who, along with his family, lost €50,000 because they were duped into investing in something that wasn’t real, didn’t exist. But they did their due diligence. The article was really about how they genuinely tried to understand what was going on, and they believed at the end of the day that not only was the person answering them potentially an AI, not a real person, and certainly not who they said they were, but the scammers had impersonated a bank that actually exists without representing it: new marketing materials, logos, everything looked completely legit. And this middle-class family lost all their money. So I think that the power of bad actors has certainly changed because of what AI allows us to do nowadays.
Doug Belshaw: [00:16:09] That’s interesting, because that’s the flip side of cognitive liberty. I can’t remember whether it was in conversation with you or on my blog, but I was saying that when my daughter was too young to be able to speak, she was extremely frustrated that she couldn’t speak, in a way that my son wasn’t. She was so frustrated that we considered teaching her Makaton, which is simplified sign language, all this kind of stuff. And then as kids get a bit older, just like we did ourselves, you haven’t got the fine motor skills to be able to draw what’s in your head. So it’s extremely frustrating being a child, or you just don’t take your artistic talent further, because what’s in your head that you want to represent to the world, you can’t do. What AI does is give everyone the ability to potentially show what’s in their head, or make stuff which is beyond just a crap drawing or a child’s version of a house or whatever. Which gives everybody, all of a sudden, the flip side of this cognitive liberty: unleashing your brain to be able to do stuff, because you’re no longer shackled to poor motor skills or whatever.
Laura Hilliger: [00:17:29] I think it’s really interesting, because I’m also following the AI-and-art, AI-and-creativity stuff. You know, the Writers Guild of America, basically the writers for Hollywood, have been on strike for months at this point. And part of what they’re looking for is very old-school capitalist motivations, i.e. we want to be paid for the work that we do. Which is mind-blowing to me, that a multi-billion-dollar entertainment industry is going to tell writers no to small cost-of-living increases, but whatever. The other thing is that they’re fighting against AI being allowed to take over the scriptwriting process. And I think this is really interesting because it’s happening in the art world too. There are illustrators, there are painters, people who are emphatically saying AI art is not really art, AI writing is not really writing. And I find it really interesting because, for me, art is about concept, and poor execution means that you can potentially not get your concept across, which is sort of the point. It’s the same for creative writing: a poorly written novel is not going to embrace the reader, or put the reader into a situation where they’re actually feeling the visceral story of whatever the novel is about. And I don’t think that AI is smart enough at this point to create worlds for people in the same way that a human can. So I’m interested in this push and pull between what creativity really means, and how creative the AI is versus the human, because a human still has to control it, right?
Doug Belshaw: [00:19:24] Well, all art in some way is derivative, because no one is brought up in a state of nature, a complete tabula rasa, and then imagines things and brings them into being from whole cloth. I saw a Wired article, which I think I put on Thought Shrapnel, about someone who just got a scrunched-up piece of paper, fed that into this new AI architecture model, and asked it to design a building in the style of Norman Foster and, thingy, Hadid, what is she called? And not only did it design the building, but it could do all of how the utilities would come in, how things would join together, where the sunlight would come from, really quickly. Now, why would you not want that? Um, I think the interesting thing is, if you’re in an industry which is threatened by this, you want to reject it entirely. Like writers, for example. But that is a losing battle; it’s been trained on, potentially, the sum of human knowledge, right? So why would you not want this kind of stuff? A job, that’s the thing that you want, right? But it’s a bit like a podcast I was listening to where they were poking fun at maths teachers, I think in the 1980s, who were holding up placards about not allowing schools to have calculators, and 30 or 40 years later saying how ridiculous that was.
Doug Belshaw: [00:20:57] You know, especially these days when kids have got smartphones in their pockets and things. Now, writers and architects can see the impact of AI in their industry. But in our work, we can’t see it. We don’t see, for example, the potential clients we could have had for digital strategy work who have just decided to feed everything into an AI and see what comes out the other end. I think what will happen, come 2024, 2025, is that people will realise: hang on a minute, garbage in, garbage out. You can’t just write ‘give me a digital strategy for my organisation’. You need to know the context, you need to do the user research, you need to do whatever. I’m seeing people, instead of doing the user research, asking AI personas and just thinking that’s enough, which is interesting. So I think all of this is going to shake out. This conversation has taken us a little bit away from cognitive liberty, but I think, at the end of the day, everything is to do with power and capitalism and the kind of economic structure we live within. If everyone had enough money to live on, etcetera, we wouldn’t be having the same kind of conversation.
Laura Hilliger: [00:22:14] Yep, and that’s exactly right. We live in a world where people are having to fight for the bare minimum just to survive, just to have a roof over their heads, the bottom level of the hierarchy of needs, getting their physical needs fulfilled, despite the fact that for hundreds of years our technology has supposedly been getting better and better to give people more freedom and more space. So why is it that we continually come back to the same problems that we had before: the problems of power, the problems of capitalism, the problems of abstractionist tech, all of these things? You know, it’s funny, before we started recording I was like, well, let’s talk about the benefits of AI. And now I’m like, oh, wait. If AI is being used in a society that is organised around corporatism and taking the little people for a ride, the masses, us common normies out here, then we’re being taken for a ride. So how can this be beneficial to us when the inequality in society continues to be so prevalent?
Doug Belshaw: [00:23:41] And I liked the blog post that you wrote earlier this week, which you referenced before, about AI and power structures, because you used French and Raven’s six forms of power, which I hadn’t come across before: legitimate power, reward power, coercive power, referent power, expert power and informational power. And then you applied that to AI’s ability to manipulate and transform that power. I wondered whether you wanted to go through that, because I think it speaks to our cognitive liberty and ability to act in the world.
Laura Hilliger: [00:24:17] So that’s where it all came from. You shared that podcast with me a while ago; we’re recording this a while after we actually listened to it. I don’t actually remember how I got into the rabbit hole, but into the rabbit hole I did go, about how behaviour and influence work in society, who has power, how that shifts and changes over time, these kinds of things. And I hadn’t seen that anybody had talked specifically about how AI is changing the way that power works. I was looking for a way to understand power, something that I’ve studied and that I’ve experienced, and I came across this taxonomy from 1959, the six forms of power. I hadn’t heard of it before either. Then I found the book on the Internet, read a bit too much of it, and thought, huh, okay, let’s see if I can distil this. So this post is really just my brain regurgitating some of these things that I was reading and trying to make sense of it all. And when you apply these different kinds of power to AI, you can think about how things are going to change, and we’re already seeing it happen. So, the example I give about legitimate power: the way it was defined in 1959, it’s something that’s derived from a person’s role or position in an organisation or society, and it rests on the idea that some individuals think they have the authority to make decisions and expect compliance due to their status. And I love the fact that the word ‘think’ is in there, because at the end of the day we’re all just a bunch of atoms flying around chaotically in the universe.
And the idea that power is a real thing... it’s not; we made it up as humans. But thinking about the way that changes with AI, we’re actually seeing nowadays that legitimate power is starting to become questionable. So I gave the example of the guy who lost 50 grand because the bank was fake. The pretend bank had legitimate power for him: he thought it was a real bank, he thought the quote-unquote banking account manager emailing him was a real person, and that they really represented that bank.
Doug Belshaw: [00:26:58] It’s so easy to change people’s voices now. I saw a YouTube video, I think my son sent it to me, where someone was rapping as Kendrick Lamar, but just using their normal voice and it was getting changed on the fly. And at one of the organisations I used to work at, there was a phishing attempt where an email which looked like it came from the CEO went to several people in the organisation, asking them: could you just quickly buy some Amazon vouchers? I’m in a different country. It seemed quite legit, and then the CEO got in contact via a messaging tool to say, actually, this isn’t me. It would be really easy now, because a CEO’s voice is out there a lot, to create a very convincing audio file. So all of this stuff we’ve talked about before, like the example you gave of people being fleeced out of money, has always been around. But the level of fidelity that AI allows us to bring to it is great for things like art and creativity, and everything can also be used by bad actors.
Laura Hilliger: [00:28:06] Mhm. Yeah. And that’s come up in a lot of conversations I’ve had about AI; the whole bad actors thing is a really interesting one. We experienced people who were afraid of open source back in the day, ten, 15, 20 years ago. We had tons of people say, well, what if people steal my ideas if I make them open? A super common fear about open source, and something that the open source community has sought to rectify and deal with over years and years. We’ve also put different kinds of regulations in place to help manage that fear. And that comes back to something we’ve talked about on our blog before, the two loops model: we are developing alternative systems for society in various ways right now, and today that’s happening with AI. Instead of focusing on all of the bad things that can happen, and the fear, I really want to explore what this could do if we are smart enough as a society to put things in place, like privacy laws, which we do have in Europe (sorry to those of you in the United States), or IP laws that allow for more open remix and sharing. We’ve managed to modify our regulatory statutes to encompass the kinds of technology we have today. So if we’re able to do that for AI, what are the benefits to us as a species? That’s something I’m really interested in thinking about, mainly because I feel like for the last couple of years we’ve definitely had enough fear and pain and anguish, and the world feels scarier than it used to. So I’m interested in finding ways of keeping hold of that tiny thread of positivity that I used to have.
Doug Belshaw: [00:30:12] Well, it’s a sad way of looking at the world, but under neoliberal capitalism we exist in an arms race against other people, which is a sad state of affairs. Can I get more qualifications than you? Can I get more high-status previous employers than you? Can I use an AI large language model to out-publish you? All of these things are just ways in which we compete with one another. And that’s always going to be the case, I think, until we have a different way of organising our economy. I’ve seen plenty of people say AI is not going to take your job; a human plus AI who knows how to use it properly is going to take your job. I definitely take that on board, and I’ve been using AI tools a lot more this year. But there’s also a bit of a techno-determinist angle to that, i.e. oh well, AI exists, so therefore we have to use it to its maximum extent. I’m not an AI maximalist, but at the same time I do think it’s a little bit naive to believe we’re just going to shove the genie back in the bottle. I think you’re quite centrist when it comes to privacy, and sensible and stuff, but I’ve seen some people be like, well, we need to ban AI because it’s invasive of privacy and people’s IP, which is just not going to happen.
Laura Hilliger: [00:31:56] I mean, this is a strange thing that I’ve always felt about society, this competition thing. We have learned it from an early age. I think that the Internet, and social media in particular, has exacerbated the way that we compare ourselves to others and the way that we feel about it. But I don’t actually think that this is a human nature thing. Most humans, when you get to know them, are quite giving and empathetic, and interested in true collaboration and cooperation. I definitely feel like more people are expressing that today in our various communities; I feel like people are more generous with what they give and what they put out there. Maybe this is because I surround myself with those kinds of people, and I am a co-operator and an open sourcer, but I feel like with the climate movement and with the younger generation, that collaborative spirit is sort of bleeding out a bit more. And I wonder if the activism that’s happening today is going to topple the structures in society that are holding up that competitive spirit. Because we don’t live in Mad Men anymore, right? It’s not... I was about to say it’s not like the 60s, where only men have jobs, but then I thought about the patriarchy and got sad.
Doug Belshaw: [00:33:40] Well, it’s interesting that you bring that up because at the moment we’re talking about AI as if it’s one thing.
Laura Hilliger: [00:33:47] Yeah, that’s true.
Doug Belshaw: [00:33:49] Um, but it’s actually lots of things. And there were lots of points of controversy when things like the latest version of ChatGPT came out earlier in 2023. One of them, from the right of politics, was: this is a woke AI, this is a left-wing AI. So, what I think will pretty obviously happen is, well, first of all, on the fairly neutral side, as I think you can do now with GPT-3.5 Turbo, you can feed your company’s data into ChatGPT and then it will know everything about your business and be able to recommend things based on it, which is great. But the other side of it is that things will get trained on horrible stuff. Child pornography, I’m pretty sure that’s already going to be out there. And it’s going to get trained on awful hate speech, the writings of Hitler and Mussolini and Nazi figures and whatever, and also modern-day fascists like Ron DeSantis and Donald Trump and all that kind of stuff. So, just in the same way that people check different weather sources and might have a favourite weather source, which is pretty neutral; if you take it up a notch, you might choose to watch a news channel or get your news from a newspaper or online source that focuses on climate change, or doesn’t focus on it because you don’t think it’s real; and if you take that up a notch again, you might only get your information via the filter of an AI which specifically picks out information from the Internet that already chimes with your worldview.
Laura Hilliger: [00:35:45] Right, and that's already happening, right? And that's where cognitive liberty comes in. Even without the AI advancements of the last year, we are already seeing that algorithms are influencing how people react to different kinds of information. Fact is no longer a thing; facts can be argued at this point. And I guess, for me, the cognitive liberty piece is: how do we ensure that people keep cognitive liberty if they're being manipulated by these algorithms or by AI? How do we make sure that people don't get so dug into their perspectives, and actually try to see the other side? But this is a question that's been on educators' minds for probably hundreds of years, because this is critical thought.
Doug Belshaw: [00:36:47] No, it is. And there's no better place to think about critical thinking, I think, than religion. Okay. So just before I went on holiday I read a book called Cultish. I was talking to you about this on the porch in Colorado, and it talks about the language around cults. And at the same time, I'll not be able to find it quickly now, there was something about transhumanists, people who are almost creating a God-like figure out of AI, which you can imagine happening. So what I see in my adult life, of which I've had almost 25 years, is that people don't like making decisions. There's only a small percentage of the population who actually like making decisions. And so people like deferring decisions to a process, a system, someone else, whatever. AIs will make a decision for you; they don't have an ego to get over. If you ask one to make a decision, it'll just say, this is the best option, whatever. So as that happens more and more, I can see a world where AI starts being like the oracle, almost like a religion kind of thing. And I actually watched a Netflix film about this, which was incredibly average, called Heart of Stone. It had Gal Gadot from Wonder Woman in it.
Laura Hilliger: [00:38:26] Yeah.
Doug Belshaw: [00:38:28] And the moral of the story, without spoiling the plot, which was paper thin, was that yes, you can have an AI model which can predict the likelihood of something, but at the end of the day, sometimes humans need to do unpredictable things, and there's something about the human spirit, and whatever. But this is true. You can predict all of the things in the world, you can have very rational explanations of things, but that's not the quintessential nature of being human. And I think that the more we outsource decision making to AI in a cold, hard, rational way, the more we diminish what it means to be human and a bit weird.
Laura Hilliger: [00:39:15] So I want to tell you about how I make very difficult decisions. It involves consulting the I Ching, which is an ancient Daoist text, a book that was used to make predictions. A colleague of mine from Greenpeace, Brian Fitzgerald, who I sadly only got to work with for six months or so because he was kind of on his way out, is now running a very cool organisation called Dancing Fox, and I actually got to work with them a bit. Anyway, one of his side projects is an I Ching app. What you do is you put a question in, and it does a casting for you and lays out what the I Ching would say based on that question. And it's hilarious because it's one of those things like having your palm read, or like when somebody tells you that they're a medium and you ask them a question and they have ways of saying things that apply to everybody, but you think it really applies to you. It's sort of like that, except that it's a three or four thousand year old text. And what comes out are these very convoluted stories of warriors and mountains and kings and lakes and 'the water doesn't subside' sorts of anecdotes. And when you read them, it forces you to really reflect on what your actual question is. Yeah. And I don't use it that often, but I do use it from time to time when I'm really struggling with something in my head and I need something to prompt me to think about it in a different way, and I'll use that text. And it's not AI, you know; he did this by hand and has spent 20 years translating this text into this app. But I think it's interesting, because nowadays we can ask AI questions that will help us reflect, which I think is a really interesting use. But people have been doing that for hundreds of years.
They just used to consult other kinds of texts that weren’t necessarily generated on the fly. That’s the end of my cool story.
Doug Belshaw: [00:41:38] No, that story reminds me of The Hitchhiker's Guide to the Galaxy, which is on my mind this year because I'm 42 years old, and 42 is of course the answer to life, the universe and everything. In The Hitchhiker's Guide to the Galaxy, which I haven't read for quite a while, from what I remember they ask the computer what the answer is to life, the universe and everything, and it comes back after a long time and says 42. And they're like, 42? And the computer says, well, you didn't understand what the question was. And then they send it off to find out what the question is, which turns out to be something like 'what is seven times six?'. But it's a humorous way of saying that we're always looking for the answers to stuff when, as you've pointed out, we don't really know what we're asking. And one of the great things, which I think I was showing you the other day, is that one of the best things you can do with things like ChatGPT and other large language models is to say what you want to do and then ask it what questions it has for you. The questions it asks mean that you have to refine what it is that you want, so it's kind of a thought partner in that way. Then you answer the questions, and it comes up with some stuff, and that's not quite what you needed, and then you adapt it. And that's the kind of interplay I see with humans and AI. It hasn't meant that you've replaced me as someone to work with at the co-op. I'm not out of a job now because of ChatGPT; it's augmentation.
Laura Hilliger: [00:43:21] Yeah, I think this doesn't really tie back directly to the idea of cognitive liberty, but I think it's an example of it. Choosing to use AI, choosing to ask it questions, choosing to basically use it as a reflective tool, instead of choosing to be afraid of it and to focus only on the bad. I think that's what cognitive liberty is: being able to have a choice about how you make decisions, how you reflect on things, what kind of creativity you use, how curious you are, all of these things. And the thing is that AI is not a human brain. It doesn't have a choice. It can only respond, based on its model, to the prompt that you've put in. And I think it's really interesting that people are thinking so strongly about all of the 'how it's going to take their jobs' kind of stuff, as opposed to: how can this help me learn? How can this help me become a more fully fledged human being? How can this change my thought?
Doug Belshaw: [00:44:36] And we're coming at this from a position of privilege, you know, as well-credentialed people. I'm not going to say that you're middle-aged; I'm middle-aged, and we work in knowledge work and stuff. We're recording these episodes this season out of order, so I might have already told this story as part of this season. But as I mentioned at the start of this episode, my wife and I pulled out of buying a house recently. And I can imagine a way in which this could have been done in a much less rational, much more hyped-up kind of way. You know, the kind of Fox News 'oh my goodness, this is happening', or the Daily Mail 'migrants are coming to steal your jobs' kind of thing. So we were wanting to buy a house which was in a flood zone, but it had flood mitigation. There was still some risk, and we didn't understand the risk. We had a report commissioned, I wrote a blog post about it, etcetera. But what I got in response was extremely rational, and what I didn't know was the meaning of it. I needed the human kind of emotion to be added, but not in the 'oh my goodness, do not buy this house, are you insane' kind of way. I wanted to know what the cumulative risk was. And one of the things I did, which I mentioned in the post, was the exact thing which I've just said you shouldn't do in terms of doing user research by personas: I got ChatGPT to come up with a range of five personas and asked it how those five personas together would decide how to make a purchasing decision for the property. And as a result, it allowed me and my wife to reflect on what was important for us. It kind of enabled the discussion. So I would say, and it's kind of stretching the definition of cognitive liberty,
It expanded our ability to make rational decisions and therefore expanded our cognitive liberty, because it wasn't being invasive in terms of our emotional life or brain activity. It was actually helping us make better decisions, if that makes sense.
Laura Hilliger: [00:46:52] We'll link to that post. Got it. Yeah. I mean, we certainly went all over the place with this conversation and talked about AI more generally. But I had a bunch of coffee before this talk, so I had a lot to say. We're definitely going to link to the episode that we talked about, because they're talking about cognitive liberty in a different, more specific way, specifically in relation to human rights and the individual right to self-determination and how to protect that in regulation, which is very important, of course. And I think that we're thinking about all of the different ways that AI is augmenting our brains, not replacing them, not taking over. And we didn't talk about mind control and these kinds of things, which is certainly stuff that we think about.
Doug Belshaw: [00:47:56] But yeah, I think a lot of this is going to be like blockchain, with all the talk of changing the world, and it ends up being boring back-office technology. I think the same will be true of AI. So, for example, councils in the UK have had their budgets cut in real terms year on year under our current government, which means that they haven't got money to spend. If an AI can do 80% of what they currently do much, much more cheaply, they're going to replace stuff with AI. But the trouble is that a lot of this, as you point out in your blog post, is in black boxes, with models and data that are 'owned and accessible by a small segment of the technology industry', as you put it. So that means that effectively our democratic and civic infrastructure is being replaced by decision-making tools which are beholden to a small elite, and it just entrenches the neoliberal capitalist economy. Everything comes down to that. This could be amazingly emancipatory, but only if it fits into a more just way of conceiving society.
Laura Hilliger: [00:49:14] And I think this is where the work that we've done for 20 years comes in: open innovation, making sure that transparency is a core component of the things we do, putting things out into the world, using Creative Commons licences, promoting open source and open tools and all of these things. It's probably why we're looking at those black boxes and going, okay, cool, but be careful when they start to replace, as you say, these civic and democratic infrastructures. Although, to be fair, open government is not something that every government embraces either. These are similar topics that we continue to work on. So, shall we wrap it up?
Doug Belshaw: [00:50:03] I think we should. I think we should. And it's easy to focus on the negatives of this, but I am actually optimistic that things like AI, especially if they're not captured by the right of politics and climate change deniers, potentially have a way of ensuring that we build and develop and mitigate and advance stuff to do with things like climate and democracy in a way which is going to be useful for humans. I think if we go beyond that and start doing things like the transhumanist agenda, things which are good for machines and not humans, then we run into difficulties. But yeah, from the stuff that I've read and listened to, I don't think we're actually much closer to an AGI, a general intelligence. I think we're still in the realm of specific AI, and that's fine, as far as I can see.
Laura Hilliger: [00:51:03] Okay. Well, good conversation. I will talk to you again later. Thanks for listening, everyone!
Doug Belshaw: [00:51:11] Yeah. Next time, let’s talk about AI literacy!