InTechnology Podcast

#84 – What That Means with Camille: Cybernetics

In this episode of Cyber Security Inside: What That Means, Camille jumps into a conversation on cybernetics with Genevieve Bell. For International Women’s Day, the podcast is honored to have Genevieve, an accomplished, thoughtful, and influential guest.

Genevieve Bell is Director of the 3A Institute, Senior Fellow at Intel, Director of the School of Cybernetics, and Distinguished Professor at the Australian National University.

The conversation covers:

  • What cybernetic technology is, the history of it, and what is happening with it and artificial intelligence today.
  • How computer science is meeting climate change, privacy, and social sciences in the field of cybernetics.
  • How cybernetic technology is intricately connected with history and changing perceptions of privacy and control.
  • What sustainability looks like in cybernetics and cyber security.

And more. Don’t miss it!

The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.

Here are some key takeaways:

  • Cybernetics has been around for decades – over 80 years. It may sound familiar because of science fiction. However, it began as science fact around World War II. It was created to help manage the problem of control and communication in machines and humans. 
  • The group of thinkers who shaped the field were worried about what technology could do in a destructive way. Cybernetics was supposed to be a framework for how we would manage technology and the critical infrastructure we would create with it.
  • As we have moved into the 21st century, the way we think about control has changed. Cybernetics is about understanding how things flow and preserving the ability to intervene in that unfolding if it looks like we need to.
  • Technology is already in our bodies – vaccines are a good example. Again, the key thing in cybernetics is control, including control over your own body and decisions. Because of how history has unfolded and the role prejudice has played, some people have less control – we need to ensure that consent is part of the process, and also ask who is benefiting from the technology.
  • Each time we develop technology and computer systems to make things more efficient or convenient, or to better perform specific tasks, we have to look at every angle. How do self-driving cars impact pedestrian safety? And how do they affect people with physical disabilities who need more time to cross the street?
  • Surveillance and data collection are a huge part of both AI and cybernetics. Surveillance can be good, such as when we surveil wildlife populations to ensure they are healthy, or our water systems to ensure everything is working properly. A different approach is needed with humans. Many large companies have made decisions about what technology they should or shouldn’t use because they are waiting on policy, legislation, or regulatory settings.
  • Because how people think about and manage privacy has changed over time and will continue to change, designing technology and AI systems for privacy is very difficult. People don’t just worry about their personal data and cyber threats; they also worry about what judgments will be made about them from that data.
  • Data is always based on what you have done in the past, which worries some researchers. This means that it is less likely for you to see something outside of the frame of reference you already have, which might limit your ability to grow and change and connect with others.
  • Sustainability plays a large role in cybernetics in a few ways. The first is knowledge: making sure that many people have that knowledge and a role in developing the technology within the community where it is used. Then it is sustainable over time in a healthy way.
  • But it is also about climate change and environmental impacts. We have to ensure that technology is not contributing to the environmental problem.

Some interesting quotes from today’s episode:

“It’s a term that kind of noodled around in science fiction for a long time. The reason we got it in science fiction, however, is it started in science fact.” – Genevieve Bell

“And coming out of World War II, it was really clear that computers weren’t just going to be big machines that crunch numbers to aim guns. They were going to be objects that could sit inside of decision-making frameworks and industry and scientific discovery, and potentially even inside people’s homes.” – Genevieve Bell

“It lets you think about systems. It’s a systems level approach that argues pretty persuasively that you can’t think about technology without thinking about humans and the environment.” – Genevieve Bell

“Think about the ways in which certain bodies have had work done to them without their consent. And you start to realize that the notion about the most recent nanotechnologies or various forms of computational technologies in our bodies are actually part of a much longer legacy where those questions are already highly charged.” – Genevieve Bell

“What information is being collected? Who has access to it? What sense is being made of it? How is that sense-making being used for further determinations?” – Genevieve Bell

“Whether that’s the lightweight things that sit inside Netflix, or inside dating apps, or inside Amazon, which is really about determining who you are and what you’d like in order to work out what you might like next. So think about that, and use the word ‘desire…’ How do we help satisfy your desires for things or people or stuff.” – Genevieve Bell

“Privacy is a relatively new term, and our notions about what is private and what isn’t are incredibly fungible and have changed remarkably, even over the arc of our lifetimes. And I imagine it will continue to do so.” – Genevieve Bell

“One of the problems with the way data is often mobilized is that what it does is that the choices you are given at any kind of moment in a recommendation engine, for example, are based on what you’ve already done. So it’s always based on the past. And one of the things that happens there is that you can then get locked into who you’ve been and limit your possibility of growing, changing, being something different.” – Genevieve Bell


[00:00:37] Camille Morhardt: Hi, and welcome to Cyber Security Inside. We’re doing an episode of What That Means: Cybernetics with Genevieve Bell. She is a Distinguished Professor at the Australian National University and also Director of the School of Cybernetics and the 3A Institute there, which she founded. She’s also the Florence Violet McKenzie Chair at ANU for promoting the inclusive use of technology in society.

She’s a Fellow of the Australian Academy of Technology and Engineering, and the first Distinguished Fellow at SRI International, which was founded by the trustees of Stanford University. She’s also Vice President and a Senior Fellow in the Advanced Research and Development Labs at Intel. And if that is not enough for you, she is also an Officer of the Order of Australia, which is the highest honor the country gives for service to humanity at large. That’s hilarious. Welcome, Genevieve.

[00:01:28] Genevieve Bell: You’ll be happy to know that it comes with a pin that I am required to wear on all occasions. It sits somewhere on my collar and basically signals that I am a good human being and can be called upon to do things in the service of humanity. You’re right, it’s crazy.

[00:01:44] Camille Morhardt: So you have accolades at the highest level across industry, across academia, across government. And I think that most people would be very impressed by that, while others might think “suspicious.”

[00:02:02] Genevieve Bell: Yes. And I sometimes sit somewhere between those two things. I grew up here in Australia. Today I’m on the land of the Ngunnawal and Ngambri people in Canberra. I should acknowledge where I am, pay my respects to the elders of this place, and acknowledge that I’m sitting on land that was always sacred and was never ceded. I was raised in a culture that isn’t very good with people who excel. We talk about “Tall Poppy Syndrome” here in Australia: don’t put your head above the parapet or it will be chopped off. So the American narrative of constantly striving, of pulling yourself up by your bootstraps and being amazing, is kind of a counter-narrative to the one where I grew up.

So I find myself uncomfortable with my own biography sometimes. When people read off all those accolades, it makes me twitchy and slightly nervous. And I also fully recognize that in some parts of the world, some of those marks of esteem ought to be deeply suspicious. It’s like, “really? What did you do to get that? And what does that mean? And who really thinks you’re that? And, oh my God, what should we do with any of those things?” Yeah, no, mostly when people do what you just did, which is read out my biography, I sit there and twitch uncontrollably.

[00:03:14] Camille Morhardt: You’re obviously influential. So I’m kind of interested in what pulls your internal compass? What inspires you? What impact do you want to have on the planet?

[00:03:26] Genevieve Bell: Those are such good questions. I had an unusual childhood. I grew up part of my time in Melbourne and Canberra here in Australia, and part of my time living in central and Northern Australia. My mother was an anthropologist and I grew up on her field sites and I grew up living in indigenous communities. I grew up exposed to the extraordinary challenges that come with being an indigenous person in Australia and with what it means to navigate the tensions between traditional owners, colonial forces and power. I don’t want to keep reproducing the world that I grew up in and even the world that I find myself in today.

And so for me, the decisions I make, the places I choose to work, the work I do: always sitting inside of it is a core belief that “if this work is done well, things will be different as a result of it.” And the difference might not happen on Wednesday. It may take a little while to get there, but that’s usually what’s driving it.

[00:04:22] Camille Morhardt: I want to stick with the standard format, which is that we’re going to talk about cybernetics. Can you spend a couple of minutes defining what that is, to bring people up to speed? I don’t think that’s a term many people are super familiar with today, even though it actually has kind of a long history.

[00:04:42] Genevieve Bell: You’re right. Cybernetics is a term that’s been around at least 80 years, maybe a lot longer. The reason it may sound familiar to podcast listeners is that it’s a term that’s been kept alive in science fiction. You will have encountered cybernetics in the Terminator movies. You might have encountered it in The Matrix. You might’ve encountered it even in Ready Player One and other things more recently. So cybernetics turns up as a term there. If you’re from the other big science fiction canon, the British one, you’ll know it from Douglas Adams and the Sirius Cybernetics Corporation, which made Marvin the Paranoid Android and prescient lifts.

So it’s a term that kind of noodled around in science fiction for a long time. The reason we got it in science fiction, however, is it started in science fact. In the United States, cybernetics’ history starts in World War II, where it is a term brought into currency by a man named Norbert Wiener, who was a mathematician at MIT and later an expert in AI and robotics. He coined the term in a book he published in 1948. And he meant it to mean a science, or a grammar, or a language and a set of techniques that would enable us to manage the problem of control and communication in machines and humans; that is basically his argument. And you’re like, “well, what the hell does that mean?”

Well, you’ve got to then understand a little bit of the history of technology. Back during World War II, we saw computers as we would understand them really start to come into common currency. The first of the computers that are the ancestors of the things we are communicating on came into existence during World War II: the ENIAC, as well as a few others. And the presence of those computational objects changed the way people thought about data, changed what people thought about communications, and changed what people thought was possible from technology.

And coming out of World War II it was really clear that computers weren’t just going to be big machines that crunch numbers to aim guns, but were going to be objects that could sit inside of decision-making frameworks and industry and scientific discovery, and potentially even inside people’s homes, to do all sorts of things that were still a bit vague.

And so a collection of big thinkers, who ran the gamut from mathematicians and chemists and physicists through to philosophers, historians, psychologists, anthropologists, and then public policy people, were sitting there in the aftermath of World War II, saying, “well, that was a pretty cataclysmic thing and we created an enormous amount of damage and destruction.”

[00:07:21] Camille Morhardt: Right, they had seen the destructive power of technology.

[00:07:23] Genevieve Bell: They had, and they were determined that what came next shouldn’t look like that. They had seen, in some ways, the worst that human beings could do to each other in the war and were determined that what came next needed to be more deliberate and needed to have that notion, in Norbert’s words, of “steerage and navigation”; that we should manage the machines, not let the machines manage us.

And so for Norbert and his colleagues, the idea of cybernetics was the idea of trying to create the possibility that we could build a framework for how we would manage technology. And for him and his colleagues, that framework had a couple of really important features. One was that you needed to understand computers as part of a system, right? The computation didn’t exist by itself. It sat inside a system that had technologies, the environment, and humans, and we needed to look at it holistically that way. As you talked about computation, you couldn’t divorce it from talking about humans and the ecosystems in which they found themselves; which, already in 1946, 1947, feels like a pretty radical proposition, because it still feels like a radical proposition in 2022 (laughs). We should talk about technology, but we should probably talk about humans and the ecology as well.

The second piece they said was that you needed to understand circular causality or the idea that any piece in the system had a reciprocal relationship with other pieces in the system so that you couldn’t just focus on one thing without thinking about what the consequences were for the others and about what the relationship was, not just the components.

So put another way, cybernetics was also the beginning of systems engineering. Cybernetics was the theory; systems engineering was, in some ways, its applied version. And so people like Claude Shannon, grandfather of Information Theory, at Bell Labs took cybernetics and turned it into systems engineering, as did many other people involved in those conversations. It’s a theory of machinery and computational machinery that was created in the 1940s and the 1950s.

A second answer to that, however, is a much older one, because cybernetics was also a conversation that was happening in France in the 1800s, and even with the ancient Greeks, way back when. It’s an idea that’s been around for a long time. For me, the important pieces of it are that it’s a systems-level approach. It lets you think about systems. It’s a systems-level approach that argues pretty persuasively that you can’t think about technology without thinking about humans and the environment, which for me feels really useful. I think there’s a whole lot about how cybernetics unfolded in the ’40s and ’50s that is useful for thinking about how you would build organizations now.

[00:10:00] Camille Morhardt: There are lots of questions coming off of that, but I have a couple. You’re talking about people figuring out how technology is going to affect humans. For example, now we’re hearing about technology literally going inside of humans: this concept of a human-brain interface and other kinds of mechanisms by which we can enhance the human experience by having technology literally become part of a person. So they’re inseparable at that point. What happens to this notion of control of the machine, or controlling its intent, and also its ability to potentially control you down the line, if it’s inside of you? Can you comment on that relationship?

[00:10:51] Genevieve Bell: I can, and I think one of the challenges here has to do with how we understand control in the 21st Century versus how it was understood in the 20th Century. We think of control, and you just used that language, as “I want to basically be in charge of it,” as opposed to control simply meaning: how do we understand how things flow? And I think for Norbert and his colleagues, the notions of control and steerage and navigation were an important cluster of words. So it wasn’t about absolutely being in charge of everything; it was about how you ensure some ability to see how the thing is unfolding, have points of intervention in that unfolding, and be able to theorize what happens if you do this and what the flow-on effects are.

The ideas that you’re unfolding are a really different set of questions. Where are the limits of human autonomy? Where do we get to say “no” to things? Because really being in control, in some ways, is about whether you have an ability to decline something, not an ability to accept something. Although I think being able to say “yes” to things is equally interesting in some ways.

So the notion of where technical systems will impinge on our bodies, and how we think about that, is a question that extends far beyond putting computational technologies in our bodies, because we’ve been putting technologies in our bodies for a really long time. That is of course what vaccines are, and look at the arguments we’re having about those. There’s no surprise, in some ways, that those arguments are being surfaced right now, too. We have long put the latest technologies in our bodies. Go back to the early smallpox vaccines, or to the polio vaccines of the last century, or the notions about pacemakers or IVF technology, all of which were about putting technologies in our bodies. And every single one of those has been highly and hotly contested. As have ideas about more benign things: cataract surgery, organ replacement. You know, there are lots of places where our bodies have served as sites of serious technological intervention.

And then there have been places where it’s happened without consent. Think about the Tuskegee syphilis study. Think about various kinds of forced sterilization in different communities. Think about the ways in which certain bodies have had work done to them without their consent. And you start to realize that the notion about the most recent nanotechnologies or various forms of computational technologies in our bodies are actually part of a much longer legacy where those questions are already highly charged and, in some ways, quite hard to unpick and unpack; though you are usually safe in asserting that in those places there are certain bodies that tend to be less able to say “no” than others, and certain bodies upon which those experiments tend to be enacted in ways that are less thoughtful.

And so for me, as we think about how we might want to respond to the latest generations of technologies being put in and on our bodies, it’s not to imagine it’s the first time we’ve had those conversations, but to start to look at some of those other histories and then ask questions about how consent is constituted. Who gets to say “yes” or “no”? Who is making those decisions, and who is, in some ways, benefiting from the consequences? These are questions that are also regulatory; these are questions that sit in law, not just in philosophical debate.

[00:14:13] Camille Morhardt: And whether we want to look at this from inside the body, or at information being collected outside the body, with or without permission: say, as we migrate toward a smart city or autonomous driving and information is being collected. We can posit that the intent is solely good, “let’s make traffic more efficient,” but there’s still this ability to collect information on who is in the car and where they are going, based on their map system.

[00:14:42] Genevieve Bell: Oh, absolutely. Not to mention that deciding a smart city’s traffic system should orient to efficient flows of cars is already making a set of decisions about whose experience of the city is privileged versus others’, right? Because we know that for most drivers to have a satisfactory experience of driving, you want pedestrian crossings minimized. You want the time each pedestrian crossing lasts minimized. And in doing that, you’re usually making it harder for people who are not physically abled, for people who have mobility problems, for people who are lugging suitcases or prams.

So suddenly you realize that in making it efficient for cars, in that instance, you may be making it inefficient for humans, and certain kinds of humans will suffer more than others. And you’re exactly right. What information is being collected? Who has access to it? What sense is being made of it? How is that sense-making being used for further determinations?

And so there’s something in all of that for me, where you have to start asking, in some ways, a really hard and often banal set of questions, but really important ones, which are about power, time, regulation, policies, and standards.

[00:15:56] Camille Morhardt:  Beneficiary.

[00:15:57] Genevieve Bell: Exactly. And also about intent, and then realizing that deciding you’re going to do something now, because your intent in 2022 is this, doesn’t mean that what you do won’t have both unintended consequences and full-on consequences that may manifest 10 or 20 years from now, which are much harder to think about.

[00:16:15] Camille Morhardt: I know you’ve talked about a couple of uses of AI being surveillance and desire, and I’m wondering if you can elaborate a little; it seems like we might be touching on the edge of that right now.

[00:16:27] Genevieve Bell: Oh, absolutely. It’s really important to think about the ways in which data is collected, by whom, and for what. We often talk about artificial intelligence in this delightfully ahistoric way. Artificial intelligence as a term was coined back in the 1950s. It’s actually linked back to all of those cybernetics conversations, because the same people who were talking about cybernetics went on to talk about AI. I think you could say, somewhat flippantly, that the AI conversation was basically cybernetics stripping out the people and the environment, because that was the messy stuff, and just focusing on the tech (that does no one credit, so I really wouldn’t want to make it a serious argument). But there is a sort of a way in which the AI conversation proceeds in talking about having machines simulate humans, but not necessarily about what it means to have the human still there.

One of the things about AI in the 21st Century is that it’s not just being built by governments; it’s being built by commercial enterprises. And whereas in the 1950s much of the conversation was about the research agenda that would be AI, these days we also talk about who’s producing it, what they’re doing, and what the intent of it is, right? Is it about collecting data in order to make different kinds of determinations? And why would those be?

One of the challenges, of course, is you say surveillance and everyone thinks that’s bad. Of course there is some surveillance that isn’t bad. We surveil water systems and septic systems in order to determine if there are problems so that we can fix them; there, surveillance is hyper-necessary. We surveil wildlife populations to determine their healthfulness. But it’s certainly the case that some of the data being collected at the moment creates enormous challenges. And as a result, we have seen large companies make decisions about what technologies they are using, and indeed stop using certain kinds of technologies, until they can get the legislative, standards, and policy frameworks and settings right. So I would look there to what, for instance, Microsoft did with camera and computer vision and facial recognition technologies: stopping work on them until they could find a policy setting they were comfortable with. I think that’s an interesting example of saying, “yes, there’s a technology, but you can choose not to use it or deploy it if you don’t think you can get to a policy setting that you have comfort with.”

We also know that some forms of new technologies are being used not just to look at what we’re doing, but to look at what we’re doing in order to decide what we might want to do next. Whether that’s the lightweight things that sit inside Netflix, or inside dating apps, or inside Amazon, which is really about determining who you are and what you’d like in order to work out what you might like next. So think about that, and use the word “desire.” That’s how I would think about it, right? How do we help satisfy your desires for things or people or stuff?

[00:19:21] Camille Morhardt: So, what about this migration from security to people sort of adding privacy into that conversation? I think part of that is because we’re starting to surveil people to help them with their desires, or maybe to help with traffic flow or whatnot. And I know there aren’t standards all over the world when it comes to privacy; even where there’s a common definition for it, it varies. It feels like now the tech world is moving more to the term trustworthiness. Can you help? Like, what’s after that, and do we just keep evolving these terms?

[00:20:00] Genevieve Bell: Gosh, that’s another really interesting question. I think it’s important to talk about the difference between security, privacy, trust, and risk at least, and maybe responsibility, all of those being slightly different things, right? We’ve bundled them together, which I also think is really interesting. You’re right, it’s not just about whether they are replacements, but what it says that we need that bundle of words to describe a set of phenomena.

Privacy is a relatively new term, and our notions about what is private and what isn’t are incredibly fungible and have changed remarkably, even over the arc of our lifetimes, and I imagine will continue to do so. Information that we would never have discussed in public even 30 years ago is discussed even by politicians, which is quite remarkable. One of the challenges we’ve had sitting inside the tech sector is that designing for privacy is not designing for a fixed thing (laughs). It’s not like designing for the voltage that comes out of the wall, which is relatively standard. How people think about, manage, and engage in privacy practices has changed over time, changes for individuals over the arc of their lifetimes, and is different across different platforms. So you’ve got that interesting dynamic.

My suspicion is that this is one of the places where legislation hasn’t kept up, because legislation about privacy gets written at a moment in time. But everyone has an idea about certain kinds of data that can’t be released or that need to have more safeguards on them. In the US, that’s particularly true around medical and healthcare data. In Europe, it’s a much broader set of data that’s covered that way.

One of the challenges, of course, is that as humans we don’t just worry about our personal data. We worry about the ways in which that data is used to make judgments about us. So it’s one thing to say, “am I concerned that you might know how old I am, or what my education level is, or what my ethnic background is, or what my religion or national or sexual orientations are?” Those individual pieces are often called “data attributes,” and various kinds of organizations and governments are required to hold them privately. As a citizen or a consumer, you might be equally concerned about how someone uses that data to make a determination about what music you like, or what food you eat, or what kinds of clothes you like, and about what that says about you. And those pieces have been less about privacy per se; I suspect they are more about things like our reputations and judgments and tastes. There’s something in all of that that feels different from privacy.

[00:22:41] Camille Morhardt: Well, what about this notion that as AI, or some compute and algorithms, starts to narrow down who you are and kind of predict what you might like, you may also be limited in what you can access? All these filters are coming in. So you don’t know what you don’t know, because you’re not seeing information that, if the system knew nothing about you, it would have to provide.

[00:23:05] Genevieve Bell: It’s one of the things that I think is most interesting, actually. One of the problems with the way data is often mobilized is that the choices you are given at any kind of moment in a recommendation engine, for example, are based on what you’ve already done. So it’s always based on the past. And one of the things that happens there is that you can then get locked into who you’ve been and limit your possibility of growing, changing, being something different, or, I think as you put it, encountering things outside of the stuff that you know you’ve liked.

And I tend to like to imagine, maybe it’s foolish, that some of the most interesting moments I’ve had in my life were when I encountered something that didn’t work the way I thought it should. When I encountered something I wasn’t expecting, or when I stood in front of a piece of art, or read a book, or went to a movie that made me initially quite grumpy (laughs). Because what you’re doing is actually having to think through something that sits outside of your frame of reference. And I think you want people to have moments where they can encounter things they didn’t expect, because in those encounters you get to be transformed, and that feels like a really important thing. I do worry, at a level, that with our reliance on algorithmic structures that use past data, what we’re doing is locking ourselves into building the present and the future on a path that I don’t think is fair, just, equitable, or sustainable.

[00:24:38] Camille Morhardt: I want to ask you about something that we haven’t talked about; it’s one of the things that you work on as part of cybernetics, which is the environment and how that intersects with technology: trying to unravel the intertwined nature of the environment and technology and humanity. But I’m wondering about your use of the word “sustainable.” That is kind of an old word, too. Some people talk about “generative” kinds of environmental approaches, so I’m just wondering why you chose “sustainable.”

[00:25:15] Genevieve Bell: That’s a good question. I’m sitting here in Canberra; about 900 kilometers north of me is a river that’s currently in flood because it is raining here a great deal. That river is the Barwon River, and it flows across the New South Wales/Queensland border. There’s a town at a point where there’s an intersection of rivers and a huge flow-through of water when it’s raining. It’s also a place where there is a set of fish weirs built into the river that help collect and hold fish. Those stone weirs were built by the Aboriginal people local to that region. They were built, and archaeologists will argue this point, somewhere between 40,000 and 10,000 years ago.

[00:25:57] Camille Morhardt:  That narrows it down (laughs).

[00:25:59] Genevieve Bell: It does narrow it down! Either one of those dates makes it one of the oldest technical structures on the planet. And these are deliberate interventions into a water system, right? You have to understand hydrology and lithics and biology in order to make it work. They have been modified and adapted over many years, and they were still being used before the floods started in December. So here’s a system between 10,000 and 40,000 years in the making and in the keeping. A system that involves a sophisticated understanding of a range of technical systems, a sophisticated understanding of the environment, and a willingness to adapt to the changes in the environment over time. And one that allowed groups of people to gather on the banks of those rivers to make law and knowledge and family. And I look at that and I go, “okay, there’s a system that has endured and been utilized over a protracted period of time, one that has sustained populations and human endeavor. But it’s also one that has been sustainable.” It has been built and rebuilt into that place with a notion of how that place worked, rather than an idea of how we might overbuild it. And I tend to think of that as a cybernetic system; obviously, you know, 10,000 to 40,000 years ago I’m fairly certain no one said, “it’s a little bit cybernetic.” More like, “man, we need fish. We’re going to have to do that.”

There’s something about the idea of a system built to thread those pieces together, and one that was built with the idea that it should last over decades and centuries. And doing that meant that lots of people needed to understand how it worked, why it was doing what it was doing, and how it was doing that in that place. So it’s also about local knowledge; it’s about knowledge that’s transmitted over generations. So there’s something for me there about “sustainable”: an idea that has room in it to think about lots of important ideas. Information that is transmitted, things that are built into local places rather than imposed from somewhere else.

And as I think about what it means to imagine that for the 21st century, I do it mindful that I’m in a place that’s at the pointy edge of the climate change conversation, partly because the science is still being litigated by some parts of the spectrum here in Australia, but also because it is very clear, having had the bushfires before the pandemic, that we have choices here about how we want to live our lives that will require a different conversation. And technology sits inside that in two different ways. It sits inside of how we use technology to help us make better decisions about water flows and climate change, about the livability of cities, and about how we want to manage ourselves in this place as it changes. But there are also questions we need to reasonably ask about technology itself as part of that puzzle. How do we think about the next generation of server farms so that the energy budget they use is less than the one they use now, given those farms are quite power-intensive? How do we think about building computational objects that require less energy? How do we think about building computational objects that require less of all the other materiality that we know is complicated to acquire and has its own flow-on effects? How do we think about the other bit of the puzzle that we don’t spend as much time talking about: how we recycle technology when we’re done with it? How do we limit the negative impacts of having huge piles of no-longer-useful mobile phones and all the materiality that goes with them?

So for me there’s this sort of overlay I want to add that isn’t just how do we use technology to ensure we have a pathway forward, but also how do we ensure that that technology is not itself part of the problem? And how do we think about the whole lifecycle of our technical systems as part and parcel of that complexity?

Every time we want to build a microprocessor into something, how are we thinking about that end game? 

[00:30:21] Camille Morhardt: Thank you so much for joining the podcast. I really appreciate it.

[00:30:25]  Genevieve Bell:  It was lovely to see you.
