Genevieve Bell is a distinguished professor at the Australian National University and the Director of the Autonomy, Agency and Assurance (3A) Institute. She is also a Vice President and Senior Fellow at Intel Corporation. After completing her PhD in cultural anthropology at Stanford University, she joined Intel in 1998 and went on to establish Intel’s first User Experience R&D Lab, and co-founded its first Strategy Office, where she ‘spent her life in the future, returning to the present on the weekends’. In 2017 she returned to her home country of Australia to establish the 3A Institute, examining the human impact of AI at scale. She continues to support the Intel senior leadership group whilst creating a new branch of engineering at the ANU.
This transcript was created using the awesome Descript. It may contain minor errors.
Note: This is an affiliate link, where This is HCD makes a small commission if you sign up for a Descript account.
Jay: Hello, and welcome to Ethno Pod on This is HCD. My name is Jay Hasbrouck and I’ll be your host for this episode. I’m an anthropologist, strategist, and author of the book Ethnographic Thinking: From Method to Mindset. On this episode, I’ve had the pleasure of talking with Genevieve Bell, a truly accomplished anthropologist who I had the pleasure of working with at Intel and who has more recently founded a new institute at the Australian National University. Before we get started, I thought I’d share a little bit of background on Genevieve. Genevieve is a distinguished professor at the Australian National University and the director of the Autonomy, Agency and Assurance, or 3A, Institute there. She holds a doctorate in cultural anthropology from Stanford and established Intel’s first ever user experience R&D lab and later, its first strategy office.
In 2017, she returned to her home country of Australia to establish the 3A Institute, examining the human impact of AI at scale. She continues to support the Intel senior leadership group whilst creating a new branch of engineering at the ANU, as a distinguished professor and director of the institute. Genevieve has been featured in publications such as Wired, Forbes, The Atlantic, Fast Company, the Wall Street Journal, and the New York Times. She’s also a sought-after public speaker and a panellist at technology conferences worldwide. From my own experience, Genevieve is also a fascinating storyteller and a razor-sharp critical thinker who happens to share my personal charter to raise awareness about the relevance of anthropology.
In this episode, we talk about the relationship between technology and culture, something called cybernetics, as well as the story behind the formation and the purpose of 3AI, with a deep dive into the institute’s areas of focus, including autonomy, agency, assurance, indicators, interfaces, and intent. With that, I bring you Genevieve Bell. I’m really happy to be able to have this opportunity to spend this time with you, Genevieve, and hear about what you’re doing now, particularly since our history goes back pretty far. I thought I’d start by acknowledging that you and I worked together many moons ago at Intel. I was a researcher in your digital home group at the time. Did a lot of exploring.
Genevieve: That is correct. I remember all kinds of projects: thinking about the health of PCs, thinking about how you recycle computing technology when its lifecycle is over, about what people did with television and how they thought about content and how they sourced new content. Yes, lots of good projects.
Jay: Yes, and the phone, as well, was part of that mix.
Genevieve: That’s correct.
Jay: It’s great to be able to connect with you a little bit more in-depth here and have some people tune in and listen to hear what you’re up to. With that, why don’t we start with who you are, your background a little bit and what was your role at Intel, I guess, would be a nice place to start and where you’re headed now?
Genevieve: Sure, so the potted introduction that I always give of myself, because I think you have to do that location work right in order to make sense of what you say after that. I’m the daughter of an anthropologist. I grew up on my mother’s field sites here in Australia, in central and northern Australia in the 1970s and 1980s, so I spent my childhood living in Aboriginal communities with people who took me onto their country and taught me to be a good hunter-gatherer, so I spoke an Aboriginal language, I didn’t wear shoes, I got to hunt and kill things. Basically, #bestchildhoodever. It’s a long way from there in [inaudible 00:03:29] to Silicon Valley, where you and I met. I took the roundabout route via education, so I did my PhD at Stanford in the 90s, wrote a history of one of the first boarding schools for Native American children, and was an area expert in education, federal policy, feminist and queer theory.
You can see how that background would land you a job at Intel. Except not. I was on the faculty at Stanford when Intel recruited me, as part of an initiative they were embarking on in the late 1990s to help them think differently about how you would drive innovation at Intel. If you can cast your mind back, for your listeners, 20 years, to a period of time when the internet was still dial-up and when computers mostly meant desktops, and often ones that were not in your homes. There was a moment for companies like Intel and, actually, Microsoft, IBM, and others where they were starting to wrap their heads around what it would mean for the PC to become a consumer tool, a thing we had in our homes, not just in our offices, and a thing we didn’t just use for Excel and accounting software.
I think for companies like Intel, it was very much about: how do we get smart about what people might want to do with technology as a way of driving new innovation? I was part of a wave of hires into Intel in the late 90s of social science researchers. The idea really was: could we find a different way into thinking about innovation? I’ve spent the better part of the last 20 years affiliated with Intel one way or another. I built Intel’s first big user experience competencies, I drove that way of thinking into the business units and into the fabric of the company. I exited the company full-time two years ago. My last role there was co-leader of our strategy office.
My kind of specific job was really about how to think about what the next five to ten years of the world would look like in terms of technologies, human behaviours, legal practice, cultural desires, and how you thread the needle between all of those things. These days, I’m also a full-time professor at the Australian National University in Canberra, Australia. I am, much to everyone’s surprise, notably to both of my parents, a professor of engineering and computer science. The surprise being, my dad’s an engineer, so I think he thinks I finally got my act together. Then, specifically, inside of that, I’m actually heading up a small initiative inside the university; in effect I think of it as a startup inside the university, and our principal brief is to establish a new branch of engineering to take AI safely to scale. It’s an odd journey.
Jay: Yes. I’m curious, I get this question myself, as well, when people want to know, how does anthropology see into the future? How is it that you’re using anthropology to look five to ten years down the road? How do you respond when people ask you that?
Genevieve: I always think it’s a good question, right. Anthropology is very much a study of the present. It is very much about the deep nitty-gritty of daily life in the now. One of the things I think you become aware of as an anthropologist is how very slowly cultures change. Technology changes quickly, and it’s easy to get seduced by that timeframe and to think everything changes as quickly as the technology does. The reality is most cultures and their underlying value sets change incredibly slowly. The ways in which those values are manifested change a little more quickly, but the underlying, in some ways deep, structure of those societies changes very slowly indeed, which usually means it’s actually easier than one might imagine to say what five to ten years from now will look like. The reality is that it will often look remarkably like now, just with more blinky lights.
Jay: Definitely. Yes, have you found that transition having been immersed, obviously, full-time at Intel into making the shift to ANU, how has that been? What have been the biggest changes for you there?
Genevieve: Yes, sometimes people ask me how I’m doing, and I sort of think, we should ask the university how they’re faring. The hardest things are exactly the things you’d expect, right. I’m used to operating on a completely different timescale. One of my dearest colleagues here said to me early on in my time here, “I think you have a problem with timescales.” What does that mean? “Well… here at the university, short-term is kind of this academic year and next academic year. Medium-term is three to five years. Long-term is ten years.” I just looked at him with what I assume was blank horror and said, well, what’s urgent? He said, “Well, clearly, very little.” I was like, okay, you have to understand, where I come from, short-term is, well, really the next quarter.
Medium-term is the next 12 to 18 months. Long-term is kind of one of those fantasies you occasionally indulge in, and everything else is urgent. He just laughed. I think there’s something about that time thing that’s been hard for me to navigate. Then I do find myself, I think ‘crass’ is the wrong word for how they’ll regard me, but I do keep asking the same questions I asked at Stanford, which always raised eyebrows, which is: yes, that’s really interesting, but what’s anyone going to do with that? Back in the day, they called me a historical particularist or a historical materialist, and they never meant it kindly.
Jay: I’ve been trying on that term, radical empiricist for a while now thinking about what that means.
Genevieve: Well, there was this delightful moment I remember about five or six years ago, maybe. I was in a conversation right on the cusp of us moving from talking about big data to talking about AI. I was with these people and they just kept saying to me, “More data equals more truth.” All I kept thinking was, no, more data just equals more data. By the way, didn’t we deconstruct the idea of truth? Then I thought, my God, you’ve gone from being post-modern to just modern again.
Jay: Yes, completely flipped.
Genevieve: There’s this weird thing that I do: I vacillate between the notion that there are actually facts, which I think you could contest differently, versus the idea that data somehow equals truth. I’m not quite sure how to square the circle of those two things for myself.
Jay: Yes, the reason I was bringing up empiricism, too, is I think it would be nice to go back a little bit, because I know that some of what’s happening through AI has some really deep anthropological roots. In particular, cybernetics. I’d like to get a sense, maybe for people who don’t know what it’s about, of a good definition, and then how you guys are using that with 3AI.
Genevieve: Absolutely. The institute we’re running here really does have that starting premise of establishing a new branch of engineering to take AI safely to scale. On the one hand, that sounds ludicrously ambitious. On the other hand, it’s not the first time that an emergent technical system has required a new way of thinking, and in fact a new set of skills, to help manage that technical system out into the world. That’s certainly the original story of computer science: a story about emerging technical systems and the requirement for all of the various people using them to have a level of abstraction and a grammar for dealing with computing. If you go back even earlier, to the very origins of computing, there was a series of extraordinary conversations in the mid-1940s to the early 1950s about what the nature of computing might be.
It’s important there to remember two things, right. One is the nature of computing in the second half of World War II and the immediate years afterwards. Coming into World War II, when you talked about computers, you were actually talking about people who did maths calculations. They were mathematicians, basically, who did speedy calculations, for the most part, during World War II in order to aim military hardware better. They were working on telemetry for guns on naval vessels and doing a bit of cryptography, too. There were these rooms full of people who did maths really quickly, doing lots and lots of calculations. In many places, those people were women.
There’s a lovely academic paper by a woman named Jennifer Light called ‘When Computers Were Women’, which is a really interesting history of that early period of time. Coming into World War II, when you talked about computers, you were actually talking about mathematicians. Exiting World War II, when you talked about computers, you were increasingly starting to talk about these incredibly large electronic calculators. They weren’t yet digital, but they were electronic. They were able to do calculations ten to a hundred times faster than the humans could do them, which is an extraordinary step-function change. The people who were starting to think about what you could do with that computation were a vastly wide array of people. Some of them were the men, for the most part, who’d architected those early computers, so people like John von Neumann and some of his colleagues who helped build the [inaudible 00:11:55], which is basically the first computer.
There were other people thinking about the power of computing who came at it through telecommunications on the one hand, mathematics and scientific calculations on the other. One of those people was a man named Norbert Wiener. Norbert was a mathematician; he’d been involved in an enormous amount of really interesting stuff through the pre-war period. He is notably remembered in engineering as the man who first codified ideas about feedback and feedback loops. He’s a big person in systems and control theory. In World War II, though, he got really interested in thinking about the emergent space of computers as machinery and not as people.
About what it would mean to have these ever more powerful computational objects. He was really interested in starting to think about how you would theorise them. Not what you would use them for, but how you would make sense of them. He publishes a book in which he coins the phrase ‘cybernetics’. It’s a made-up term, right. He invents that term, bringing the ‘cyber’ piece out of the Greek, to do with being regulated and governed. He’s trying to think about: what is this world going to be when we have this computational capacity?
Jay: Yes, it’s a lovely word, actually.
Genevieve: It is a beautiful word and it is where we then get things like cyberculture and all of the others.
Jay: The cybers.
Genevieve: Yes, the cybers, all the ways we joke about the cyber. It starts with Norbert’s coining of the phrase cybernetics. At the time, cybernetics felt like this incredibly modern word, back to the modern; it felt like this incredibly unexpected-sounding word, it sounded very sophisticated. It got a lot of, well, it was buzzy. It would have been a meme if they’d had memes, but it wasn’t; it was just buzzy. Norbert was a man with a pretty good reputation. He was well-known. He published the book not just for an academic audience, but for a public audience. He was very much in dialogue with the thought leaders of the day. He really wanted to inject this conversation about the future of computing into lots of places. It’s important to remember, right, we’re talking at this moment now about out of the war and into the immediate aftermath of the war.
’46/’47. We’re talking about a time where, if you were in the know, the power of computing seemed extraordinary and the trajectory it was on seemed almost limitless, and the idea that you could have a computer that would be like a human brain seemed not only possible but probable, and likely within the decade. We’re also talking about a time when the unimaginable power of technology had been unleashed at a scale hitherto never seen. We are just on the other side of dropping two atomic bombs. They’d been a combination of the science of many people, but they were an example of what technology could do, and people were already starting to ask, “Is that what we should have done?”
There was an enormous amount of debate about the consequences of entering the nuclear age. You also have the unpacking of the rest of the arc of World War II, so you have debates about the Holocaust, about the cost of war. You have this sense of instability in terms of where the centres of power are going to be, and you have a diaspora of academics and intellectuals, notably Jewish, fleeing out of Europe. It is a time of extraordinary unsettled conversations about technology and its role in society and its consequences, and about who should be in those conversations, and about what was the role of the military.
Jay: Mead and Bateson jumped in there somewhere, as well, right?
Genevieve: That is correct. Right at this moment in time, Norbert and two of his other colleagues think, “We should probably have a big conversation about this.” I mean, to their credit, right. Norbert approached Mead and Bateson. Margaret Mead was probably at that point the best-known anthropologist in the world. Coming of [inaudible 00:15:49] had been published, she was working with the State Department a lot, she’d been working with a bunch of other American government departments to help do things like create the food pyramid, create the standards for caloric intake that we know, working on the school lunch program. She was doing a lot of work of bringing anthropology out of the university sector and making it count. She was a public figure in many ways.
Gregory Bateson, her third husband, was also at that point a quite well-known academic, a British anthropologist. The two of them had a fairly, well, heady marriage. You always get the sense that there was a lot going on in that marriage, in terms of two people who were intellectual powerhouses, very different kinds of characters, who were really pushing the boundaries of their disciplines in very different directions. Hers into the space of: how do we make this work meaningful outside of the university? His in terms of thinking about: how do you take anthropology and ask a series of questions about human consciousness and the human condition? They go on to have profoundly different legacies in terms of who they influenced and shaped.
You know, they’re also the parents of Mary Catherine Bateson, who was a fine scholar in her own right. It always feels like it was a pretty heady relationship, one where they had a wide set of acquaintances and conversations. For Norbert and a man named Warren McCulloch, there was this desire to have a bigger conversation about technology than the narrow one that had been had. There’s an organisation on the East Coast called the Macy Foundation. Not for Macy’s department stores, but for a man named Josiah Macy, who was a philanthropist. He wanted to convene these interdisciplinary, far-reaching conferences, although they were kind of unconferences, or ur-conferences before that was a fashionable term. The question was, who would you bring together to talk about cybernetics? Because that’s how they framed it.
The list of participants is really interesting, partly because it’s so diverse. You’ve got Mead and Bateson kind of anchoring the anthropological piece, but they brought with them people out of linguistics and psychology and human biology and evolutionary biology and zoology, and from all over the world: Israel, Japan, Mexico, Chile, Europe. No Australians, sad to report, and no Kiwis either that I know of. Then also women. Then a who’s who of the people who would have been the big protagonists in computing at the time, or at least in the technical field. Norbert, John von Neumann, Licklider, who would go on to be the man that would run DARPA. Vannevar Bush is involved in the early days, so the man who wrote the [inaudible 00:18:36] memo and who was the precursor to DARPA.
People coming out of Britain, so Ashby was there, who goes on to be a big cyberneticist, and Stafford Beer was at at least one of these. People who go on to be big names in machine learning and other forms of computing. It’s this kind of – Claude Shannon was there too, how can I forget? Claude Shannon, who basically invents systems engineering at Bell Labs, and Shannon’s theories about information. It’s an extraordinary collection of people. They have conversations, they circulate some papers in advance, they litigate things. They are in charge of individual topics; they haggle about those topics.
Those topics run everything from mind control, ideas about memory, ideas about octopus consciousness, ideas about childhood learning and development, ideas about the subconscious, ideas about technical systems and computation and abstract linguistics. They must have been extraordinary. There were ten conferences in all: eight of the big ones and two smaller ones. What they are all ultimately attempting to theorise is fascinating. On the one hand, there’s a thread running through it that’s all about: can we make sense of human cognition? The question there is to make sense of human cognition in order to imagine whether computation will ever marry it or match it. Then also, increasingly, this systems-of-systems conversation: can we start to think about what it would mean to imagine that the power of computing has to be always and already in dialogue with cultural systems and ecological ones?
Jay: Right, which is where Bateson kind of weighed in there, some of his work intersects with cybernetics there, yes.
Genevieve: Absolutely. Of course, Norbert was a smart and complicated man, so he made sure that the New York Times covered these conferences. He made sure that pieces of the conference were circulating out in society. Many of the protagonists took ideas from that conference back to their day jobs. It drove all kinds of activities, not always called cybernetics, right? When the conferences wind down in ’54, so the last one is in April ’54, cybernetics as a term has kind of become a bit unfashionable. Certainly in the U.S., it’s seen as being a little too close to social engineering, and possibly a socialist agenda, in a time when the Cold War is really heating up. A bunch of people involved take the ideas and start reframing them. Shannon is sitting inside Bell Labs at that point, and he, in some ways, takes these conversations out of cybernetics: about systems theory that has to include machinery, about where you put people in the loop, about how you think much more expansively about the boundaries of technology.
He’s really one of the prime drivers of what we would, in the early 21st century, call systems engineering. It has a clear home inside Bell Labs. I tend to think of it as: if cybernetics was the theory, systems engineering was the practice. It carries through Shannon’s work there and, ultimately, into a whole series of other places, the Apollo Project with NASA, the first embedded computer; all kinds of notions come out of there. Indeed, the guys from Bell Labs are the ones who would ultimately hire me at Intel, because the Bell Labs diaspora moved from Bell Labs to the west coast and into companies like Intel and HP.
Jay: Really? I didn’t know that.
Genevieve: You and I have that in our genealogy, yes. Chris Riley came out of Bell Labs, and so did a bunch of other people, like Carmen Agado, who started the Intel groups that hired me, so there are these long Bell Labs tentacles that reach into the west coast. You then had other people from that conference take those ideas back to places like England, so Ashby and Beer and Bateson take a lot of the cybernetic agenda and continue to rehearse it in the UK. Stafford Beer helps take the cybernetics research agenda into Chile, in South America. It unfolds in other kinds of places, but in some ways, the place that has the most influence is the place where the name is most lost. The last cybernetics conference is in ’54. The next time many of those protagonists gather together is in 1956, in the summer of ’56, the North American summer, where they gather at Dartmouth College for a six-week summer school convened with money from Licklider, money from DARPA, to bring together people that had been at that conference.
In particular Shannon, but von Neumann is in the background. That conference is the conference that coins the phrase AI, or artificial intelligence. It’s the conference that kicks off what an AI research agenda would look like, and that in some ways has shaped how that agenda has unfolded for, well, gosh, 60 years and counting. What’s fascinating about how it rematerialises at Dartmouth is that it’s stripped of conversations about culture and people. It’s stripped of conversations about the ecological, and what is left is a mechanistic notion of learning and of the brain, where the idea is much more one that comes straight out of the industrial revolution and its logic: that we want to be able to break down, into sufficiently small pieces that a machine can be made to simulate them, the acts of language acquisition, symbolic logic, physical action, task management, decision-making, and learning. Then, suddenly, you have AI.
The appeal for me of going back to that earlier set of conversations in the 40s and early 50s is the appeal of many voices in the room, from very different places, attempting to think through what the right questions are without deciding what the answers were. Framing those questions. Framing those questions in such a way that they had, by necessity, to make sense of technical systems, human systems and ecological systems. For me, it’s that troika, the technical, the human, and the ecological, that feels really powerful and important as we look out from 2019 forward.
Jay: Yes, clearly, many of these issues are still with us. They’re not going away.
Genevieve: Well, indeed and I think in the late 40s, early 50s, the cost of computing in a social and environmental sense wasn’t yet clear and the power of it was profoundly on display. I think some of those conversations about other models for computing must have felt more exotic and easier to put to one side in the face of this incredible advancing power of computing.
Jay: How do you see this threading through to 3AI in specific ways. I mean, clearly a diverse team, lots of different people contributing to the conversation from different backgrounds. Are there other ways where you’re pulling from cybernetics?
Genevieve: I think it’s the notion that you need to keep asking the question. The thing about cybernetics is that it wasn’t instantly about a solution; it was more about how we think about what the right questions were at the beginning. For me, I really want to be acutely aware of the systems focus in that. I think the forms of systems engineering that came into being in the 20th century are hugely important to us now, as we think about the fact that we are looking at systems that are not just systems of technology; they are systems that include culture and people and the environment. In our insistence on thinking of them as technical systems only, we have created some really interesting challenges. Look at what happened with Boeing and their aircraft, where they imagined that the technical systems were bounded, when in fact part of how the technology had to be understood was in reference to the manuals that needed to be written to explain it, and the people those manuals had to be certified by.
Where do the edges of that system lie? Well, they don’t lie with the physical case of the aeroplane, so there’s something about that ability to think of systems as more than just the technical pieces that feels increasingly important. As we imagine a world where the embedded digital piece of those systems is increasingly proactive and ubiquitous and “smart”, that feels like the cybernetic promise of 70-plus years ago. It feels like the only way you can contend with what that will mean is to want to pull all of these threads back together again. For me, the attraction of cybernetics is to go to a form of systems thinking that is informed by an idea of a system that has, by necessity, to encompass people, the environment, and the technology, not just the technology. That, as we think about what it would take to take AI safely to scale, feels hugely important.
Jay: Yes, and a very anthropological perspective. Surprise, surprise. Social media must come into this conversation in a pretty major way, I would imagine?
Genevieve: Yes, it does. I think in lots of different ways. I imagine had social media been around at the time, many of the people that were at these conferences would have been really good at it. Bateson liked a soundbite and so did Mead. Norbert understood the power of the press perfectly. There is a way of thinking about how very different it was to have those conversations when it wasn’t being atomised down into 280 characters. I don’t think that’s kind of what you mean. If what you mean is, how do we think about building a world where conversations about that world aren’t just contained to its experts, where many other forms of the conversation are happening and there are different forms of scrutiny and reportage, I think it’s a profoundly different moment in which to work and build than we’ve ever had before.
Jay: You’re thinking about the close relationship between those technologies and things that you guys are exploring, things like autonomy and agency. They’re right there in our faces in terms of social media.
Genevieve: Yes, in different countries. I can’t help but think that our imaginings of our future need to be broader than they often are. It’s easy to think the entire future is the one that we have seen in the U.S. or in Australia or England. We know that’s not true. Part of Mead’s brilliance, in particular, in those early conferences is that she understood that these things would unfold differently in different places, and was trying to have points of view that were informed by different lived experiences. I think that’s true now: to think about what social media means and how it functions. Even the label suggests a monolithic capacity that isn’t the case. I often push back on even calling it AI, right. I think we’re actually talking about artificial intelligences, and while that is a really awkward thing to say, AIs, not AI, there is some call for saying, well, listen, social media is not a flat landscape. What happens on LinkedIn versus TikTok is as different as Indonesia and Paraguay.
Jay: Yes, I agree.
Genevieve: I’m not making any judgements about which one is which in that; I’m simply saying there’s an inordinate amount of variation there. When we talk about social media, I think we are often talking about an imagined social media landscape that might have existed briefly, rather than the one that actually operates in a contemporaneous mode. Imagine the social media activities of the 20-something undergraduates I see in my coffee shop, versus my parents’ cohort, versus the Aboriginal communities that I visit, versus what’s going on in Istanbul or Cicada or, well, London, or not big cities. It feels like that single label doesn’t get it done, right.
Jay: Certainly. I think there’s a lot to explore in that space. Is this some of the territory that you guys are continuing to add to your investigations?
Genevieve: Gosh, such a good question. When we said we were going to build a new branch of engineering when you take a step back.
Jay: Just a small bit.
Genevieve: Yes, then I’m going to regrout the bathroom tiles. Establish a new branch of engineering, what would that actually look like, tactically? For me, it was about three pieces. Right. The first piece was, for better and for worse, I spent 20 years in Silicon Valley, right, so the notions of prototyping and iterative design feel to me like good research tools. One of the first things we thought about doing here was saying, well, okay, you need a new branch of engineering, can we teach it into existence? One option for us was actually going about the business of saying, well, what would it take to teach something that doesn’t exist yet? What would be your predicate pieces? What are the bits you would go steal and borrow and re-integrate from other places? What are the key writings and exemplars and pieces that seem critical? What are the building blocks, basically? One of the things we’ve been doing over the last year was we ran an experimental master’s program this year, a one-year-long, intensive masters. We did a competitive call for participants. We got about 173 applicants from all over the world. 50 percent female, 70 percent based in Australia. Disciplines literally from astronomy to zoology.
Jay: Zoology, I didn’t see that coming.
Genevieve: I know. People from completely different kinds of lived experiences. We ended up taking 16 of those students and they’ve been here for 2019 with us. We’re just in the process of recruiting next year’s cohort. For us, part of what we were trying to do there was thinking about what do you need to give people to help them become protagonists in this conversation? For us, that was about a couple of things, part of it was about how do you get people from a problem-solving mindset to a critical question-asking mindset? You and I, I remember, and a few of the rest of us used to joke about the thing about anthropology is that we ask a small question and then we answer. The ability to frame a good question is a prized skill in our discipline because it is often what unfolds that is useful, right. It doesn’t get you to an answer, but the asking of the question makes space for a whole series of possibilities. I’m a big believer in a well-framed question. I think a well-framed question is often a much more useful tool than a problem statement.
Part of what we spent this year teaching our students is how to ask a good question and about why that’s an important starting point, not how do I solve this problem? Critical question asking, as well as what are the kinds of technical building blocks you, or, well, technical building blocks in the other sense, right, because I think actually asking a good question is a technical building block. Then the other technical building blocks were around things like making sure that all of our students had been exposed to coding. I’m not about to make anyone into a computer scientist who wasn’t already one, but I know from my time at Intel, we were the most effective when we understood the technologies that we were talking to our colleagues about.
Jay: Most definitely, yes.
Genevieve: Not that we could build them, but we knew how to ask two clicks down. We’ve been exposing our students to the building blocks of these next-generation systems, so they can also do that kind of work. You know, everyone has had a bit of exposure to coding and data science and machine learning and how you build physical stuff and how you think about the interactions between the digital and the physical. Then we’ve done a lot of team-based flipped classroom hands-on learning, project-based. Lots of other voices in the classroom, so it’s not just us. I figure if you were to ask our 16 students, every single one of them had things they loved and every single one of them had things they hated.
I’m willing to bet they’re different for every single student because they all come from such different places. I mean, this cohort ranges in age from 25 – 60. I have someone in the cohort who didn’t finish Uni, someone who has a PhD. I have people who have backgrounds in computer science and physics and maths, as well as theatre and public policy and communications. We had people who were running the largest Maker space in Australia, running a theatre group, serving member of the Australian military, running their own company, working in an NGO, working for the government, working for a big American multinational. I mean, really diverse, which is completely fabulous.
Jay: Yes, that’s great. I’ve actually had my share of experience managing interdisciplinary teams and it can be very challenging. How did you go about helping them find a common language so that they’re interacting productively?
Genevieve: Listen, I always think that’s one of the – you’re absolutely right, it’s one of the conversations we never have, we talk a lot about diversity and not so much about what the consequences of that are in terms of how you have to create inclusive practices. I actually think in the short term, the more diverse your team, the higher your tolerance for a certain kind of conflict has to be. Not mean spirited conflict, but when everyone comes from a different starting place and a different point of view, common language is hard to accomplish and people don’t have an easy short-hand to fall back on, which means both as a leader and as a participant, you need to be mindful of that and be willing to imagine that some of the early pieces feel uncomfortable.
We were both really explicit in talking about that, of going, you all come from different places, some of this is going to be quite uncomfortable. And spending a lot of time making sure that we were really clear that everyone was on the same journey together. Then you have the things you need to do: create time for the introverts, create time for the extroverts, create space to sit in a corner and go, ah. Moments of people explicitly having to work out how to work together, and giving people the tools where we could.
The curriculum has got a lot of reflection exercises built into it. Both pair, group, and singular, because we really thought that stuff was important. I was really conscious of thinking through the fact that we were building a cohort, not a bunch of individuals, and so when we were recruiting, we were recruiting for 16 people who would add things to each other and not just, “the 16 objectively best”, I wanted 16 that were going to be bigger together than they were as individuals.
Jay: I think it helps a lot to lean on ethnographic skills, probably, for facilitating interactions so that people are comfortable contributing, and that’s part of what you do as an ethnographer. I think it helps in settings like this, as well.
Genevieve: Absolutely. As does being really clear about what we share that’s external to all of us. To me, one of the lovely things about coming home is to be back in a country that is aware of its indigenous roots. You know, every Australian event these days begins with an acknowledgement that we’re meeting on traditional lands, acknowledging the local people by name, acknowledging their elders and their leaders and their future emerging leaders. For me, always remembering I’m on Aboriginal country is a really important piece. One of the things we did when this cohort turned up was actually take everyone out onto country with one of the traditional Elders so that people were exposed to realising that they were all in a very particular place together.
Jay: What a nice level-setting activity.
Genevieve: Yes, it was. Well, one of the things that a Welcome to Country by Indigenous people often includes is reminding people that when they’re on Aboriginal country, and when they’re on this particular piece of country, that particular leader says, you are both welcome here and responsible while you are here. It was a nice way of giving them all the thing that they had in common, which is that they’re on the traditional lands here of the Ngunnawal people. Being reminded where they were in time and space for me felt like at least a starting place that they all had in common. I said we were building the institute doing three things, so one of them is having the students. The student piece is piece of the puzzle number one, and that piece was very much about, let’s try and teach it into existence.
The second piece was, well, let’s try and research it into existence. For me, that was about saying, where are places where there are emergent AI systems going to scale, where if we went and spent time with the people who were building them, we might learn something. For us, that was, how do we go look at people who have cyber-physical systems. We have a couple of different research projects going with scientific organisations that are building big technical systems using AI. We have projects with governments that are thinking about how you regulate them or buy these new technical systems. Then we have some relationships going with commercial enterprises that are implementing these systems and then a project looking at an arts organisation that’s using them for creativity.
For me, I wanted the full span of research stuff, civic and civil society stuff, art stuff, commercial stuff, because well, you know, I’m an anthropologist of a particular vintage. I always think the comparative is more useful than the singular. Those projects are unfolding. Of course, what we find in all of them is that there is not one application of AI, it looks really different in different places. There is a collection of different puzzles and problems that people encounter as they try to scale these things. They’re really different implementation areas.
What seems to run in common is a desire for different people inside that conversation. The good news for us was, as we started to talk to all of these different organisations, they really wanted the students we were producing, so yay. That piece is unfolding. Then the third strand of work we’ve been doing was really around what is the theoretical frame that you want to have here, or the ontological system, or the epistemology if you want to think about it that way. Like, what is the core set of questions?
Jay: Right, I am curious how you landed on autonomy, agency, and assurance as those, yes.
Genevieve: Those were my first three questions because I thought to myself, okay, if AI goes to scale and it gets off the computer that we’re recording this on and out of my phone and into this whole collection of objects that don’t yet have a meta label, so drones, autonomous vehicles, smart lifts, smart traffic lights. All of those things are for me instances of cyber-physical systems. If you think of that class of objects as all having things in common beyond the individual application basis, what questions might you want to ask of cyber-physical systems in order to think about how you would build them, regulate them, design them? For me, I started out with three questions. The first question was to say, is that system autonomous? That’s both a technical question, as in, does it operate absent pre-written rules? Is it a learning system, so it’s evolving its behaviour over time? Does it have to check back in before it can do certain types of things? Now, of course, the challenge there is that autonomous sounds like a human thing, and when you say a human is autonomous, you imagine they’re fully formed actors; technology systems aren’t like that. There are smart lift deployments running AI that are fully autonomous. They are not, however, sufficiently autonomous that they are leaving the building.
They still just go up and down, not out the front door for a beer or a coffee. They’re autonomous, but there are parameters, right? Autonomy for me seemed like a really critical question: was it able to act without being in a command and control infrastructure, which is what computing has been up until now, right; effectively, pretty much all computers are still command and control. The promise of AI is the promise of something that doesn’t have to wait to be asked or told, it will actually just act, so proactive. Is that then autonomous, and how is that enacted? That’s a question about what is the technical build. It’s also a question about what is the regulatory framework. It’s a question about what the hell are the consequences if you’re a human and the lift is now doing its own thing. Wow. A range of questions, number one.
A set of questions number two was, okay, if the object is autonomous, how much agency does it actually have? Humans are autonomous, but that doesn’t mean we get to do anything we damn well please. There are laws that we are governed by, there are tacit forms of social regulation, there are physical impediments and the laws of gravity and other things. We can be fully autonomous but we’re still not flying, like, without being in something that flies. There are things over which we have agency and control and things that we do not. I was interested again, to go back to the lifts, right, so now you’ve got these lifts that are determining what floors they need to be at based on previous forms of traffic, they are deciding where they want to go. There are probably some broader affordances.
The lifts are basically acting. What kind of agency do the lifts then have? Do they get to act in all circumstances? Are there overrides and controls? Of course, for lifts, there is an override that is driven by emergency services, who can determine to bring all of the lifts to the ground in the case of a fire, who can commandeer particular cars. There’s all that sort of stuff. What are going to be the limits on these autonomous actors, and are they set locally in software or hardware? Are they external? Do they live on the network? Who gets to determine when they’re implemented or not, and how?
Think about an early proto-example: Teslas in the state of Florida when that last range of hurricanes came through. There was a need to evacuate people out of Florida, so Tesla overrode the programming in the individual vehicles that were geofenced and extended the battery life by 12 hours. They didn’t ask anyone whether they wanted to have that happen, they didn’t seek anyone’s permission, they just did a down-the-wire fix. Now, there are some really interesting questions that that raises, both at law, in commerce, and technically, about what it means to have an object that someone else can change. How you implement that both technically and legally is fascinating. Agency is a big set of questions as far as I’m concerned. Those two questions together, the questions of are these systems autonomous and are these systems agent-ful, or in other words, what are the controls and limits on them, raise this third bundle of questions about what we call assurance here, because unfolding that list includes risk, trust, liability, accessibility, explicability, manageability, ease of use, and ethics.
There’s a whole lot of things you then have to think about in terms of, okay, if this object is autonomous and has some degree of agency, how do we think about risk? Where does that sit? How do we think about liability? How do we think about trust and privacy? How do we think about security? How do we think about explicability because that’s a demand at law by the GDPR bundle of legislation in Europe? How do we think about ease of use? How do we think about ethics? I want to be clear that I think ethics is part of a whole bundle of conversations, not just a conversation on its own. It’s not just about the ethics of lifts, it’s also about who are they explicable to and how do we think about trust? How do we think about privacy and how do we think about liability and risk?
Jay: It spreads across all three of these, I would imagine, right?
Genevieve: Absolutely. A big set of questions there. Those were the three I had initially. I was like, I think it’s those ones. Then two years in, I’ve added three more questions because I thought there were three more things that became clear. Happily, they are Is, which gets me to the three As and the three Is. That’s all good. The first one of those Is is around indicators. How do you know it’s a good system or a bad system? Most previous large technical systems that we have had to manage, computers, electricity, production lines, steam engines, trains, the predominant indicators we cared about were about efficiencies and productivities. Time and motion, mostly. A little bit of safety. Clearly, in this emergent area, we’ve got safety as a principal indicator, but we’re still working on these notions about is it more efficient than the previous system, or more productive? I’m really interested there in a cybernetic question, which is one about the environment and about sustainability.
Jay: Systems thinking, yes.
Genevieve: Yes, well, it’s also been really striking. There’s been an emergent set of data over the last two years, looking at how much energy it takes to run AI. How much of the world’s energy budget is being spent on data farms? About ten per cent, and I bet that’s a floor and not a ceiling. If you look at how much energy it costs to run a single instantiation of machine learning, the MIT Technology Review did some stuff on this about six months ago and is starting to suggest that certain algorithms are more carbon-intensive than cars. The whole production and running of a car, not just dealing with a day’s worth of driving.
Genevieve: I know, you then start to go, okay. All I keep thinking about, perversely enough, is James Watt and the steam engine and when he optimised the thing. I think to myself if someone had said to Watt way back then, “Listen, mate, you’re building this steam engine, it’s going to be great, it’s going to change the whole world, but could you think a little bit about fuel efficiency? Because whilst what you’re doing makes sense at the moment, we’re going to chop down every tree in Europe and we’re going to fill these things full of pitch coal and we’re going to pollute the entire planet and it’s going to have a consequence. Mate, could you make a different decision?” Maybe he would have, maybe he wouldn’t have, I don’t know. I think of that moment now and I wonder, who is driving the conversation around next-generation technical systems, where we say, we can’t afford this as a planet, so how are you going to architect that to be a low energy algorithm? How are you going to architect that to be a sustainable machine-learning technique?
Jay: It’s a different conversation than externalities that you see in typical sustainability literature, right, which are these accidental unintended consequences in many cases. These are pretty direct.
Genevieve: Yes, these are intended consequences.
Genevieve: Or possibly not intended consequences, but like, consequences. I wonder about those. For me, the conversation about indicators feels very now. The second of the Is is one around interfaces. You and I spent a lot of time talking about user interfaces and user experiences back in the day. Here, I’m interested in thinking about what’s the emergent grammar of these systems? The first wave of them will be, for the most part, things we have lived with for decades that are now about to change behaviour and change state. I go back to my mild obsession with smart lifts. Most of us have used lifts or elevators before. We have a cultural muscle memory about how to use them. There are buttons on the outside, there are buttons on the inside. Up or down on the outside, granularity on the inside, it’s all good. This next generation of lifts tend not to have buttons on the inside because they’re running algorithms and so, you need to tell them everything they need to know before you get in the lift, so that they can do their calculation sequence. That means for a whole lot of people, you’re getting into a lift carriage that doesn’t have buttons at all, which feels like a sudden loss of control.
Jay: Major anxiety.
Genevieve: It is. I’ve watched people have major anxiety all the time. Because we’re trained to use lifts in a particular kind of way, and what the computation needs to make these lifts work turns out to be, well, out of sequence. To make these new lifts function, mostly what you’re doing is waving a badge at them and your badge is cued to a particular floor, or you’re pushing a floor and the lifts are basically optimising. The problem is when we travel in groups, which we frequently do, only one person pushes the button or waves their badge. The lift only thinks it has one person in it. The reality is it may have more, which creates interesting challenges for congestion, weight, time, and a bunch of other stuff. Humans are behaving the way they have behaved around lifts for, in some countries, a hundred years. Now, we need to understand that while the thing looks the same from the outside, it’s now behaving completely differently. How do you signal that, and for how long? They’ve just put in a smart light rail system in Canberra where I live, and the light rail has preference at the traffic lights, so it overrides the cars and pedestrians when it’s coming through. They put these signs up on the traffic lights that say: Traffic signal may vary.
Genevieve: That’s what they should be doing, but then you spend a lot of time trying to work out what that sign is really trying to say. I think to myself, how on earth are we going to mark the world up in this interim period, when objects that we know are behaving differently than the way they have always behaved, and how do you effectively re-socialise people, or change the way those objects behave, or both, in order to make that shift? There’s this really interesting one. I’ve got collections of the signage that we had when things were electrified, about how you taught people to behave differently because electricity was different than gas. There’s just this really interesting stuff for me about what that whole thing feels like, so the pieces of the interface feel hugely important.
Jay: Yes, I think grammar is the perfect analogy for it, as well.
Genevieve: Yes, and then there’s all the other kind of stuff about you and I have lived through the various moments where computer scientists and engineers have said to us, “Everyone is going to talk to computers.” And we were like, no, they’re not. If the entire world is talking to stuff, it just doesn’t work. It’s like loud and noisy and there’s a whole lot of stuff that you don’t want to say out loud, probably what you just typed into Google. Not so much. Interfaces and indicators for us are two of the big Is and then the third and last I for me is about intentionality. What is the intent of the system? How do we get clearer about what the consequences of that intent are, both in its intentional-ness and then its unintended-ness? The smart lifts are an interesting one, right?
The first instantiation I know of, the intention of the lifts was to save energy, and moving people around is a secondary condition. They’re designed to be energy saving over the arc of the building’s lifetime whilst moving people around. The energy-saving piece is the most important piece. People’s wait time is a second or third-order consequence of that. Getting clear about, if this is your intent, what are the second, third, and fourth-order consequences of that is a conversation about sequencing, but also a conversation about systems, that we need to be clearer and clearer about, if that makes sense? When I try to think about what are the foundational questions of this new branch of engineering, or what are its critical frameworks, it’s about those six questions. Will the system be autonomous? Will the system have agency? How will we manage assurance? What will be the indicators? What will be the interfaces? Who is determining and then managing the intentionality? Those feel like the right set of questions. Then you can start to say, well, clearly, if you want to address those questions, you are going to need people who have been trained in certain fields inside engineering and computer science, but also, well, clearly law, public policy, sociology, architecture, cultural geography. There are a lot of places where you’re going to want to pull key expertise from in order to establish this new framework.
Jay: Just quickly, are there any of those that have jumped out at you so far as being particularly challenging?
Genevieve: In terms of those questions?
Jay: In terms of those six?
Genevieve: The interesting thing about them is that on the first pass, each one of them seems really obvious. Then you discover that philosophers believe they own the question of autonomy and no one else should talk about it. Engineers think it’s really simple until you say to them, okay, like, Tesla’s version of autonomy is completely different than Volvo’s. Those are just two cars. They’ve architected them differently, they’ve secured them differently. That one turns out to appear to be really simple and then completely isn’t. There is this delightful semantic slippage, right, where you say autonomy in English and people hear sentient consciousness, like Frankenstein and the Terminator. You’re like, no.
Genevieve: The lift is still in the building, literally. The autonomy one feels like it should be straightforward and isn’t. The agency one, agency is one of those terms we understand out of sociology and anthropology; other people hear it meaning agents, so chatbots. You’re like, okay. That’s been an interesting one to get people’s heads around. You’re like, that’s really simple. No, it’s not. The assurance one, people want to talk about the ethics piece, they don’t really know how to think about the liability piece. I find that trade-off fascinating. On the intentionality one, everyone has gotten quite good at talking about AI; they haven’t gotten quite so good at talking about what is the problem it is trying to solve, or what are the questions you should ask before you even get to the problem. That one has been actually a really useful question for getting people to start to unpack what they think they’re up to in the first place.
Jay: And prioritisation, I would imagine, in there too.
Genevieve: Exactly. The interfaces one is emergent and I’m watching it unfold around me in real-time and it’s fascinating and I think the indicators one, people are just not used to thinking about the energy budget of technologies in quite that way. For most of us as consumers, it’s masked, right. You’re not paying for the cloud; it sits somewhere else. That’s been an interesting one where it catches everyone’s attention and then I think it is one of those ones where it feels a bit paralysing. They’ve all got very different engagements, but I found them to be – it’s been a useful framework. Every time we’ve unpacked it for someone who’s building a system or thinking about building a system, those turn out to be questions that resonate well. That’s always good.
Jay: Well, I know you have a limited amount of time to share with me, Genevieve, but it’s been great to understand a lot of more detail about what you’re doing at 3AI. I really appreciate your time, thank you for joining me.
Genevieve: It was my pleasure. It’s nice to get to talk to you again.
Jay: Yes. Just to leave the listeners with some kind of way to get in touch with you, if you’d like to share how they can connect with you or 3AI?
Genevieve: Easy as. You can find the 3A Institute on the interwebs, on the cybers. You just type in: 3ainstitute.org, that will get you to our website. We have a newsletter that we publish pretty regularly that gives you the goings-on of the place. You can also follow us on Twitter and LinkedIn at 3A Institute in both places, or you can find me on both of those places too, so on Twitter, I am @feraldata. That is my long-standing handle and joke with the universe. You can find me on LinkedIn too. We predominantly use those two social media platforms for keeping stuff up to date and sharing what we’re up to.
Jay: Great, thank you very much, Genevieve. I appreciate your time. I look forward to connecting with you in person sometime.
Genevieve: Me too.
Jay: All right. Bye. I hope you enjoyed listening to this episode as much as I enjoyed hosting it. You’ve been listening to Ethno Pod with Jay Hasbrouck. If you’d like to hear future episodes, you can subscribe on your favourite streaming service, or visit: Thisishcd.com. To reach me directly, you can follow me on Twitter @Jayhasbrouck. That’s H-A-S-B-R-O-U-C-K, or by visiting Ethnographicmind.com. Thank you for listening.