Sam Ladner is a sociologist who helps teams innovate, design, and learn. She is the author of Practical Ethnography: A Guide to Doing Ethnography in the Private Sector and Mixed Methods: A Short Guide to Applied Mixed Methods Research. She has worked on dozens of advanced software projects including Alexa, the Echo Look, Windows 10, Microsoft Office 2016, Cortana, and HoloLens. She currently works at Workday, an enterprise software company, as a Principal Researcher studying the future of work. She received her PhD in sociology from York University and lives in the Bay Area with her husband and cat.
Jay Hasbrouck 00:09
Hello, and welcome to EthnoPod on This Is HCD. My name is Jay Hasbrouck and I’ll be your host for this episode. I’m an anthropologist, strategist and author of the book Ethnographic Thinking: From Method to Mindset. On this episode, I’ve had the pleasure of talking with Sam Ladner. We’re going to talk with Sam a little bit today about her new book. Sam and I have known about each other for quite a long time and got to know each other a little better recently on a retreat, which was really cool. And she’s now in the process of promoting her new book. Welcome, Sam.
Sam Ladner 00:44
Thank you, Jay. It is such a pleasure to be here.
Cool. For those that don’t know you, I think it would be useful to have a little bit of an introduction about your background and your current role, that kind of thing.
Currently, I am a Principal Research Strategist at Workday, which is an enterprise software company in the Bay Area – specifically in the East Bay, which is, frankly, the only place in the Bay Area I want to live. So I was delighted to have this opportunity to come and work on the future of work here at Workday. It’s been quite a journey getting here. Prior to this, once upon a time, I actually was a tech journalist – if you could believe that, back in the 90s working at CNET Canada. Now defunct – well, CNET itself is pretty much defunct. But there was a journey between here and CNET Canada, and it’s long and circuitous, including a PhD in sociology, a stint at Microsoft, a stint at Amazon, running my own company for a while and writing my first book, Practical Ethnography, and then, of course, to today, my most recent book, Mixed Methods.
Great. Did you happen to know Jenny Cool from the 90s? She was also working in technology.
No, but I know of her. Yes.
Yeah. She’s written some interesting things about those early startup days. It was an interesting time in technology for sure.
Yeah, it was a very different time, because people genuinely thought, ‘Oh, I can’t wait till I cash out and I get my Beemer.’ People really thought that. I had a friend that I worked with. He got some unnamed stock, and really was such a bonehead about it. He immediately bought a Beemer, and he saved no money for taxes. So guess what happens? You know?
Yeah, that’s what you do. Right?
Yeah, that’s what you do. And I just thought, ‘Oh, yeah, that’s normal.’ And now I’m like, ‘What the-‘
So maybe we could touch a little bit on your new book and a little bit about it and what inspired you to write a second book?
That’s a great question. I think the inspiration came – well, I know it came from personal frustration, actually. I teach along the way. Currently, I’m also teaching with the Ethnographic Praxis in Industry Conference, EPIC. They have an ongoing teaching capability that I participate in, and I offer online classes through them. And so I had been doing this already, and I realized that there really wasn’t a good text that married the qualitative and the quantitative in a way that was approachable to non-researchers.
There’s lots of mixed methods books out there, you know – you probably read Creswell in grad school like I did. I found it – actually, I got a lot of value out of Creswell, and I still actually have a dog-eared copy. But I found that in the applied setting, the dilemmas were quite different. The way that you get set up for success as a mixed method researcher in the applied setting is quite different than it is in the academic setting, which is where Creswell really shines. So I decided I needed to – I had kind of resisted for some time because I’d kind of cobbled together materials that I would share – through PowerPoint decks or documents, blog posts, even. And I thought, this just isn’t adequate. I need to get a corpus of information in a single place, and I need it to correspond to – frankly, selfishly – the things that I wanted to teach. So that’s where it came from.
And then who do you see as your primary audience? Who are your readers?
There’s a couple of different types of readers. The first reader really is that post-academic ‘alt-ac’ reader. This is the person that probably trained in anthropology, perhaps, maybe sociology, or maybe even psychology, but didn’t get adequate mixed methods training. On top of that, this person is now also working in an applied setting, so doesn’t really know how to deploy mixed methods in a corporate, third sector, not-for-profit or government setting. So that’s one of the main personas, if you will, one of the main readers.
The second is – and this is probably the more important one I think I was trying to reach – a design researcher who works maybe in UX research in technology, or perhaps in innovation zones, like a design studio of some sort, and doesn’t actually have any formal methodology training. And so doesn’t really know how to navigate this challenge between qual and quant in the applied setting, because they haven’t actually studied it. They don’t know the fundamental differences. So they find themselves defending the wrong things or fighting the wrong fights – diving too deep into quant, for example, or completely disregarding quant and just doing the qual. And so that person I really wanted to reach, because I found that that’s the kind of person that gets attracted to the courses I tend to teach.
Right, right. And in the book, I know you talk a little bit about the quant/qual divide. How have you seen that play out in applied settings specifically?
I was actually literally just having this conversation today where I was talking to a researcher colleague, and we were lamenting this problem that we often see: people talk about data, and you realize that they’re using ‘data’ to describe numbers only. And it drives me insane, because data are not just numbers. I mean, if that were the case, we would know nothing about the natural world. You know, what kind of mushroom is that? I have no idea. 4.38. You have no clue, right? So that’s kind of that divide. People mistakenly kind of choose their tribe. And they go to qual and they say, ‘I need emotion and metaphor, and I need to understand deeply. And I don’t like all that number stuff.’ Wow, that’s really an interesting perspective to have. Because if I were to tell you, yes, that’s a particular type of mushroom – but you have no idea how many of them there are, or how prevalent they are compared to other types of mushrooms – you have no idea. So you really don’t have good knowledge. One or the other is not sufficient, frankly.
Yeah, that’s a great way to put it. I think that there’s this black box about qualitative research for a lot of people that have not been trained in qual. And so you’re right. They come to qual because some of their ‘why’ questions are clearly what they’re seeking answers to, but they end up sort of overextending, or thinking that qual is really going to become something entirely different and not systematic, when, in fact, it is.
I get a bee in my bonnet about that kind of thing, too. It’s like, ‘Oh, we’re doing something mysterious’ or ‘Oh, it’s an art, not a science’. That’s the one that makes me go, ‘I’m going to stab myself in the eye if I hear that one more time.’ Talking about qualitative research. It’s true, because it is a science. Science is the systematic investigation and development of knowledge. That’s what it is.
And if you say that what you do is research and then you pretend it is not a systematic investigation of knowledge, then what are you actually doing? Are you making stuff up? Systematic doesn’t mean numerical, it doesn’t mean quantitative. It means you have a procedure. That’s all. A repeatable-
Yeah, and you follow it.
And a knowable – and you follow it, exactly. So yeah, the ‘art, not science, of qual’ school is not really my jam.
I feel like there’s an importance to how you convey those findings and a little bit of storytelling goes a long way there, but that’s not the same thing as poetry.
Hundred percent, and it’s interesting. I used to write a lot of poetry when I was younger, and poetry has this interesting juxtaposition because, in order to write poetry, you actually have to know a lot about the stress points of language. And to make it good, you need to be able to push and pull at those stress points without breaking it. So it’s got this kind of elasticity. There’s an elasticity in poetry that does not exist in qualitative research – that elasticity is much, much tighter in qualitative research. Your data are going to break if you start making stuff up; they’re going to steer you wrong if you’re not systematic about it. So poetry, as much as I love it, is not a good model for qualitative research.
It’s true. I’m sure you’ve experienced this, too, when someone new comes into the field. Often, if you’re doing field research as part of your qual research, they start to wonder, ‘Well, didn’t we just ask the person before that same question?’ And the point is, yes, we did. What we’re doing here is to discern where are the patterns, where are the commonalities and differences, and so it’s got to be systematic in that way.
Marsh says this about survey research – and it’s kind of an unpopular opinion. She says that surveys are, it turns out, the most efficient way of asking a lot of people the same thing. That’s true. It is the most efficient way of asking a lot of people the same thing, but it doesn’t mean it’s the best way. So when you ask the same question in an interview, you have perhaps some elasticity, if we can use that term – you have more than you would have in a survey. But you also get that repeatability, that reproducibility, you get that comparative rigor, you get that deeper insight. You as the researcher, unfortunately, just have to work harder to gather those insights into a form that can be interpreted. We tend to outsource that. In surveys, we outsource that to – as they say in survey research – the respondent.
Exactly. There’s a shifting of the weight of how you’re managing the data.
Yeah. And when you think about it, that’s kind of disrespectful, isn’t it? To expect your ‘respondent’ to do all the work for you – like, really?
That is one of the things that I find working with client researchers. It’s actually a nice balance. It can be a really nice balance if you’ve got a collaborative relationship established, because I think there are times when, if your focus is really just on the data side of quant, sometimes you can lose sight of how much onus you’re placing on the respondent. You’re often asking too much of them.
Because probably, as you say, your focus is elsewhere, because you probably have quite a bit of work to do. It’s just very different kinds of work. And you have to accept the fact that you can’t delete all that effort that goes into developing the data set – you can only shift it. It’s easy. I’ve been guilty of this. I confess, I actually like doing surveys quite a bit. But I find when you do more advanced survey work, you end up focusing on that beautiful empathy database that you just want to populate – like, everything goes right in there, and it’s so cool when it happens. And as a qual researcher, it’s such a delight to see it all fall into place. And then you just press a couple buttons, doo doo doo. And next thing you know, you’re done. It’s wonderful.
Yeah, it’s a lot cleaner, that’s for sure.
Hundred percent.
Or it can be. It feels cleaner. It’s an illusion, but it feels quite good.
So I want to jump in maybe to a little bit of the detail about the book, because I love it. I had a chance to read it, and you definitely give quite a bit of practical knowledge. So I wanted to dive in for those who are interested in those sections and help people get a sense of where the value is, particularly around getting traction with interdisciplinary teams. You and I both work in interdisciplinary teams, and we know that that’s an ongoing challenge. And you have some things to say in the book that I think are really valuable, particularly around mixed methods, of course, so I wondered if you could share some of those.
Some of the insights around how to work with stakeholders?
Right. Particularly people who maybe don’t know anything about research, or they know only about one, either qual or quant. How do you get that kind of traction that helps that team really understand where you’re going and the choices that you make as a researcher?
It’s a very challenging thing to do, because you can’t really give people a crash course in epistemology when you’re trying to work with them on a day-to-day basis. It’s a little too hard. So instead, you kind of have to work as the secret agent and give them a set of trade-offs. And understand that – nobody likes to hear this – but you can’t have it all, right? There are always going to be trade-offs. I mention in the book that the basic trade-off you’re looking at is this: on the one hand, you want coherence in your data, and you want to focus on the participant. And this is particularly true for design-based research. You want those things to shine through in the data that you collect. On the other hand, you also tend to want scale, right? The prevalence – how big or little is this thing. And you also want causation – does this cause that. These are entirely reasonable things to expect from data. But it’s unreasonable to expect all of those things from a single data set. So that basically is the crash course in epistemology, although you don’t have to tell everybody that when you start. You just basically say, ‘Here are the trade-offs – which is more important in this case?’
Having an understanding of where you are in the product cycle, which would be appropriate for me or anybody else working in technology products or product design, is a big part of recommending the right course of action. When you’re early, you don’t know a lot. You don’t have any sense of what people – your customers, your users, your stakeholders – want. You don’t know what they need. You need to really embrace them as the center of what you’re doing. So that’s why qualitative is great at the beginning, because you get this really deep focus on your participants. They are driving everything. You’re getting contextual insight. It’s rich, it’s thick. Everything that they think is important is what becomes important to the method, to the data. And you also use the interpretive method to provide a coherence to what looks on the surface to be very, very messy data.
What happens when you don’t have the experience mixing methods is, as a researcher, you might take your cue from your stakeholders, where they’re going to say, ‘We want to know is this good or does this make people want more’ or whatever. They’re looking for levers. Implicitly, this concept of causation is embedded in everything they’re really asking for. It’s not because they’re bad people that they’re asking for the wrong thing at the wrong time. It’s just that they don’t have the imaginative space to understand how data can be developed and created. So it’s your job as the researcher to introduce this concept of the trade-offs. And then you can mix and match to some degree. You can always add more on the quantitative side if you need to, or add more on the qualitative side if you need to. But what you can’t do is get blood from a stone. What I mean by that is the quantification of qual data or, vice versa, the qualification of quant data. This is a fool’s errand. You think you’re actually going to solve for X. ‘I’ve got this magical algebraic function. Somehow I’ve come up with this Holy Grail.’ No. Doesn’t happen. You can’t actually have it.
And it actually feels wrong. Either of those. I’ve had people ask for that before, and just thinking it through made my brain hurt. Especially when people want to quantify qualitative insights. ‘Okay, well, we can run a separate survey, that’s one option.’
Yeah, it feels wrong because both sides aren’t satisfied. You’re not going to get robust causation or scale. And yet, at the same time, you’re not really focusing on your participants, and you’re not really providing coherence. You’re a mile wide and an inch deep. That’s why it feels so dissatisfying.
And if we flip that, I’d like to think a little bit about another concept that you introduced in the book called psychological safety and engendering a climate of psychological safety. The reason I’m saying ‘flip it’ is because we’re talking about how as researchers you might feel uncomfortable with requests like that, but you also talk about helping an interdisciplinary team feel comfortable. Can you talk a little bit about how that’s worked well for you?
Yeah, getting an interdisciplinary team feeling safe is a really, really difficult thing to do. I do mention it a little bit in the book – I have a chapter on this – but if you haven’t read some of the literature on psychological safety, I really encourage you to do that, because you’ll get a deep understanding of the context in which you’re doing your work. Specifically, one thing that I didn’t actually know about was the concept of a certain type of organization: volatile, uncertain, complex and ambiguous – VUCA. In these particular types of organizations – which, by the way, I’m sure will sound familiar to pretty much everybody listening to this – there’s tons of complexity and ambiguity, it’s all changing all the time, and I just don’t know what’s going on. Well, it turns out, if you have this foundational layer, the social infrastructure where people are able to talk honestly about challenges or problems that they’ve had, then you’re actually able to cope much, much better with difficult things – like findings that say, ‘Hey, your baby is ugly, stakeholder’, which happens often. I mean, we all know this, right? This is the biggest problem that researchers end up having. Their work becomes somehow either the referee or the condemnation, and all we’re trying to do is-
The wet blanket.
The wet blanket in the room. Yeah. So understanding that you’re heading into this kind of a situation before you even start setting it up. You’re not actually solving a problem if you don’t address the contextual foundations, which is essentially what organizational sociologists talk about as double-loop learning.
When you do research, oftentimes you’re asked to do kind of this really tiny, tiny loop of test, iterate, test, iterate, test, iterate, test, iterate. ‘I’m going to move the needle five millimeters to the left, and then I’m going to move it back seven’, etc, etc. If you find yourself kind of in that ongoing loop, there’s a good chance that you’re not doing double-loop learning, which is looking at the why we made this decision in the first place. What led us to this continual inability to face the fact that this checkout procedure is completely broken for our users? Why are we not able to face this? What is it that we’ve had ongoingly? As a researcher, your job – up to a point, you’re not the only hero in the organization – is to illuminate that fact that we have failed to understand this, we have not approached this. It’s too scary for us, we haven’t had a chance to solve it – not because we didn’t want to, it’s because we didn’t come to grips with it. So I talk a lot about that kind of context, the shared understandings that are verboten, and those that become malleable and approachable. It makes it so much easier to do all sorts of research in a context like that.
Yeah, I think – I agree one hundred percent. I think it’s really important to have that understanding of the organization. I talk about it, of course, in terms of ethnography. In many ways, you should be looking at your team as a field site. You should be thinking about them from an ethnographic perspective so you can understand where those gaps occurred, where values didn’t align and how that plays out in terms of product development, because it has an impact. It’s there.
Yes. It’s always going to be there.
Yeah. And sometimes I think it’s actually useful to have the tools or the language of sociology or anthropology to express some of that, so it doesn’t sound like you’re placing blame anywhere.
Yeah, exactly. It’s remarkable to me, and I’m sure you have this experience, too, because it’s been a long time since you were an ingenue in an organization. I barely remember it myself. I realized that people genuinely don’t have the language or concepts to understand that organizations have predictable patterns, that culture is not an unknown entity – that you can actually measure it, even. I forget that a lot of people don’t realize that. They kind of think it’s this black box-y thing that can never be truly understood. And that’s just because they haven’t had the opportunity to go, ‘You know what, there’s lots of people that have studied this for over a century, and it’s very predictable, and it’s got dimensions and qualities that I can predict and understand.’ I kind of take that for granted sometimes, because when I’m in a context where I see something like single-loop learning – you know, the test-iterate cycle – playing out, I go, ‘Okay, I know what’s going on here.’ And I just make peace with it really quickly. I’m not perplexed by it. I have language for it. ‘Oh, okay. So if that’s what’s happening, then I need to kind of consider these factors.’ I think it really helps people to understand that you’re not stuck in this black hole that has no insight. There’s many decades of insight that can help you.
Yeah. You may have used this article in the past, and you have to use it very judiciously, but there’s an article called ‘Nacirema’.
I think you would like it.
Tell me about it.
The idea is that it’s a description of a culture and it’s written all in this anthro speak.
Oh, wait, no, I do know this one. I recall it vaguely. Yes. Oh, yeah, I do know this one.
So it’s really about Americans, but it’s a way of exposing the fact that, if you’re completely immersed in a culture, you’re often blind to it, right? So this is a way of sort of flipping it and it exoticizes Americans – whatever that means. But it’s a good way of showing – a lot of people assume they’re going to work and that’s not a culture. But often, there are lots of cultural phenomena that are influencing the decisions that are being made – not often, I think, I would say always – in terms of the organization and how it functions. It’s a good tool.
Yes, if you do say so yourself. We take this for granted, I think, if we’ve been trained to approach something that seems amorphous like culture. It seems like it has no shape. It has no dimension, you can’t get your arms around it. And we just take it for granted because we’ve been trained to dismiss that belief, because we’ve read all these articles and we’ve read all these books. If anything, I think people should be heartened to know that there’s this whole corpus of insight out there that could help them, that is way more helpful than they thought. And it’s completely well developed and it’s accessible and you can learn it. And it’s not rocket science. It’s right there for you.
Yeah, definitely. Even if you don’t want to take a deep dive, there are definitely some principles that could help.
So you talked about a couple other things that I want to touch on in the book. There are two concepts. One you mentioned, luxuriating in the customer. I love those words. I want to get a sense of how it is that you host that process with your teams. And the other thing we can touch on perhaps after that is thinking about creating artifacts. But let’s talk first about this idea of luxuriating in the customer.
Well, it is kind of a luxury, right? And the reason I use the word ‘luxury’ is because most of the time, when we are sitting around trying to figure out what to do with our lives, when it comes to creating things, we believe that we have scarcity. We don’t come from this perspective of abundance. And we have this deep fear that if we just slow down a little bit – I don’t know, what do we think is going to happen? We’re going to think, ‘Oh my god, I’ll never get back on track. How am I going to answer all these Slack messages?’ Maybe that’s what we’re thinking, I’m not sure. So I use the word ‘luxuriate’ because it does feel like it’s a little bit of an indulgence. It feels like the thing that you’re really not supposed to do, but it feels really, really good to do. And whatever that is for you – something that you would normally spend more money on but not very often, or more time on but very rarely. That’s kind of where luxuriating in the customer kind of comes from.
And when you do that yourself, you have to fight off a lot of external pressures to do it. So as the researcher – like I said, I’m not the only hero in the organization, but I am a particular type of hero. And my hero journey in the organization is to carve out time and space and give people permission to ‘luxuriate’ in the customer. I tell them: in these three hours, or even this one hour, we are doing nothing except for understanding this person. That’s it. That’s all we’re doing. Understanding this group of people. Understand who they are, what they like, what they don’t like. It doesn’t have to go anywhere, and there’s no output, and there’s no exam at the end. You just have to luxuriate in the time that you spend with this person. And I love giving them that permission. I don’t have a magic wand, and I certainly have no magical powers, but it feels magical when I say that. People just relax, they’re like, ‘Oh, thank goodness, I don’t have to check my phone, and I don’t have to worry about how am I going to do this and put it into my performance review’ and so on. No, you don’t have to do any of that.
Now, of course, when you take a step back and you look at what I’m doing, of course that’s going to be in your performance review. Of course it is. If you don’t understand who you’re trying to serve, you are not going to perform well. In some ways, it’s hilarious that I’m telling these people it’s a luxury of affluence to do what is really, 100 percent, their job, but I just give them permission. And nobody ever says, ‘I shouldn’t do that.’ They all want to do that, and I carve out the space and time for them. And when they’re done, they of course realize: ‘Oh, this is actually one of the most useful things I’ve ever done. Of course this was wonderful. I just spent all this time doing open-ended, creative encountering of my user, my customer. What a wonderful thing. Now I’m better equipped, I’m going to be way faster when I make new decisions now.’
Totally. And I love the way you frame it as giving them permission, rather than a set of rules of this is what you must do. I’m sure you must see deeper levels of engagement because they’re in this sort of reflection state, rather than ‘Oh my God, I’ve got to learn this thing.’
Well, it’s funny, because they don’t really believe me a lot of times. I do have to push them. And there’s kind of an interesting sweet spot when their leader – it could be their VP or the CEO – is in the room. I’ve even seen this multiple times, where people kind of check with the senior-most person, give them a side eye like, ‘Is she for real?’ or ‘Is this a trap?’ So you should probably make sure that that senior person knows what you’re going to say, because if they freak out and flip the table over when you say that, it’s probably not a good situation.
Good advice. Yeah, that’s good advice.
To be fair, I’ve never had that happen. It’s good advice to prepare the field for that. But they look at the CEO or the VP and they take their cue from that person. And the person’s like, yep, nodding. ‘Okay, three hours, people. That’s what we got.’ And everyone just relaxes, and they’re like, ‘Yeah, let’s go for it.’
Yeah, I love it. So one of the other things that you touch on that I really found was useful and insightful was the value of creating artifacts, because I found the same thing in my work. I think it really gives you that other dimension. I’d like to hear a little bit about how you’ve created them, how you use them, and maybe some examples of where they’ve done well and maybe not so well.
Well, where do I start? I have to say, I think to this day, my most successful artifact that I ever created was a graphic novel. And I didn’t even think that that was going to do anything. But boy did that thing have legs. It was actually born out of constraints. As any good designer will tell you, constraints are actually a good thing for the creative process. I was running my own research company. I had several people working with me on this big project for a pharma client. I was unable – and it’s completely worthwhile – to record patient-physician interactions. I was able to witness them and take field notes. I got proper permissions, proper privacy protocols etc, so I was able to do that. But I was not able to do anything, really – no photographs, no video, no audio, nothing. So what do you do in a situation like that? Well, you can still tell stories, right? The core is the stories. So we worked with an illustrator, who was great because she was really good at setting mood and tone with – I don’t even know what you call it to be honest – the framing within the panels of a graphic novel. They have kind of a temporal dimension, they have tension building, that perspective that they have within each panel. It’s designed specifically to develop a story and move it along. She was great at that, so we were able to tell these wonderful, rich stories with just images that were sketched. I mean, they were colored and they looked good. And we printed it. It sold out, as it were. We got a second edition. They wanted more, and I thought, ‘Wow, that was really, really great.’ So that one was really successful because it suited the actual audience. The audience that we were dealing with were not the kind of people who were going to read reports. It wasn’t so much that they were just busy – although they were busy – it was also that that wasn’t something that really would be attractive to them. And we knew that. 
We knew that I could write up a memo, but nobody’s going to read this memo. I could do a PowerPoint, but they’re just going to flip through it as they’re doing 18 other things.
Yeah. Everyone’s busy, right?
Everyone’s busy. Giving them a physical artifact that looks special and unique was inviting and creative and interesting. I would say that’s on the greatest hits list. But then, anything that has the core of a story within it has the elements, has the ingredients, to be a greatest hit as well. It’s just a story – it has a beginning, a middle and an end, and you build tension, you resolve conflicts, you tell poignant resolutions or lack of resolutions, you leave people hanging. You want them to understand the emotional landscape of this person through the story. So that’s essentially underneath all successful artifacts. And you can do it in many ways; it doesn’t just have to be video. Our friend Bruno Moynie, the ethnographic filmmaker, always disagrees with me about this, because he’s a filmmaker, and his films are beautiful and haunting and wonderful. I wish I had his talent. I don’t. I also wish I had his camera, to be honest, but I also don’t have that, so I can’t do the kinds of stuff that Bruno does. But I can do hacky versions of video. The core is the story. So that’s on the successful side, I would say. The artifacts that have fallen flat for me in my past have been very – what would you call these ones? These are the greatest flops, I suppose. These are the ones that have been out of step with the stakeholder needs.
I’m a human being. I get prideful about certain things. I make decisions for my own purposes, not for other people’s. I get selfish or scared or whatever. I’m just like everybody else. I’ve made some misjudgments on outputs and artifacts before. Case in point: I remember, back when I was at Microsoft, I was having challenges convincing some of my stakeholders to pay attention to what I was doing. So I thought it was just a rigor problem, right? I thought it was a comprehensiveness problem. God help the former academic who believes that comprehensiveness is–
It rarely is, Sam.
Isn’t that absurd, if you think about it? How many footnotes did you put in? Nobody asks that anymore. Who asks that? Not even academics ask that of other academics anymore. Occasionally you might see ‘and you didn’t cite me’, but you never get this. Anyway. So I thought, ‘I’m going to double down on the comprehensiveness.’ The level of rigor I put into this report was fantastic. It was a sight to behold. And it didn’t do anything. Nobody read it. Nobody cared. Actually, that’s not true. It did at least give the impression that I had worked hard. But box-ticking is not really my interest in life. I don’t want people to tick a box next to my name that says ‘worked hard’, ‘meant well’. No, I don’t want that. I would say that was one example where I really missed the mark. I was trying to earn some respect in a way that I thought would work, and it turned out to – I mean, it didn’t get me less respect, I’ll say, but the payback for the amount of effort I put in was minuscule.
Yeah, it’s definitely a good idea to tailor those outputs, for sure. I mean, even if it’s just a two-by-two – is this what’s going to resonate now, for this group and their current needs, right?
You put it in a very tangible example for a lot of people who work in applied settings. ‘Just a two-by-two’ – as if that’s a small little artifact. And there are researchers listening to this right now who think it doesn’t represent enough work. But it turns out that the conceptual work of boiling insight down into something simple – not simplistic, but simple – is huge. You’ve taken away all of the complexity and distilled it into a crisp, abstract, understandable diagrammatic form. That is the pinnacle of human intellectual achievement, and we underestimate how important it is. It is very important. So if there’s a researcher listening to this and wondering, ‘Is just my two-by-two enough?’ Yes. If you feel like you are conveying conceptual clarity to your stakeholders, 100 percent, you’re doing your job.
Yeah, if it’s an effective artifact that makes sense for that audience at that time, you’ve-
That’s a win. That’s 100 percent a win.
That is the win. And it’s nice to see sometimes. I love the story about the graphic novel, because I’m always surprised when things like that go viral, when something gets legs of its own. It’s always inspiring as a researcher to see that the impact travels. I’m sure you felt the same way there. It’s like, ‘Oh, wow.’
Yeah, exactly. But it’s a good surprise for sure.
Yeah. Good surprise. Yeah, for sure.
You also talk about a concept that I found really interesting in the book called data exhaust, and you talked about it in the context of data scientists and helping formulate research questions. But I wonder if you could expand on that concept of data exhaust, because I think a lot more people now are working with data scientists. And it’s become sort of the third piece of the research puzzle now.
What are the first two?
Well, I mean, if you think of qual and quant, and then – well, data science, you could put in a lot of different categories. I suppose it is perhaps more quant-y, but there’s certainly a great bit of interpretation that goes into that field. So I just would like to hear more about how you’re working in that space, especially around data exhaust.
I came across that concept some time ago. Originally it was coined to describe the discovery that TV listings – the record of what was happening at certain times – turned out to have value. That’s why ‘exhaust’ is like gas exhaust, where you recapture something: it had no value as a waste product, but if you recapture it, it has value. That was a very early example of people figuring out that there was value in capturing data around TV listings, and who was watching what when. Now, of course, almost everything we interact with leaves behind this trail of data. And a lot of people who aren’t really trained in research methodology – so they don’t really know the challenges of designing data, because data do need to be designed, right? – don’t realize that the primary thing in designing data, particularly quantitative, deductive data, is falsifiability. You need to be able to prove something wrong for causation’s sake. So when you design an experiment, for example – and people who have taken research methodology are well familiar with this concept – you fail to reject the null hypothesis, meaning you have failed to falsify something. So data exhaust is not data. It’s this automated, unstructured, non-falsifiable information that is generated from any number of devices and interactions that individuals have. It’s not intentional. It’s not designed with purpose. It very rarely lends itself easily to falsifiability. And we underestimate this. We’re like, ‘Oh, we can just track what our users do, no problem.’ We know everything, you know.
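[Editor’s aside: for readers less familiar with “failing to reject the null hypothesis”, here is a minimal sketch with invented numbers – a simple permutation test comparing two hypothetical groups from a designed experiment. The scenario, figures, and threshold are all illustrative assumptions, not from the conversation.]

```python
import random
import statistics

# Hypothetical example: did a redesigned onboarding flow change average
# session length? H0 (the null hypothesis): it did not. Designed data
# like this lets us try to falsify H0; raw "data exhaust" usually does not.
random.seed(42)

control = [11.2, 9.8, 10.5, 12.1, 10.9, 11.4, 9.9, 10.7]   # minutes, old flow
variant = [11.9, 10.4, 12.3, 11.1, 10.8, 12.6, 11.5, 10.2]  # minutes, new flow

observed = statistics.mean(variant) - statistics.mean(control)

# Permutation test: shuffle the group labels many times and count how often
# a difference at least as large as the observed one appears by chance alone.
pooled = control + variant
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:len(control)]) - statistics.mean(pooled[len(control):])
    if abs(diff) >= abs(observed):
        extreme += 1

p_value = extreme / trials
if p_value >= 0.05:
    print(f"p = {p_value:.3f}: we fail to reject the null hypothesis")
else:
    print(f"p = {p_value:.3f}: we reject the null hypothesis")
```

With small, noisy samples like these, the test typically fails to reject H0 – which is exactly the kind of disciplined conclusion that undesigned exhaust data cannot support.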
Pull up the numbers.
Yeah, pull up the numbers. Well, what are these numbers? Where did they come from? What do they refer to? There are questions you ask of numbers. Data exhaust is not designed to answer any question at all. It’s just the scraps. It’s the compost heap. Sure, you might be able to make something out of the compost heap, but it’s going to take some work. ‘Data janitor’ work is a new term, but it’s a real thing – people are spending half their time cleaning the data. So we have this gold-rush mentality about data exhaust today and, I don’t know about you, but personally I think we’re on the downward slope of that gold rush. People are beginning to realize that it’s not as easily captured or as exciting as it seemed, and that it’s actually a lot more work to do something with it – particularly, of course, because we’re now using it for algorithmic training, and we’ve realized it has all sorts of garbage-y compost-heap stuff in it. But, nevertheless, it does represent an opportunity. So when you have data exhaust, and you have access to work alongside data scientists, for example, it’s a great opportunity to work collaboratively – on the scale and causation side, and on the participant-focus and coherence side. It’s a wonderful opportunity to marry the two if you have (a) access to people who are able to collect it and (b) a way to structure it into being falsifiable.
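[Editor’s aside: a hypothetical sketch of what that “data janitor” work looks like in practice. The log format, field names, and cleaning rules here are all invented for illustration – turning raw, inconsistent exhaust into structured records you could actually ask questions of.]

```python
import re
from datetime import datetime

# Hypothetical "data exhaust": raw event-log lines, some malformed,
# duplicated, or missing fields -- typical data-janitor material.
raw_lines = [
    "2024-01-15T09:02:11|user=a17|event=open_doc",
    "2024-01-15T09:02:11|user=a17|event=open_doc",  # exact duplicate
    "2024-01-15T09:05:43|user=b03|event=save",
    "not-a-timestamp|user=??|event=",               # garbage line
    "2024-01-15T09:06:02|user=b03",                 # missing event field
]

LINE_RE = re.compile(r"^(?P<ts>[\d\-T:]+)\|user=(?P<user>\w+)\|event=(?P<event>\w+)$")

def clean(lines):
    """Turn raw exhaust into structured, deduplicated records."""
    seen = set()
    records = []
    for line in lines:
        m = LINE_RE.match(line)
        if not m:
            continue  # discard lines that don't fit the expected schema
        try:
            ts = datetime.fromisoformat(m.group("ts"))
        except ValueError:
            continue  # discard lines whose timestamp won't parse
        key = (ts, m.group("user"), m.group("event"))
        if key in seen:
            continue  # drop exact duplicates
        seen.add(key)
        records.append({"ts": ts, "user": m.group("user"), "event": m.group("event")})
    return records

records = clean(raw_lines)
print(len(records))  # 2 usable records survive out of 5 raw lines
```

Even this toy version shows the point: most of the effort goes into deciding what counts as valid data before any analysis can start.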
Right. I almost think of it as a triad: data science can be a helpful third leg to pair with quant and qual. As you mentioned, it’s got to be approached in a structured and systematic way, of course, but you’re right. We’re all swimming in data, all the time, but its value is another question.
Starved for insight.
Yeah. I don’t know about you, but I think we’re actually starting to bleed out that belief now. I think people are realizing that that Shangri-La – data everywhere equals insight everywhere – was a mirage that did not exist. So I think we’re getting there now. But if you can work with your data scientist colleagues and structure data in such a way that it is falsifiable, particularly for the things that you are doing deep qualitative work on – oh my goodness. How wonderful.
Yeah, it is. It’s a great relationship for sure. Thank you for reflecting on those parts that at least I found interesting. I’m sure other readers will find other areas interesting in the book as well. But I wanted to focus a little bit now on what do you see on the horizon either for the discipline itself of research or for you personally, if you’re interested in reflecting in that direction?
Well, it’s an apropos time. We’re at the end of a decade – or the beginning of a new one – and as we’re recording this, the new year is very fresh and very top of mind for a lot of us. And the future is here. You know, I was thinking the other day about the end of the 20th century, and the end of the 19th century, and how they are the same and different. The 19th century, they say, didn’t really end until the end of the First World War, because people still had this insane optimism in the early 1910s. And it wasn’t until well into 1918, when the war was ending, that people realized what modernism had wrought – what was upon us, and what the next century might bring. I think the same is happening in this epoch. I don’t know what marks the end of the 20th century. Was it 9/11? Brexit? I don’t know what marks it exactly. But I do see us at the cusp of a very important change as a civilization, but also for researchers. We are now truly confronting the reality of algorithms in our lives, and researchers in particular, whose job it is to develop knowledge, are now forced to confront what knowledge means in the face of artificial intelligence. I know, artificial intelligence is not that intelligent yet. Anybody who works in AI or machine learning in any way knows 100 percent what I’m talking about. The inanity that algorithms will turn up on their own is kind of insane. But you can also smell it, right? It’s around the corner. So if developing knowledge is becoming, shall we say, lower in cost – it’s not there yet, but it’s coming – what is a premium knowledge base? What is a premium knowledge producer? What is the luxurious knowledge producer?
Yeah, there are people out there already arguing that it’s art – that art is next.
Exactly, exactly. And maybe this goes back to our poetry conversation, about elasticity of our data. If we can figure out as researchers how to stretch and bend data in a way that gives insights that machines will never be able to do, that might be something we should gravitate toward, don’t you think?
Yeah. It’s interesting. I do think that it’s an interesting balance, right? Because without being systematic, we lose any way to substantiate our claims. At the same time, there is a creativity – and it’s maybe a bad use of that word – to developing insight. I think that’s where you’re headed. It’s not literally about repetition. It’s not about being able to spot a cat because you’ve seen 30,000 pictures of a cat. It’s something different than that.
That’s a great example. What kind of cat? Was it the same cat, the exact same cat?
That’s what happens when there’s a glitch in the system.
Yeah. Nice reflection. Thank you, Sam. I’m wondering if you’re open for a few rapid-fire questions as we close up.
Okay, great. What’s the one thing about the tech industry that you wish you were able to banish?
Banish? Wow. Well, frankly, I would like to banish inequality that’s embedded in the tools. I want to banish that with a cudgel – in a New York minute I would banish it.
What advice would you give someone who’s beginning to explore a career in UX research?
Do read research methodology, even on your own. Just read some of it, because it does exist. Read it. That’s your skill.
Yeah, it is definitely the bread and butter. Let’s see, what have you read recently that you’d recommend to our listeners?
Oh my goodness, what a great question. You know what I’m reading right now? I’m reading a book called Forgive for Good. It’s written by a researcher here in the Bay Area – at Stanford, if I’m not mistaken. He actually teaches a course on how to forgive, which I was like, ‘What is that?’ And apparently forgiveness, like a lot of things, is a skill you can learn. And I was like, wow, okay, that’s a really curious thing. So I started reading it, and I’m finding it fascinating so far, to be honest. The other thing that’s interesting about this book – and I didn’t pick it up for this reason – is it turns out it has a lot of the same content as the work I’ve been reading over the past year or so on Stoic philosophy. I did not anticipate those two things coming together at all, but I was like, ‘Oh, you know what, that makes sense.’ So Stoicism is another thing I would recommend to people – especially in these times, you know?
Yeah. I knew that you were reading about Stoicism, but thanks for the tip on the book. Great. Sam, it’s been awesome talking to you. As usual, thank you very much for your time and for being a guest on the show.
It’s been my pleasure. Thank you for having me.
Yes, thank you, Sam. For our listeners, is there any special channel you’d like them to use to get in touch with you if they’d like to reach out?
Yeah, absolutely. The first thing that they can do if they want to learn about the book in particular, they can go to mixedmethodsguide.com. The information for the book is all there. Usually you can hit me up on LinkedIn, 100 percent no problem. Try me on Twitter, I’m probably not going to reply. I might, you know. But LinkedIn is guaranteed.
I’d actually recommend that any listeners follow you at least on Twitter, because you have some good comments from time to time, even if you don’t respond to replies.
Oh, thanks. That’s nice. That’s good. Yeah, it gets a little overwhelming, so I’ve given myself permission to not personally reply to everything. But on LinkedIn, I know people are serious – they really want to meet and talk or connect – and so that’s the best way.
Great, great. Well, thanks again for your time, Sam, and I hope to see you in person soon.
Me too, Jay, me too.
I hope you enjoyed listening to this episode as much as I enjoyed hosting it. You’ve been listening to EthnoPod with Jay Hasbrouck. If you’d like to hear future episodes, you can subscribe on your favorite streaming service or visit ThisIsHCD.com. To reach me directly, you can follow me on Twitter at @jayhasbrouck or by visiting ethnographicmind.com. Thank you for listening.