Stanford Webinar: Need-finding and Empathy - Why Your Customers Can't Tell You What They Want | Stanford Online
Summary
Core Theme
Design Thinking is a process and mindset for innovation that emphasizes understanding users deeply through empathy and iterative prototyping, rather than directly asking them what they want, as users often cannot articulate their future needs.
So now I'd like to present our speaker today, Professor Bill Burnett.
So Bill is an adjunct professor of mechanical engineering, and
the Executive Director of the Design Program at Stanford.
He directs the undergraduate and graduate programs in Design and
teaches at the d.school.
He received his bachelor of science and
master of science in product design at Stanford.
He has worked in startups and Fortune 500 companies,
including seven years at Apple, designing award-winning laptops,
and a number of years in the toy industry, designing Star Wars action figures.
He holds a number of mechanical and design patents and design awards.
And in addition to his duties at Stanford, he's on the board of
a socially responsible fashion startup, and advises several other startup companies.
So with that, Bill, I'd like to turn it over to you.
>> Thank you very much.
Let's just get started here.
And we try to create a little bit of a provocative title,
Need-Finding and Empathy: Why Your Customers Can't Tell You What They Want.
And from the looks of the number of participants we have here,
I guess the title works.
But it's really an interesting conversation, and I have this
conversation a lot with people when they either are reading about Design Thinking,
or they've maybe been to one of our workshops,
the Master Class which is coming up in June, or one of the other workshops we do.
And they go wait a minute this Design Thinking thing,
it's kind of pretty simple, isn't it?
You just go out and ask customers what they want.
And then you build it.
And then they say that's it.
And then you're done, right?
And it really doesn't work that way.
So take a little bit of a step back to just make sure everybody's on
the same page.
Design Thinking is both a process and what we call a set of mindsets,
a way of thinking, or a culture that you create in your organizations.
And this is the pretty common diagram; you see it on the d.school
website, and our website, and the design program's.
[COUGH] It's kind of a five-step process: you start with empathy,
then you define or redefine the problem.
Ideate just means have lots and lots of ideas, and then you prototype and test.
Now the thing about this diagram that is a little bit misleading
is that it looks like a linear process, but in fact, it's not that at all.
What happens is design teams enter this process at various points.
Maybe they have an idea already that they want to try with customers,
maybe they don't even know who their customers are going to be if
they're a brand new start up.
But this is sort of the way we describe it.
And the way I've always talked about these kind of process diagrams is remember,
the diagram isn't the process.
It's just a way of breaking it down so we can teach it.
I like to use an analogy to golf.
If I were going to try to teach you how to play golf, well,
I couldn't teach anyone, because I'm a terrible golfer.
But if I were going to learn to play golf, the teacher would probably break
the golf swing down into a number of things, like how do I hold the club?
What's the proper grip?
How do you address the ball? How do you place the ball on the tee?
How do you bring the club back?
So each of these pieces would be described as a separate step, but
of course that's not what a golf swing is.
A golf swing is a continuous, a beautiful continuous motion which
results in the golf ball going where you want it to go.
So remember that the process diagram is
not the way the process actually unfolds when you do it.
That's one of the reasons we like to talk about the mindsets of Design Thinking.
[COUGH] The way in which designers think, and the way in which if you have a culture
of creativity, your design team, or your company would think.
I would say you start with curiosity.
We're trained, as engineers, designers, people in marketing,
finance, whatever discipline you're in in an organization,
to think about the world with a sort of skeptical rationalism.
You break problems down into smaller and smaller steps.
You rationalize them, you use your logic and deductive reasoning to make decisions.
That's not a particularly useful way to think about doing innovation.
Because, when you're doing innovation, you're moving into a future where nothing
exists yet, or the things that you're trying to invent haven't been invented.
And so there really isn't much to be skeptical or rational about.
It's much more interesting and much more generative to be curious.
So we start with curiosity, and we typically
reframe the problem; that block called define, I always call redefine,
because typically we've been given the wrong problem to solve.
All the problems are sort of out there in the world, but
the real problem is something either deeper or at a more abstract level.
We like radical collaboration across multiple teams,
we like to be mindful of our process, because when you're in the middle
of a converging part of the process where you're trying to bring things together,
it's important to know that's not a good time for brainstorming.
When you're in a divergent part of the process, where you're looking for lots
more ideas, that's when you want to reach out, collaborate, and brainstorm more.
And then of course we have a bias to action, because of this set of problems
that we're going to talk about next, about why people can't tell you what they want.
Since there's not a lot of data to be analyzed, and
there's not much that we can plan with, we just like to start by doing.
So as kind of a background, with Design Thinking,
we always say we start with the people, not the problem.
I've been a consultant for 25, 30 years.
Someone might come in and say, hey, we're a company that makes these certain products,
and we'd like to make new, more innovative versions of these products.
And that's typically the wrong place to start.
We go back and say, well, that's interesting,
but let's just start with who your users are, who your customers are, and go there.
Because most problems are actually solutions, masquerading as a problem.
The New Wheelbarrow Project is the classic example.
I was doing a project for a company that makes wheelbarrows.
Actually, it's a slightly disguised project, they made another really,
really common industrial product that you'll find at every job site,
but we're going to say they made wheelbarrows.
And they wanted us to do an innovative wheelbarrow.
And so the problem, of course, with that framing of the problem is that the problem
already is described as a solution.
They want another wheelbarrow, they just want an innovative one.
That might result in some small product modifications.
Two wheels instead of one, better handles, more ergonomic,
design of the lifting process so that people don't hurt themselves.
But at the end of the day,
you're still going to end up with a slightly modified wheelbarrow.
And so,
that's an example where the solution is not actually what the problem is.
The problem that a wheelbarrow solves,
if you step back a second, is that there are workers all over a job site, and
materials are typically delivered to the job site around the periphery,
because it's easier for the trucks to drop things off in pallets full of, let's say,
bricks, at the periphery of the job site.
And therefore you can't get the bricks to the bricklayers that are over somewhere else.
And so they come with their wheelbarrows, they unload a few bricks into
a wheelbarrow, and they walk over to where they're doing the work.
So the problem isn't wheelbarrows, the problem is this location of workers and
supplies.
Once you frame the problem that way, there's lots and
lots of ways you can solve it,
that are much more innovative than simply thinking about a new wheelbarrow.
I'll give you one example of what you could do.
Instead of delivering all the bricks on pallets to the edge of the job site,
you could have Amazon deliver the bricks in individual small quantities with
their army of drones.
In which case you'd locate each of the bricklayers
by the cell phone in their pocket; the GPS
receivers in their cell phones would simply transmit their location on site, and
the drone would put the bricks exactly where they want them.
You would never come to a solution like that.
And I'm not necessarily saying that's a good solution.
But you'd never come to a solution that had that much innovation in it if you were
just brainstorming wheelbarrows.
So this whole notion that you just take a group of users and ask them what they
want, what they need, it just doesn't work, and here's the reason why.
This is a famous Henry Ford quote that he said,
if I'd asked people what they wanted, they would have said faster horses.
Now, just as a disclaimer here,
I want to point out that it's most likely that Henry Ford never said that.
We can't find any historical evidence of it.
But it's sort of a snarky statement, it's something Henry might have said.
But his point was simply that users don't know what they want,
they only know what they've got.
And so when you ask them about what they need,
they give you variations of the things that they already know about.
And that's the first part of the problem.
The problem with this need-finding thing is, you go out, and let's say you're...
I was at Apple for seven years, but long before they started doing phones,
and I know some people from the original phone team.
They knew they were going to do a phone;
that wasn't what the need-finding was about.
The need-finding was all about what kind of phone it would be, what the features
would be, how data would be presented, what the new paradigm would be.
So let's say you're on a team like that, and you've done a bunch of market research,
and you've determined that users have a specific set of needs.
You know, the need to find phone numbers easily,
the need to be able to navigate to websites easily, things like that.
So you've built this really high-resolution prototype, which most people call a beta,
and objectively you and
your team know you've met all the needs that the users have talked about.
You show it to your customer expecting that they will love it, and
what they say is, the thing that all engineers and designers hate.
They say, yeah, that's what I said I wanted.
But now that I see that, I've changed my mind.
You know what I really want is...
And isn't that maddening?
Right.
And this is where the engineers go back to the marketing people and say, hey,
you didn't spec this right.
Because we showed it to users, and
they had other needs that you never told us about.
And the marketing guys say well, we told you to build exactly what they wanted.
So if they didn't like it, you must not have built what they wanted.
But instead of getting into that conversation, you just gotta
acknowledge that humans are what is called technically a wicked problem.
Human beings do not stand still inside some sort of market research bubble and
their needs and their wants are not consistent from moment to moment.
And particularly when you've shown them something that they never could have
imagined.
And here's the problem.
It turns out a wicked problem is something that came out of
social science, and I think originally out of Berkeley,
from some guys who were studying urban planning, and why, even when you've built
something everybody wants, they don't like it anymore.
They said, a wicked problem is a problem that's difficult or
impossible to solve because of incomplete, contradictory, and changing requirements.
In addition to that, these requirements are often difficult to recognize.
So it's difficult to know whether you've really understood the requirements
of your users, because users are so complex.
And this is why.
The data that you're looking for, the data that you want, is about some future state.
But that future state changes as soon as you compose a solution.
So I want a phone that makes it easy for me to dial my phone numbers.
You show me a phone with a graphical user interface.
And then I say, well,
wow, if I had a graphical user interface, I want a browser too.
And that was never in the spec.
But you've shown me something that changes my perception of what's possible.
Because I, as a user,
only believed [INAUDIBLE] you could maybe find a better way to do menus to
solve my problem of finding my phone numbers in my address book.
But then later you show me this other thing, and
I realize, now if that's possible, then all these other things are possible, and
now I've changed what I want.
And this notion that requirements are difficult to
recognize: the users don't often recognize their own needs, what we call latent or
non-obvious needs, because of what we call the expert problem, or
what social scientists call the expert problem.
When you give a person a piece of technology, if you remember the first time
you used a spreadsheet or the first time you used a browser.
And maybe the user interface was clumsy and difficult
on the very first piece of software you tried.
You knew it was valuable to learn this, so you worked really hard to learn it.
And you figured out all these little tricks, if you remember Lotus 1-2-3 or
the early days of Excel, when it was hard to figure out how to do things.
So you figured out all these little tricks that you retained, and became, quote, the expert.
You were the person in the office who knew how to use that piece of software.
And so even though there were things in the software that were difficult to use,
and this applies to physical products or any other kind of product where, you know,
there's a little bit of difficulty to overcome.
And you become the expert, you've overcome the difficulty, you know the workarounds.
Then when I ask you, what do you need?
You say, nothing, this works really well.
And I have to observe how you're using it in order to understand all the ways in
which you worked around difficulties that are not obvious to you anymore,
because you become an expert.
Dave Liddle, who was one of the founders of Interval Research here in Silicon Valley,
talked about this problem in a couple of papers he wrote.
And he was also at Xerox PARC for the early work on the Star computer,
which became the basis of the Apple Macintosh interface.
He talked about how software goes through sort of three stages.
First, only experts and gurus can use it.
This is, you know, IBM mainframes in an air conditioned room and
technicians in white coats.
And then it becomes technical software that's used by a large group of people.
But the technical nature of the software requires developing expertise.
And then the experts hoard that knowledge and
actually can't tell you about the workarounds that they've invented.
Finally, software becomes easy to use: consumer software.
So think of video games, think of Nintendo, and things like that.
And now all of the websites that you use whose user interfaces are simple and
easy, with a flick of a finger or a thumb to navigate through them.
And so now you're not expecting to have expert knowledge,
you're just expecting to be able to use the product, because it's so easy.
But we've entered the era of expectations,
where everyone believes that the product should be easy to use.
And yet, the design of products is still mired in this old idea of trying to
capture the user requirements, writing a document and then executing on it.
When in fact, pedagogically, you cannot discover the true
nature of the user's need by simply interviewing and observing.
You have to do the rest of the process, which is to provoke the user's
experience with a prototype, super low resolution if possible,
in order to kind of churn up this next layer of information.
That thing where they say, well, if that's possible, then you know what I
really want is... That's how you get to the non-obvious user needs.
Because the most important thing to understand, I think, about need-finding
and empathy is that it works on this class of non-linear, wicked problems.
Coming back to, you know, the basics of understanding how we know anything,
I'm going to propose there are kind of three kinds of thinking. First, engineering thinking.
That bridge in the picture here is very straightforward.
I know the length of the bridge, the strength of the steel beams
I'm going to use, and I know what the load on that bridge will be from the cars,
and what the loads from wind and the other things will be.
I have a completely bounded set of information and
I have the equations that allow me to solve for the strength of the bridge,
the length of the bridge, and its structural elements.
And moreover, if I take that same set of specifications, the same
loads, but I move it from, you know, San Francisco to New York.
The same bridge will work in both cases because
the boundary conditions of the systems are exactly the same.
So we say with engineering thinking you can solve your way
forward because you have a bounded system.
Now in scientific thinking it's a little different. We have a hypothesis,
and this is how we've moved all knowledge forward.
You have a hypothesis, and then we create an experiment which
carefully isolates the variable so that we can determine whether it's independent of or
dependent on other variables.
Then we run the experiment, create a data set, and we analyze the data, and
we decide the hypothesis is either true or false.
The two things that are critical in this, so that we can design a good experiment,
are that we can isolate the variable we wish to test, and
that we keep our hypothesis constant.
We can't change the hypothesis to match the data, or we'd get thrown out of any
scientific journal, because that would be cheating.
The hypothesis has to stand alone, and so
in scientific thinking, we analyze and record.
Now, we don't have all the data, right?
We're inventing new data, but we have this very rigorous process of testing
hypotheses with experiments that allows us to validate the truth or falseness of the data.
And we believe in science, I hope, as well.
Then you get to this other class of problems we've just been talking about,
where you have humans in the equation and the dataset is not confined,
it's not well bounded.
Humans come at problems from a variety of different cultural,
social, and economic places; they look at products and
services based on the ones they've already experienced.
They're very poor at projecting forward into technologies that they don't know about, or
possible solutions to the problem that employ techniques that they
are unfamiliar with.
And they have this changing quality to them, where once provoked
with a possible solution about something in the future, their needs change.
So they're co-creating their needs with the examples that you show them. The only
way we know to solve that problem, and the most powerful way to solve that problem,
is something we've actually been working on for almost 50 years here;
we started giving degrees in this in our design program at Stanford in 1963.
The d.school is just a little over ten years old now;
we started in 2006, and that was when we started teaching design thinking
to everybody.
But we've really been working on this stuff for a long time, and
what we've discovered through many, many iterations of the process
is that the only way to discover the non-[INAUDIBLE] needs, that
nuanced data set that is so valuable to innovation, is to build our way forward,
to build and try the process over and over again,
to kind of collapse the user experience down to something
that they all agree is valuable, at least for that moment.
That picture, by the way, the picture I had of the iPhone a
couple of slides ago, was the original iPhone.
And it's kind of funny to look at it now, it looks so old-fashioned;
it only has a few icons on the screen, it's tiny, it's got little round edges.
It's interesting how not only do we age, but our sense of style and
values change.
And particularly in technology products that evolve so quickly,
it's funny to see something that's not even ten years old look so old fashioned.
I'm going to go through a build here
of what we call the five basic skills around design thinking.
So we've discovered why customers can't tell you what they need:
because they actually don't know it, and because it
evolves with the experimentation that you do with prototypes.
So now we actually get to:
how do you get the innovation data that we desperately need,
if we want products that are radically different from things we've ever done before,
that open up new value propositions and markets,
or, in the non-profit world, create new services that scale very rapidly?
So we said there are five basic skill sets in this whole process.
And you put them all together and you use them in an iterative process, and
it turns out to be the only way we know to sort of invent and
innovate regularly for the future.
So the first step we talked about, observing with empathy.
And we call it seeing the water; there's an old joke.
Two fish were swimming along, [INAUDIBLE] says, hey boys, how's the water?
And the two little fish swim on for a while, and
one turns to the other and says, what the hell is water?
And that's a little joke, but
it's all about how we don't see the environment we live in; we don't understand,
we don't often make visible or make actionable, the culture we live in.
And so this observing with empathy is about seeing the water that your users swim in.
The expert problem, where they don't even notice the workarounds they're doing
anymore, because they've gotten so good at that piece of software,
or so good at that interaction.
You have to break it down and you have to go through your [INAUDIBLE], which all
involves observation and this process of provoking the world with prototypes.
You go out, you talk to people, you show them stuff, you look at the world they're
in, you live with them, you do a ride-along for a day with a cop.
Or you do an in-home visit with an elderly person if you're working on
an aging-in-place project, and really try to get into their shoes;
we call it a walk in their shoes.
And then, from that, you have all this data and now you have to develop insights,
because the idea after empathy, in the define step, is creating a point of view,
which is a little different than a hypothesis,
because we're allowed to change it as we get new data;
it's more flexible than a hypothesis.
The insights come from framing and reframing the data and
trying to understand what people actually are thinking and
feeling behind their actions.
Third skill set, the ability to manage ideas or the flow of ideas.
We talked about certain parts of the process being divergent.
And actually, when you form a multidisciplinary team, or
a radical collaboration team, you want to have some people who, and
there's a couple of tests for this, you kind of know it
in the people that you work with,
you want to find some people who are just really good at divergent thinking.
In other words, they're just really good at having lots and lots of crazy ideas.
They're not very good at figuring out which ones are the good ones
or the bad ones, but
they love the process of diverging, of opening up the solution space.
And anyone can learn this: you learn to brainstorm well, you learn mind mapping,
you learn other analysis techniques, you learn some of our improvisation techniques.
All of which are just about diverging and opening up the solution space.
Then converging: now we've got lots of good solutions to choose from.
What does our intuition tell us are the best ones to choose to start nailing down?
So this converging and
diverging, and being cognizant of when you're doing one or the other.
That's really one of the critical things.
Actually, one of the hardest things to teach is this sort of layer of
insight that happens here, where there are lots of different ideas we might pursue,
but these ones over here feel like they might be more interesting, or
in a more generative direction,
or might yield potentially some more innovative solutions downstream.
The thing most teams do, is they try to, once they have lots of ideas,
they pick something that they know how to do, or
they pick something that's familiar to them.
And that's typically the wrong solution, or the wrong solution set to pick,
because you'll end up kind of back where you started.
So there's diverging and converging process, it's very important to manage.
The next skill set: iterative failure, iterative failing; I love this deck of cards.
There's a whole bunch of different ways to think about them.
But David Kelley, our senior professor and founder of the d.school and
the founder of IDEO says you gotta fail early if you want to succeed sooner.
Fail early, fail fast to succeed sooner.
We also talked about failing forward.
And this is the notion of prototyping.
And very specifically, prototypes in this case are really low resolution,
sort of attempts to interrogate the customers and
find out more about their needs.
Because you know the needs will change once they see [INAUDIBLE].
So it's not just about projects failing; if you're going to be an organization
with a record in innovation, you have to reframe even project failure as learning.
And you have to be willing to fail a little bit, even at the project level,
if you want to be an innovative organization.
But mostly when we talk about failing fast to learn something, we're
talking about prototyping, and building our way forward into the environment.
So that we have things that we can talk about.
Then we can provoke users and we can observe new behaviors.
And it's never a failure if the user uses the prototype in a way that was unintended, or
doesn't use the prototype at all but suggests an alternative path.
That is a huge success, even though you might argue that the prototype failed.
So after cycling through your observation phase and
your framing phase and figuring out converging and diverging and
then coming down to a few sets of solutions that you prototype.
You have to accept that you're going to fail forward.
And then as you begin to recognize certain patterns are persistent.
That users react the same way, positively, to certain things.
And you start building on those things and
making those things even more evident in your prototypes.
And you get to the place where you go from sort of
interrogating users to sort of selling them on your idea.
This is an idea that David Kelley talks about: nowadays it's no longer good enough for
the design team to just say, here's our design, what do you think?
You have to take the design and
put it inside a story, where it's a story of the future, the future of a user.
A user, let's name him John, who wants to do something new and different.
And here's my product, enabling John to live in a new and different way.
And if I tell the story of John,
with my product, in the future, making John's life better,
I've created the momentum around the design
that will drive it through even large organizations.
Because now I have proof, an example of a person using the product in the future.
And storytelling's become such an important
part of design now that we have a whole class on it.
So those, I guess, are the five basic skills: observing with empathy; developing
insights and asking why; the ability to diverge and generate lots of ideas,
and then converge and pick the ones that are most likely to lead to innovation,
not the ones you're most comfortable with;
and moving through the process by building your way forward, building and failing,
building and succeeding a little, building and finding a resonant point of
view where everybody is converging on something that's new and interesting.
I don't know the history of the case study of Tinder, the app.
Somebody figured out that matching people through profiles, in
a match-to-match way, was probably a good idea.
But the real innovation was this gesture of swiping right, swiping left,
which is just partly
driven by the technology and the interface of a smartphone, but
it's also recognizing the sort of casual nature of moving the data in and
out of your stream of consciousness, in a way
that matched how people were imagining this dating and
matching process going. And I think that deep resonance caused it
to be the app that everybody uses and all the other ones to fail.
Taking all the friction out of the interaction is one thing, but adding in a little magical
moment is what we all hope to do in design, and they nailed it.
Okay, so you have accepted users can't tell you what they want.
You've worked hard on your organization to have these five core skill competencies.
And then you take the data and you put it into the frameworks, and
you guys have seen these frameworks before, so I'll go through them real quick.
The fundamental framework is this notion of moving from the concrete
to the abstract, and from the here and now to the future;
today is on the left side and
the future is on the right side of the horizontal axis.
So you can be in the concrete here, the abstract here,
the abstract future, or the concrete future.
And you're basically moving around that space with the frameworks
you're using; the concrete here is your observations from the field, and
you start putting those observations into some frameworks.
The first thing we always talk about is the empathy map:
the fact that people say and do things in the world, but
what you're really trying to get to is what they think and feel.
So you make a little two-by-two of this, say, do, think, and feel,
and you translate things from the say-and-do categories to the think-and-feel categories,
using your own personal insight and intuition,
knowing that you may be wrong. This is why your point of view is not
a hypothesis that's testable; it's simply a point of view.
It's a user plus a need plus an insight:
when they said this, I think this is what they were feeling.
Let's go try to see if that's true by building a prototype that would work
if that thought were in fact accurate.
A powerful but simple framework is the hierarchy of needs; this is Maslow's idea.
There's a lot of work now showing this isn't necessarily a hierarchy,
that people can have none of their safety and
physiological needs met and yet still be working on self-actualization, even in
the poorest communities, even in refugee camps in South Sudan,
where people don't have security, and maybe not even food and safety.
People still make jewelry.
People still write and sing songs.
They still perform artistic activities that are sort of very human.
So although it's not necessarily a hierarchy, it's a good way of
looking at things.
If I've made observations in the safety and physiological areas,
but I have no observations in love, belonging, esteem, and self-actualization,
it gives me a reason to go back to the field to ask different questions.
Or if I've found three of the five things already, and
I'm going to try to intuit the other two, it gives me a framework for
what's missing, and it drives the design team to make better insights.
And then finally, this one that comes out of Dev Patnaik's book on need finding.
Dev is the founder of Jump Associates.
He has also written a bunch on need finding;
there's another book called Wired to Care, which is a business book.
And he also teaches the Stanford undergraduate need-finding class.
He's a great guy.
Dev developed what they call the simple mnemonic AEIOU system for
breaking down observations.
A lot of times students come back from the field and they've got these videos and stuff,
but they don't really know what they mean.
And it's a great way of looking at a video, encoding certain actions, and
then breaking those actions down into insights.
Activities: the actual activities that you observed. Did
the people you were looking at have a role to play, or were they just participants?
How was the power of the team divided up around activities?
What did you notice?
Environments, where were they doing what they were doing?
Were they in a public space or a private space?
Was the place constrained or open?
Were they able to move things around or not?
Interactions.
Were the interactions with people or machines?
What was the friction in those interactions?
How did the interactions occur?
What objects were they interacting with?
Because if you think about it, [INAUDIBLE] how objects tell us what to do.
A hammer's got a handle and a head, and you know exactly which is which.
No one ever picks it up with the head and tries to bang with the handle.
There's an old expression, the hammer shapes the hand.
But it's this combination of hammer and hand that is the tool.
And that's the object that we can understand.
With digital objects, it's much more complex to understand what were the things
people were using, touching.
How were they working?
But if you think about the simplicity of swiping your thumb left and
right in an app for selection,
you start to understand how critical objects interactions are.
And then other users.
Who else was there, what were people doing, how were those people involved in
either making the observable behavior easier or harder to do?
And we find that, a lot of times,
just looking at a video of a user doing something,
even just two or
three minutes of that video, can take several hours to decode what was going on.
Looking at activities, environments, interactions, objects, and users.
And then, to take that and put it into an empathy framework.
This is what they're doing.
These are all the things we observed them doing in all these different modes.
And then those translate to a mental model of what we think is
happening in their brain; how were they imagining the interaction to occur?
Why were they getting stuck at certain points and not others?
It's a really simple but very powerful way of breaking down your observations.
And we really do encourage video as a way of capturing
not only the simple and observable stuff, but the body language and emotional state.
Whether someone's frowning when they're trying to do something, or
smiling when they're trying to do something, tells you a lot about
the cognitive load that's going on in their brain.
So, we get to that point and
then I want to talk about how do you make this stuff actionable?
And I've alluded to it about ten times now.
You have to start with the assumption that the information you can get
about the future is very limited.
You don't live in the future, so not much you can do.
And this was compounded by the expert problem and
by the problem of users changing their needs.
So as you think about this, let's say you've come back with lots of observations;
you've seen people using the products and/or
services that are the things you're interested in redesigning.
So you do have a pretty strong data set.
You have a very nuanced set of data about what's going on: what the
cognitive model is, where the points of friction are, where the cognitive load
may be higher than it should be.
Where are the points of delight?
Oftentimes, if you want to unseat an incumbent, and
they're doing a great job and they're making something easy, Lyft,
Uber, Tinder, Snapchat, these apps that make instantaneous and
ephemeral communication super easy to do.
If you want to unseat them, out-innovate them,
you're going to have to be even more delightful.
It's not simply enough to remove pain points, or
to remove cognitive discontinuity.
You're going to have to out-delight them.
So you've got all this information, and then the only way we
make it actionable is this process of prototyping and iteration.
So, I don't have a specific example to show you, but
in the case of going all the way back to my example of, all right, I'm working for
a company that makes wheelbarrows and we want to be the most innovative company.
And let's say we want to grow ten times our size.
Well, we're not going to find that.
I guess we could start selling the wheelbarrow we've got in new markets.
We could go to China, we could go to India, we could go to other places.
But that's a marketing solution.
That may be an innovation, but it's not the same thing as a product innovation.
So if we really wanted to ask, how do you take a wheelbarrow company and
reframe it into a materials logistics company, or
a getting-stuff-moved-around-where-you-need-it-when-you-need-it company?
Simple prototypes, like drone delivery:
we could mock that up with a $100 drone from the toy store and paper bricks.
And we could fly around job sites and see what people thought about it.
We could look at what's really going on; it's an information system problem.
The stuff that we need is being delivered to different
places than where we need it.
So, could we design an information system that solves that problem?
Could we design micro trucks that could deliver stuff on site almost directly?
Which is kind of like what a forklift is, only maybe an autonomous forklift
that the bricklayer could call from his smartphone, like he calls an Uber.
So, you've got this, you create a how might we statement which is
based on your insight, but it's not about wheelbarrows, it's about logistics.
How might we automate the delivery of materials directly where they're needed?
Or how might we provide an information system so
that logistics on the job site are massively improved?
Or how might we create on-call systems so that carpenters and
brick layers can summon materials where they want, when they want?
So each of those how-might-we statements
provokes a whole series of possible prototypes.
Because you don't know which of the ideas are the good ideas.
But a statement has got an interesting resonance;
it seems to match up with some of the data that you gathered from the field.
I know it's a good how-might-we statement when I can think of three prototypes
I want to go try right away.
And the prototypes are fast, they're inexpensive, and
I learn something.
All right, literally a $100 drone and paper bricks, and
I can understand what the psychological and
cultural advantages and disadvantages of flying stuff around the job site can be.
Maybe people will think it's so cool, maybe they will think it's so
dangerous and so out of control that they would never do it.
Or maybe carpenters and brick layers are such conservative users that
this kind of technology intervention they would consider way, way, way too abrupt.
I often think about the introduction of Google Glass as an example.
The technology is actually pretty cool.
But it was introduced and handled so poorly that it became an instantaneous disaster,
and that was even amongst tech early adopters.
So you make things actionable by building in the world.
>> Thank you Bill.
So thank you so much for this wonderful presentation.
It's always interesting to hear about all the different ways that we can learn and
engage with our customers and arrive at so many unexpected results.
So Bill one of the questions that often comes up is
around this question of failure and risk taking.
What is the role of risk taking in generating ideas?
How important is that and how do people overcome that fear of failure?
>> Yeah, it's a great question and it comes up all the time.
When we do the master class, I often poll people in advance,
about what do you want to learn before you get there?
And one of the things they talk about is hey,
my organization doesn't like to take a lot of risks.
We're a very conservative organization, failure has a really high cost around
here, and yet we still want to be innovative.
What do we do?
And I do a lot of tours of the d.school as well, talking to companies,
and of course they say, we want to be more innovative.
And I go great, what are you willing to change?
And they say, nothing.
And I go, well that's going to be difficult.
[LAUGH] If you're trying to invent something that's never been done before,
if you're trying to create a product or service, or innovate in an area like
the wheelbarrow company, then you have to do something that's never been done before.
Look at the venture capitalists, who are supposed to be really smart: they invest in ten
companies and nine of them fail.
70% of product introductions in the United States,
by some measures, fail to ever return the investment
made in the development of those products over the next two or three years.
So 70% of products are failing and 90% of venture capital companies are failing.
And yet the ones that succeed are so powerful, the Googles, the Apples,
the Snapchats, the [INAUDIBLE].
They're so powerful that there must be something about being
comfortable with some level of failure.
Now, our idea is that you fail really early in the process because that's when
you can do the most learning.
But if you have an organization that's not willing to take risks at all,
then my only comment is that I believe we don't actually have a choice.
Innovation is going to happen in your marketplace whether you participate in it
or not.
So your only choice is do I want to participate in innovation and
try to stay ahead of my competitors?
Or am I willing to sort of stay in a defensive posture,
defend my position, continue to extend my products incrementally?
But boy, if Google or somebody else comes around and out-innovates me,
I will be caught flat-footed.
And probably see my markets and my margins decline.
So when these companies come, I say, well, you don't really have a choice.
You either do the innovation, or one of my 25 or 26 year old students is going to go
out and start a startup in Silicon Valley and out-innovate you in your space.
And then you're going to be faced with a declining market.
So there's a correlation between the ability to take and
manage risk appropriately and
to actually reward the proper kinds of failures.
Failures that are learning opportunities.
When we see companies doing those things,
they tend to out innovate their competitors.
So it's just something you're going to have to get used to managing like you
manage any other risk in an organization.
And look, 90% of your portfolio of development activities will probably be
what we would call, at Apple, line extensions.
They're simply the next notebook, the next iPad, the next phone.
But every once in a while, you're going to be called upon, because of market
forces, to come up with something completely new, something that's not in a straight line,
not a predictable kind of product, and
in order to do it, you have to be willing to fail.
Remember, the iPhone team failed three times with Steve.
He turned down the iPhone three times and said it wasn't innovative enough,
wasn't good enough.
And so if they can handle that, anybody can,
because failing in front of Steve was not a pleasant experience.
>> Yeah. >> [LAUGH]
>> Yeah, and
I think it's important to remember that big companies can also fail.
In fact, I had a question from one of the participants:
if a company such as Google can fail with Google Glass, what will the rest of us,
smaller companies and individuals, do?
And I think what you might be saying is that this failure is part of the process.
And you need to learn to live with it and learn from it and benefit from it,
is that correct?
>> Yeah, absolutely, and just a quick comment on Google Glass.
It's a classic example of a very poorly managed project and
a very poorly executed thing.
I don't think it represents a great innovation on Google's part at all.
It falls into the category of, we see lots of startups at Stanford, and
I see people come looking for advice.
And they come in and they're so excited about their product, and
they show it to you, and it's so hard, it's so technical, it's so amazing,
they've solved this huge technical problem. And then I say, great, who needs it?
What's this for?
What problem does it solve?
And they say, no, no, you don't get it.
It's the technology, it's so cool.
The problem with Google Glass was it was all technology.
They didn't take into account the social and
emotional issues of wearing something on your face.
They didn't take into account the privacy issues of having an always on camera that
didn't give any signal to the non participant that they were being
photographed.
And so it violated so many social and sort of privacy norms.
And it intruded on the face in a way that made the person look foolish.
And so all of these things were easily discoverable had they done
some need-finding.
But it's what we call a technology push project, where they are just pushing
technology into the world and hoping to find a quote market fit.
That almost never works.
>> Interesting, that's helpful.
Are there any specific traits or
characteristics of people that you look for that could be
more successful in performing design thinking activities, etc?
Is there something that you recommend looking for, or
kind of developing, as a person wanting to do more of that?
>> There is actually and it's interesting.
If you're an undergraduate in my program or a graduate student,
we'll teach you the entire process from empathy to prototyping,
through manufacturing, production, business planning and stuff.
But of course, some people are really good at one or two of the things.
Some people are really good at business planning.
Some people are really good at understanding markets and
how to distribute things very cleverly.
But there's a class of people that are just really good
at the design research part, the empathy observation part.
They tend to have very high social IQ, so
their social emotional intelligence is very high.
They tend to be incredibly good listeners,
because mostly this is just about listening and observing.
It's not so much about interviewing.
It's interesting, people who are great news reporters;
a friend of mine teaches journalism at Emerson College in Boston.
And he was a journalist for many years for the Boston Globe.
And he says, good journalists are just nosy people.
They're not good listeners;
they just want to pry into other people's lives and find out what's going on and
write it up.
That's the opposite of an empathetic listener.
So I've noticed some of my students just have a natural tendency towards listening.
They have an open sort of gregariousness, which invites people to talk to them.
They're the kind of folks that, if they were sitting on an airplane, would
talk to everybody in their section at some point, but they're not overbearing.
So the ones with high social emotional intelligence, probably sort of an extroverted or
open mindset, tend to be the best at the design research side.
Now that's all learnable, for sure, but some people just have it naturally.
[INAUDIBLE] >> Great.
So I think we have just three more minutes, we'll have time for
one more question.
Before we take that, just a reminder to everybody that
we are recording this webinar, and
an email with the link to the recording will be sent to all of you.
So last question, you described earlier that it might take a few hours to
kind of get all the data and information from even a few minutes of video.
Could that lead to over analysis of the data that you're collecting?
And how do you avoid that?
Well, one of the mindsets I mentioned, in I think the third slide,
was a bias to action.
So although you do go out into the field and collect all this video and data.
And, by the way, someone asked, does everybody on the team go?
We typically go in twos, because someone can do the interview
while someone's taking notes or running the camera very discreetly,
because you'd like the person doing the talking not to have to be taking notes.
But when you come back, and two, three, or four teams have all talked to different
users, you do need some time to collate that information and
just sort of talk through it and understand it.
But as soon as you believe you have even a cursory understanding or
beginning understanding of the users, form a how-might-we statement and
build a prototype, because your dataset's going to be incomplete anyway.
And you know that because as soon as you build a prototype,
the users will, I say, change their needs, but what they really will do is
just evoke new responses now that they know there are other possibilities.
So the bias to action would say spend 70% of your time in the field,
either interviewing or trying prototypes, and 30% of your time on the analysis.
And we actually measure our student teams, our masters teams; they have to tell me
every week how much time they spent doing and how much time they spent talking.
[COUGH] And if I don't see a 70-30 split,
I push them back out into the field to do more field work.
The prototype building and testing is part of the empathy process.
It's where the back part of the process connects back again to the front part.
Anyway, I think we're probably out of time.
>> Yeah. >> Thanks everybody for listening, and
it appears everybody hung in there for the whole time.
We really appreciate it.
>> Thank you everyone.
Have a good rest of your day.
Thank you for spending your morning with us.