The Captain's Log
Ah, to be a naval captain in the days of yore, recording the weather, looking over the horizon, stumbling across the deck with scurvy-weak knees.
This is like that, only I'm writing from my cozy office and the changing conditions and observations I'm recording are the ones I have during meetings and leisurely walks.
Aka, this is the home of the most unfinished, unpolished, unformed thoughts in a fledgling digital garden. They are tiny chunks of ideas that I write as I think of them, and then maybe over time they grow and connect to other ideas and re-emerge with more thinking in a longer, broader piece.
There's lots of "I" pronoun use here (as opposed to "you," the reader, the audience) because these are quite literally what I'm thinking as the thoughts occur to me, not yet filtered for how they help or impact other people OR what actions someone else might take.
Okay, here's the log:
Oct 28, 2021
About this time I presented JTBD findings and nothing changed
This is the story of how I learned that *getting* customer insights out of a JTBD research project and *mobilizing* insights across the org are two completely different initiatives.
A few years ago I ran a JTBD study, identified the job stories, & shared the output with the CEO who hired me.
"I love this! Present it in all hands." I did, & everyone there seemed to "love" it, too. Emojis in the Zoom chat, DMs after. The works.
And then... nothing happened. After that project presentation, few people consulted the recordings, the transcripts, or the artifacts. The research didn't make its way into other decision-making centers outside of the ones I was already in.
Why did seemingly excited people do nothing with what they learned? Well, a better question might be: what did I *want* them to do differently? And how did I help my colleagues make those changes?
Early on I hoped that saying, "Hey, here's the report" would precipitate:
- Replacing much of what was on the roadmap with new opportunities
- Getting access to data and sharing it across teams
- Re-orging team structure around the customer journey
None of this happened and little changed because saying, "Here's a report, now go do stuff," won't get an entire org to build a customer-led mindset overnight.
If we do research for people who have not worked with research before, if we present insights without guiding our colleagues on what to do with those insights, if we give teammates the tool of qualitative data but not the support to understand and apply it, we can expect the outcome to be limited, the insights to stay sequestered in someone's hard drive.
Using insights to build a customer-led culture doesn't happen with research alone - it happens by re-orienting away from mobilizing *the insights* and instead mobilizing *the team* to believe in and use the insights as a guiding strategy for decision-making. And that is a change strategy project altogether different from research.
Oct 19, 2021
First, JTBD helps you learn about your customers. Then it helps you learn about your org.
When teams seek out JTBD, they're often aiming to improve a metric.
A common one is this: "We need to increase free-to-paid conversion, and we know we need to understand our customers so we know what to build."
Running a JTBD study will help you understand your customers and discover what to build. But that's not all you'll discover as you move from deciding by customer folklore to deciding by customer consensus.
A JTBD study helps your team discover how to improve a metric. And then it shows you that maybe you need some new metrics - plus a new way to hit them and a new strategy to execute on the opportunities you've uncovered.
Leaders who run a JTBD research project to understand their customers soon ask these 3 questions:
1. Are we working toward the right metrics? JTBD will teach you what milestones customers use to measure their progress - are they the same metrics that determine success at your org? You may find that the metrics you've been aiming at won't drive long-term success for you or your customers.
2. Are our teams organized to build experiences that help our customers make progress? Few humans describe experiences with companies by what "product" built and what "marketing" built. Can your team build experiences to help customers make progress if they're still operating with an arbitrary departmental separation?
3. Are we ignoring the *really* valuable opportunities? JTBD will disabuse you of the notion that your customers come to you because they "need analytics" and replace it with a full understanding of when, where, why, and how they need analytics - and all the other barriers stopping them from getting what they really want. You'll see the experiences your team can build to help customers along their journey, but they'll fall outside the scope of your current roadmap. Should you rethink it?
Run a JTBD study to understand your customers' world. Along the way, you'll start to wonder if your world is structured to help customers make progress, and what changes to metrics, team structures, and strategy you need to make to get it there.
Oct 18, 2021
What do you mean when you ask "Who is our customer?" What do you learn?
I've struggled to find a way to ask this question so it produces more than an occupation or set of descriptors. I've often griped, "If you ask it starting with 'who,' people might answer with a persona, and that's not what we want!"
The ambiguity in asking "who" makes room for answers that tell us where we are, where we ought to go next, and what kinds of growth opportunities we can pursue. *How* we answer tells us which of three decision-making states we're in and that is as valuable as the answer itself.
Ask this question at your org, and use the answers to see which of these 3 states you're in:
1. Customer Chaos: Team members are all over the place with limited to no agreement or understanding of who they serve
How it sounds: "Designers" or "Marketers" or "To run workshops" or "....idk"
What it means: The team is not solving any problem well for a well-defined group of people. It will be hard to do much of anything well without extraordinary luck or funding.
2. Customer Folklore: Similar answers in the form of personas or user groups with scattered details on use cases
How it sounds: "Creatives" or "Designers at agencies"
What it means: Teammates are responding to "who" using an occupation or set of attributes. Even if everyone agrees, they're sharing the characters but not the plot. They don't know the full story, they haven't agreed on what it is, and you will struggle to make strategic decisions to support how people find, start, and stay with you.
3. Customer Consensus: Job story shorthand
How it sounds: "Job 2" where everyone knows that means something like, "Knowledge workers who planned an in-person meeting that is now remote who need a way to run it well so they don't lose progress or clients."
What it means: Team members will use an established and agreed-upon shorthand to refer to a customer including their persona, struggle, and definition of success. Shared understanding rich in context gets your org aligned so you can make smarter strategic decisions on marketing, product, ecosystem strategy, and more - and move faster.
How to move from one decision-making state to the next
Start by surfacing the misalignment: bring everyone together to share their understanding of who your customer is so everyone can see it for themselves.
When everyone agrees the misalignment exists (and that it's sabotaging growth), run a JTBD study to come to customer consensus. Plan on some thrashing about and time to adapt to this change before finally getting there.
Oct 15, 2021
Is it obvious *to you* that your team lacks customer consensus, but no one agrees - or thinks it's worth addressing?
Many teams first need to agree that misalignment exists and that it's causing problems severe enough to allocate time to solve - especially if the team runs on a ship-ship-ship treadmill.
Have you had enough conversations with team members to sense that you're making decisions based on customer folklore? Have you watched enough projects flop to sense that a lack of customer consensus is at the heart of your team's challenges?
If you have, you might know that your team is missing the kind of alignment around who you're building for that JTBD can get you, but getting traction on a change initiative can be a hard sell - even when you're the boss.
To start paving the way for a JTBD initiative, use this framework to facilitate a conversation and build a coalition eager to align around who your customers are:
Convene: Invite 4-6 team members who have shared different ideas of who your customer is and expressed frustration around their work
Alex may have said, "I'm building this feature for our target audience, designers." Meanwhile, Sam may have run a competitive analysis on role-agnostic meeting tools.
Did Alex see limited adoption of the designer-focused feature? Was Sam's competitive analysis ignored? If so, they're feeling the pain of customer folklore and will benefit from a JTBD project, even if they aren't yet aware that a solution exists. Frame the conversation around what's going on in their world.
Describe: Ask each member to independently describe your customer
Have everyone respond to the same set of prompts; here are three to get you started:
- How would you describe our customer?
- What happens in our customers' world that leads them to give us a try?
- What outcomes are they seeking? What outcome do they get?
Discuss: Review the similarities and differences - and why they matter
Review everyone’s responses solo, then discuss as a group - and compare to a real customer story if you have one.
There may be an awkward silence as folks sit with the realization that they haven't been marching in the same direction - "but I thought we were!"
As you create space to see these differences, you'll bring attention to the problem, give folks a reason to want to solve it, and create space to discuss solutions.
Oct 14, 2021
Why the *process* of running a JTBD study is a team-alignment mechanism as much as the *output* of a JTBD study
We design JTBD interview studies to explore a big question: What's going on in our customers' worlds that drove them to give our product or service a try - and how have we helped them make progress?
To explore this question at an existing business, we design studies around people who have recently been in the "before" state prior to switching to our product and are now in an "after" state of achieving some kind of success with us. This group is likely to have a fresh memory of their time in the old world and fresh perspective on their new world.
To get this perspective, we recruit interviewees who fit at least two criteria:
1. People who have made a decision to switch to our product or service very recently (i.e. started a trial and upgraded to a paid plan within the last 30-90 days)
2. People who have achieved a success milestone during that time
Sitting down to recruit people who match these criteria is an exercise that brings many teams to their first moment of realignment.
Designing a JTBD research study is often the first time teams have stopped to consider what makes customers successful, whether there is consensus on that outcome, and how (or whether) it's measured.
Choosing to say, “Let's find customers who are successful” means asking:
- How do we define success?
- How do our customers define success?
- Does success mean something different for different user groups?
- How do we know that this is the best way to define success?
- Where did that success metric come from?
- Is success quantifiable? Should it be?
For a great many teams new to JTBD, recruiting participants is the first time in a long time that anyone has stopped to consider these questions - and it's where the change work begins.
Oct 12, 2021
Why is it so difficult to learn how to do JTBD - and to see if it's right for you?
I wrote essays earlier this week for folks who are aware of JTBD, who believe that it might help them, but who aren't yet sure exactly how it works.
Today I started to push on down that line of thinking with a few essays on some of the fundamentals of JTBD, including essays on switch moments, the 4 forces, and questions to ask to understand an interviewee's experience over time - plus notes from a conversation with a colleague earlier today about how JTBD is more of a theory than a framework and why that matters.
For each of these I poked around to find a good source to cite, a good debate-settling masterstroke by one of the many folks out there who have used JTBD effectively.
And in looking for resources on JTBD for the first time in some time, I stumbled back into the world that folks new to JTBD often remind me of with a grimace.
The limited learning resources and varied interpretations of JTBD present a steep learning curve for novices, and the consequences for being wrong are unclear.
It is challenging and overwhelming to understand how to apply JTBD at your org, run JTBD interviews, understand the multiple interpretations and applications of JTBD theory, and determine whether it is even a worthy endeavor (many believe it is not).
There are few success stories of end-to-end JTBD application, so what does it look like when it's going well? There are even fewer stories of failure, so how bad can it go if you do it wrong?
What is the real outcome? What is the real risk? How do you know if you're doing it right? How do you course correct if you're doing it wrong?
Why are many JTBD resources so muddy? Is it complexity of the theory? Ambiguity of interpretation? Is the secret sauce kept secret on purpose? Is there similar muddiness for other methodologies? In other fields? I wonder.
Oct 11, 2021
What's the difference between JTBD and other frameworks?
Many frameworks give a static snapshot in a story, i.e. a character or plot point
If I were to describe the folks I work with who are shopping for JTBD help with the common frameworks folks use to describe customers, here's what I might say about one group:
- Persona: CEO, VP Product, VP Marketing, VP Growth, VP Onboarding
- User Group: Strategic and economic decision-maker for the project
- Ideal Customer Profile (ICP): A SaaS leader at a B2B SaaS company with 20-60 employees and >$10 million ARR
- Pain point: Don't know how to out-compete larger firms, don't agree on customers
- Task: Fix onboarding, fix retention, build ecosystem strategy, get team alignment
- Customer journey map: Reads 3 articles, sends an email asking for help, has a call, maybe two, books a project
- Desired Outcome: "Get an understanding of our customers" or "Get our team aligned" or "Grow metrics" or "Grow to $100 million ARR in 3 years"
JTBD gives you the whole story, i.e. the characters, beginning, middle, end, emotional, strategic, and logistical context of that story
If I were to tell you about this same group using the "job story" component of JTBD, here is what I would say. Or rather, here is what that person might say:
"Once upon a time, my team realized that increasing activation metrics is our biggest opportunity to grow the business. We formed an interdepartmental team with folks from product and marketing to focus on onboarding. That's when we realized that everyone had a different understanding of the customer we were building for.
"We needed a way to build on a single, shared, empirical understanding of our customers so we could get activation outcomes, define our ecosystem strategy, and grow the business, but we didn't know where to start - and we were afraid that our team of non-experts might make mistakes that kept us misaligned.
"That's when we decided to research JTBD experts to facilitate this process for us, so we could be sure we're following the best practices, getting the best outcomes, and getting buy-in from our team. We reached out to you because we heard you on a podcast and know that you specialize in the team-alignment side of research."
Traditional frameworks give us static snapshots of individuals in singular moments. They're helpful, but they're not the whole story that JTBD gives.
Oct 10, 2021
The most common myth about JTBD has to do with time
When I speak with people who are new to JTBD, some are excited to use it to solve a painful problem. Others are skeptical that JTBD can solve any problem at all.
I've observed that each group of new JTBDers tends to draw its initial excitement or disdain from the same misconception.
Myth: JTBD is about validating assumptions and confirming what to do in the future.
Novice champions of JTBD often say something like:
"I've been talking to people internally and looking at our features, so I have a pretty good list of what the jobs are. I know I can't just use my opinion, so I want to run a study to confirm my hunches before we start building."
Novice skeptics of JTBD often say something like:
"I don't know about this so-called research, seems to me it's just biased people running these studies to confirm their own opinions."
Both groups latch onto a myth that JTBD is about coming up with an idea on your own, then going out to confirm it with customers. One group uses this myth to support their own initiatives, the other uses it to reject the premise of JTBD.
Reality: JTBD is about understanding what has already happened in the past to kickstart a new strategy for the future.
JTBD interviews are about asking customers about what has already happened.
We don't try to validate whether a feature fits in a roadmap, because we ask about what customers did yesterday, not what we might build tomorrow.
We can form opinions or biases and guess about why customers choose our product all we want, but we are almost always wrong.
So we ask customers what they've done already, when they did it, and what it felt like. We use those inputs to form an empirical understanding of our customers' job stories. And we use that new understanding to kickstart new conversations where both champions and skeptics say:
"Oh okay now I know what features we should be building, what problems we should be solving, and how we can all get aligned and working in the same direction."
Oct 9, 2021
JTBD helps your team change your decision-making state
If you asked every member of your team to write down their understanding of who you build for, what they're struggling with, and how they define success, what you hear will tell you a lot about how your team is making decisions.
The 2 Decision-Making States:
If someone from marketing responds to this question with a persona descriptor, someone from product describes a use case, and someone from customer success describes a pain point, and there is no real shared understanding that you could verify with your customers even if you wanted to - your team's decision-making state is based on folklore. There is no consensus at the starting line. Not within teams. Not across teams. Not between your team and your customers.
When your customer stories have similar main characters but different plot points, you're making decisions based on customer folklore.
When your team responds to this question with a single, shared, empirical understanding of your customers, when they can tell you confidently that they know it to be true because they have verified it by listening to and observing actual customers, then your team is making decisions based on a customer consensus.
What happens when you go from deciding by folklore to deciding with consensus
When you're making decisions based on folklore, everything is a guess or a mess. UX meetings drag on and on because everyone is defending their version of the customer. What gets built in-app doesn't match what gets built for email. Big opportunities get missed. Huge.
When your team makes decisions based on customer consensus, marketing and product aren't bumping into each other - they're supporting each other. You're getting acquisition, activation, and retention outcomes. And competing strategic priorities sort themselves out.
* I'm still noodling on whether consensus is the right word for this
Aug 23, 2021
Highly recommend this Captain's Log format to remove the pressure that makes it hard to make writing a regular habit.
I heard this interview with Seth Meyers after he left SNL to host Late Night, and he talked about how SNL is 1x a week and his new show is every night, so he got a lot more chances, which meant he could be a lot less precious with the segments they produced.
I've thought about this a lot, and tried for a while to be less precious with writing, but still didn't write. I also created a few sites, but those, too, felt like the infrastructure supported very polished essays. Learning more about digital gardening and setting up Notion has helped clear out some of that pressure. Calling it a "Captain's Log" has helped get rid of most of the rest of it.
So now I'm changing the habit of what I'm doing when I'm writing. With the Captain's Log, I'm not "writing essays" as much as I'm "recording observations" and that alone makes it so much less stressful, it puts so much less pressure on it. And I have a fun playspace to put it in.
I've written 3 of these in under 90 minutes, and that is.... not how I have typically written. At all! I would mull things over and over to try to find the right framing, the most astute analysis, the one that people would read and say "Wow, Alli has made a Contribution to the Discourse." And now I'm not doing that, I'm just like, "Hey, I noticed this thing! Maybe it helps you!"
Next I have to work out how to mechanically distribute everything (just wired up some sweet-looking ChiliPepper.io forms and Notion to ActiveCampaign, next I'm figuring out how to format tweets and emails) and get over those fears of distributing the work.
Aug 18, 2021
What are some DOA ways to get your team to start using a new tool?
When I help teams implement EnjoyHQ, we talk some about the tool, but most of this work is about learning how to think about how qualitative data moves around the org, how decisions get made, how team members can encourage their colleagues to use a new tool.
Assumptions that fail.
- "Well they have been told they have to use it."
- "Everyone is really nice and seems very interested!"
- "We are doing a big group demo so everyone will know what we're doing."
These ways fail. I know, because I've believed all 3 would be surefire ways to force a change, introduce a new tool, change a workflow - and every time I've failed. Large group demos with the CEO saying "we're using this now" with colleagues who knew, liked, and trusted me - it didn't work.
Change initiatives (and introducing a new software tool into a workflow is indeed a change initiative!) that do not connect the change to the team members' individual goals fail. And when it "seems like" they shouldn't, it's all the more surprising.
Tactics that get outcomes.
Instead of a top-down decree, instead of relying on good will, instead of generic group introductions:
- Build a core group of champions to understand the change (i.e. new tool, becoming more customer centric); this might involve discussing the problem and creating urgency around it for longer than we might expect. (I typically mention EnjoyHQ 10-15 times before we begin the implementation.)
- Connect the change to the needs and goals of people using the tool.
- Do it one person at a time. Don't have time to talk with everyone one on one? Talk with a few people and help them talk to others.
And also: expect the change initiative to take 5-10x longer than you expect, expect to need 5-10x more touchpoints than you expect.
Aug 11, 2021
Why do researchers, copywriters, UX people sometimes struggle to get buy-in on initiatives?
We have all the skills for understanding how people think, how people make decisions, frameworks on frameworks on frameworks for creating experiences that help people become successful.
And yet, when we try to talk to our colleagues about a new tool or a new way of working, we often face a big wall.
Yes, to a certain extent some of the resistance we face, some of the intractable UX theater we encounter won't change - all organizations are the shadow of one person, and that person might have a different paradigm they refuse to shift.
And yet, so often I talk with teams that are struggling to get buy-in on implementing EnjoyHQ or being more customer centric. But they have not stopped to understand their colleagues as they would their customers.
- "Hey PM, what's going on in your world? What are you working on? What's your process for getting the research we use and for applying it?"
- "Hey VP Marketing, what's the strategy look like for the next 2 years? What do you need to execute it?"
- "Hey researcher in another department, what does your workflow look like? Where do projects get crushed?"
I have observed many researchers who will ask these questions easily of our customers and users and research participants, and yet often do not ask them of their internal customers, aka their colleagues, direct reports, and superiors.
But when teams center their colleagues' needs at the outset of a change initiative, when they approach them like research projects, I see much greater buy-in.
Jul 1, 2021
Qualitative data does not seem to translate into power the same way that quantitative does.
Even at customer-led orgs.
I want to think about why.
Jun 20, 2021
Will "UX Librarians" be commonplace in the future?
I'm starting to think that "Qualitative Insights Manager" or "UX Librarian" (or possibly a librarian function under the broader umbrella of "Research Ops") will be a common role in the future, based on my observation of how much work it is to set up and maintain a qualitative research repository.
I've said this a few times on calls to overwhelmed PMs processing their new responsibility of managing a library of their org's insights, when all they wanted was just to have the insights accessible.
I suppose I am saying this in part to alleviate their fear, validate their responsibility even.
But really, do we let every user of the library decide how to organize the books? No, we do not; the managers of the Dewey Decimal system or Library of Congress system do that. But of course that is for published works and not data and in-progress findings.
But how much do the classifications change for much of the taxonomy? For many of the projects?
What type of governance is necessary for a qualitative insights repository? What is sufficient?
Jun 16, 2021
Introduce your qualitative insights repo to one person at a time
I consulted with a team recently that is getting ready to roll out their insights repository to their entire org by flipping the switch to turn on SSO access to the repository.
I advised strongly against doing that on account of having led 20-person demos of EnjoyHQ and then received ~20 pings on Slack immediately after the demo with questions and confusion.
(I actually do not recommend flipping a switch and making it accessible to everyone without training or understanding of what qualitative data is and how it's used. Only people who are not antagonistic to research get access. Or people who are being chaperoned by someone who will explain the raw and/or processed insights to their team.)
Much better to build out your repository for a small group, get really comfortable, understand how it supports you and your team, then roll it out to another few people - but do it one at a time.
I explained the folly of my earlier days, and this team changed their strategy on the spot.
Jun 7, 2021
"Consultants are on the outside so they don't have the chance to go deep."
This is something I hear often in new client intake calls. It is so fascinating because in almost every engagement I've had, I, the outsider, the one not bothering to show up for all-hands meetings, am somehow the one with the deepest understanding of the customer's JTBD.
I reacted in the moment to someone saying this to me recently with my chest puffed out: "Well, we'll see about that!" I say this on account of the fact that my engagements involve doing research, setting up research libraries, defining strategy based on that research, and then designing experiences based on that research - and then presenting to stakeholders who have often barely skimmed the research report. "I'm the only one doing research, so in one study I'll know your customers even better than you do, so there."
And lately the threads I'm tugging at in my work are about making myself less unique in understanding the customer. How do we mobilize the insights? How do we get the team to agree on how to use the insights? What role does qualitative data play in decisions, and how do we get it to play a bigger role?