Captain's Log

Ah, to be a naval captain in the days of yore, recording the weather, looking over the horizon, stumbling across the deck with scurvy-weak knees.

This is like that, only I'm writing from my cozy office and the changing conditions and observations I'm recording are the ones I have during meetings and leisurely walks.

Aka, this is the home of the most unfinished, unpolished, unformed thoughts in a fledgling digital garden. They are tiny chunks of ideas that I write down as I think of them, and maybe over time they grow, connect to other ideas, and re-emerge with more thinking in a longer, broader piece.

There's lots of "I" pronoun use here (as opposed to "you," the reader, the audience) because these are quite literally my thoughts as they occur to me, not yet filtered into how they help or impact other people OR what actions someone else might take.

-Alli

Okay, here's the log:

Oct 11, 2021

What's the difference between JTBD and other frameworks?

Many frameworks give a static snapshot in a story, i.e. a character or plot point

If I were to describe the folks I work with who are shopping for JTBD help with the common frameworks folks use to describe customers, here's what I might say about one group:

  • Persona: CEO, VP Product, VP Marketing, VP Growth, VP Onboarding
  • User Group: Strategic and economic decision-maker for the project
  • Ideal Customer Profile (ICP): A leader at a B2B SaaS company with 20-60 employees and >$10 million ARR
  • Pain point: Don't know how to out-compete larger firms, don't agree on customers
  • Task: Fix onboarding, fix retention, build ecosystem strategy, get team alignment
  • Customer journey map: Reads 3 articles, sends an email asking for help, has a call, maybe two, books a project
  • Desired Outcome: "Get an understanding of our customers" or "Get our team aligned" or "Grow metrics" or "Grow to $100 million ARR in 3 years"

JTBD gives you the whole story, i.e. the characters, beginning, middle, end, emotional, strategic, and logistical context of that story

If I were to tell you about this same group using the "job story" component of JTBD, here is what I would say. Or rather, here is what that person might say:

"Once upon a time, my team realized that increasing activation metrics is our biggest opportunity to grow the business. We formed an interdepartmental team with folks from product and marketing to focus on onboarding. That's when we realized that everyone had a different understanding of the customer we were building for.

"We needed a way to build on a single, shared, empirical understanding of our customers so we could get activation outcomes, define our ecosystem strategy, and grow the business, but we didn't know where to start - and we were afraid that our team of non-experts might make mistakes that kept us misaligned.

"That's when we decided to research JTBD experts to facilitate this process for us, so we could be sure we're following the best practices, getting the best outcomes, and getting buy-in from our team. We reached out to you because we heard you on a podcast and know that you specialize in the team-alignment side of research."

Traditional frameworks give us static snapshots of individuals in singular moments. They're helpful, but they're not the whole story that JTBD gives.


Oct 12, 2021

Why is it so difficult to learn how to do JTBD - and to see if it's right for you?

I wrote essays earlier this week for folks who are aware of JTBD, who believe that it might help them, but who aren't yet sure exactly how it works.

Today I started to push on down that line of thinking with a few essays on some of the fundamentals of JTBD, including essays on switch moments, the 4 forces, questions to ask to understand an interviewee's experience over time, and a conversation with a colleague earlier today about how JTBD is more of a theory than a framework and why that matters.

For each of these I poked around to find a good source to cite, a good debate-settling masterstroke by one of the many folks out there who have used JTBD effectively.

And in looking for resources on JTBD for the first time in a while, I stumbled back into the world that folks new to JTBD often describe to me with a grimace.

The limited learning resources and varied interpretations of JTBD present a steep learning curve for novices, and the consequences for being wrong are unclear.

It is challenging and overwhelming to understand how to apply JTBD at your org, run JTBD interviews, understand the multiple interpretations and applications of JTBD theory, and determine whether it is even a worthy endeavor (many believe it is not).

There are few success stories of end-to-end JTBD application, so what does it look like when it's going well? There are even fewer stories of failure, so how bad can it go if you do it wrong?

What is the real outcome? What is the real risk? How do you know if you're doing it right? How do you course correct if you're doing it wrong?

Why are many JTBD resources so muddy? Is it complexity of the theory? Ambiguity of interpretation? Is the secret sauce kept secret on purpose? Is there similar muddiness for other methodologies? In other fields? I wonder.


Oct 10, 2021

The most common myth about JTBD has to do with time

When I speak with people who are new to JTBD, some are excited to use it to solve a painful problem. Others are skeptical that JTBD can solve any problem at all.

I've observed that both groups of new JTBDers tend to draw their initial excitement or disdain from the same misconception.

Myth: JTBD is about validating assumptions and confirming what to do in the future.

Novice champions of JTBD often say something like:

"I've been talking to people internally and looking at our features, so I have a pretty good list of what the jobs are. I know I can't just use my opinion, so I want to run a study to confirm my hunches before we start building"

Novice skeptics of JTBD often say something like:

"I don't know about this so-called research, seems to me it's just biased people running these studies to confirm their own opinions."

Both groups latch onto a myth that JTBD is about coming up with an idea on your own, then going out to confirm it with customers. One group uses this myth to support their own initiatives, the other uses it to reject the premise of JTBD.

Reality: JTBD is about understanding what has already happened in the past to kickstart a new strategy for the future.

JTBD interviews are about asking customers about what has already happened.

We aren't trying to validate whether a feature fits in a roadmap, because we ask about what already happened yesterday, not what we might build tomorrow.

We can form opinions or biases and guess about why customers choose our product all we want, but we are almost always wrong.

So we ask customers what they've done already, when they did it, and what it felt like. We use those inputs to form an empirical understanding of our customers' job stories. And we use that new understanding to kickstart new conversations where both champions and skeptics say:

"Oh okay now I know what features we should be building, what problems we should be solving, and how we can all get aligned and working in the same direction."


Oct 9, 2021

JTBD helps your team change your decision-making state

If you asked every member of your team to write down their understanding of who you build for, what they're struggling with, and how they define success, what you hear will tell you a lot about how your team is making decisions.

The 2 Decision-Making States:

Customer Folklore:

If someone from marketing responds to this question with a persona descriptor, someone from product describes a use case, and someone from customer success describes a pain point - and there is no real shared understanding that you could verify with your customers even if you wanted to - your team's decision-making state is based on folklore. There is no consensus at the starting line. Not within teams. Not across teams. Not between your team and your customers.

When your customer stories have similar main characters but different plot points, you're making decisions based on customer folklore.

Customer Consensus:*

When your team responds to this question with a single, shared, empirical understanding of your customers, when they can tell you confidently that they know it to be true because they have verified it by listening to and observing actual customers, then your team is making decisions based on a customer consensus.

What happens when you go from deciding by folklore to deciding with consensus

When you're making decisions based on folklore, everything is a guess or a mess. UX meetings drag on and on because everyone is defending their version of the customer. What gets built in-app doesn't match what gets built for email. Big opportunities get missed. Huge.

When your team makes decisions based on customer consensus, marketing and product aren't bumping into each other - they're supporting each other. You're getting acquisition, activation, and retention outcomes. And competing strategic priorities sort themselves out.

* I'm still noodling if consensus is the right word for this


Aug 23, 2021

Highly recommend this Captain's Log format to remove the pressure that makes it hard to make writing a regular habit.

I heard an interview with Seth Meyers after he left SNL to host Late Night, and he talked about how SNL is 1x a week and his new show is every night, so he got a lot more chances, which meant he could be a lot less precious with the segments they produced.

I've thought about this a lot, and tried for a while to be less precious with writing, but still didn't write. I also created a few sites, but those, too, felt like the infrastructure supported very polished essays. Learning more about digital gardening and setting up Notion has helped clear out some of that pressure. Calling it a "Captain's Log" has helped get rid of most of the rest of it.

So now I'm changing the habit of what I'm doing when I'm writing. With the Captain's Log, I'm not "writing essays" as much as I'm "recording observations" and that alone makes it so much less stressful, it puts so much less pressure on it. And I have a fun playspace to put it in.

I've written 3 of these in under 90 minutes, and that is... not how I have typically written. At all! I would mull things over and over to try to find the right framing, the most astute analysis, the one that people would read and say "Wow, Alli has made a Contribution to the Discourse." And now I'm not doing that, I'm just like "Hey, I noticed this thing! Maybe it helps you!"

Next I have to work out how to mechanically distribute everything (just wired up some sweet-looking ChiliPepper.io forms and Notion to ActiveCampaign, next I'm figuring out how to format tweets and emails) and get over those fears of distributing the work.


Aug 18, 2021

What are some DOA ways to get your team to start using a new tool?

When I help teams implement EnjoyHQ, we talk some about the tool, but most of this work is about learning how to think about how qualitative data moves around the org, how decisions get made, how team members can encourage their colleagues to use a new tool.

Assumptions that fail.

  • "Well they have been told they have to use it."
  • "Everyone is really nice and seems very interested!"
  • "We are doing a big group demo so everyone will know what we're doing."

These ways fail. I know, because I've believed all 3 would be surefire ways to force a change, introduce a new tool, change a workflow - and every time I've failed. Large group demos with the CEO saying "we're using this now" with colleagues who knew, liked, and trusted me - it didn't work.

Change initiatives (and introducing a new software tool into a workflow is indeed a change initiative!) that do not connect the change to the team members' individual goals fail. And when it "seems like" they shouldn't, it's all the more surprising.

Tactics that get outcomes.

Instead of a top-down decree, instead of relying on good will, instead of generic group introductions:

  • Build a core group of champions to understand the change (i.e. new tool, becoming more customer centric); this might involve discussing the problem and creating urgency around it for longer than we might expect. (I typically mention EnjoyHQ 10-15 times before we begin the implementation.)
  • Connect the change to the needs and goals of people using the tool.
  • Do it one person at a time. Don't have time to talk with everyone one on one? Talk with a few people and help them talk to others.

And also: expect the change initiative to take 5-10x longer than you expect, expect to need 5-10x more touchpoints than you expect.


Aug 11, 2021

Why do researchers, copywriters, UX people sometimes struggle to get buy-in on initiatives?

We have all the skills for understanding how people think, how people make decisions, frameworks on frameworks on frameworks for creating experiences that help people become successful.

And yet, when we try to talk to our colleagues about a new tool or a new way of working, we often face a big wall.

Yes, to a certain extent some of the resistance we face, some of the intractable UX theater we encounter, won't change - all organizations are the shadow of one person, and that person might have a different paradigm they refuse to shift.

And yet, so often I talk with teams that are struggling to get buy-in on implementing EnjoyHQ or becoming more customer centric - but they have not stopped to understand their colleagues the way they would understand their customers.

  • "Hey PM, what's going on in your world? What are you working on? What's your process for getting the research we use and for applying it?"
  • "Hey VP Marketing, what's the strategy look like for the next 2 years? What do you need to execute it?"
  • "Hey researcher in another department, what does your workflow look like? Where do projects get crushed?"

I have observed many researchers who will ask these questions easily of our customers and users and research participants, and yet often do not ask them of their internal customers, aka their colleagues, direct reports, and superiors.

But when teams center their colleagues' needs at the outset of a change initiative, when they approach them like research projects, I see much greater buy-in.


Jul 1, 2021

Qualitative data does not seem to translate into power the same way that quantitative does.

Even at customer-led orgs.

I want to think about why.


Jun 20, 2021

Will "UX Librarians" be commonplace in the future?

I'm starting to think that "Qualitative Insights Manager" or "UX Librarian" (or possibly a librarian function under the broader umbrella of "Research Ops") will be a common role in the future, based on my observation of how much work it is to set up and maintain a qualitative research repository.

I've said this a few times on calls to overwhelmed PMs processing their new responsibility of managing a library of their org's insights, when all they wanted was just to have the insights accessible.

I suppose I am saying this in part to alleviate their fear, validate their responsibility even.

But really, do we let every user of the library decide how to organize the books? No, we do not - the stewards of the Dewey Decimal system or the Library of Congress system do that. But of course those are for published works, not raw data and in-progress findings.

But how much do the classifications change for much of the taxonomy? For many of the projects?

What type of governance is necessary for a qualitative insights repository? What is sufficient?


Jun 16, 2021

Introduce your qualitative insights repo to one person at a time

I consulted with a team recently that is getting ready to roll out their insights repository to their entire org by flipping the switch to turn on SSO access to the repository.

I advised strongly against doing that on account of having led 20-person demos of EnjoyHQ and then received ~20 pings on Slack immediately after the demo with questions and confusion.

(I actually do not recommend flipping a switch and making the repository accessible to everyone without training or an understanding of what qualitative data is and how it's used. Only people who are not antagonistic to research get access - or people who are being chaperoned by someone who will explain the raw and/or processed insights to their team.)

Much better to build out your repository for a small group, get really comfortable, understand how it supports you and your team, then roll it out to another few people - one at a time.

I explained the folly of my earlier days, and this team changed their strategy on the spot.


Jun 7, 2021

"Consultants are on the outside so they don't have the chance to go deep."

This is something I hear often on new-client intake calls. It is so fascinating because in almost every engagement I've had, I, the outsider, the one not bothering to show up for all-hands meetings, am somehow the one with the deepest understanding of the customer's JTBD.

I reacted in the moment to someone saying this to me recently with my chest puffed out: "Well, we'll see about that!" I say this on account of the fact that my engagements involve doing research, setting up research libraries, defining strategy based on that research, and then designing experiences based on that research - and then presenting to stakeholders who have often barely skimmed the research report. "I'm the only one doing research, so in one study I'll know your customers even better than you do, so there."

And lately the threads I'm tugging at in my work are about making myself less unique in understanding the customer. How do we mobilize the insights? How do we get the team to agree on how to use the insights? What role does qualitative data play in decisions, and how do we get it to play a bigger role?

