What is good user adoption for my B2B SaaS product? (How to use data as a product manager)
The short answer is that there is no way to know unless you do the work to figure it out. Every B2B app is different in scope, and not all user personas are created equal. You will never find an easy and accurate answer to this online by Googling it. Thankfully, it’s not witchcraft either and you can definitely figure it out. In this post, I will explain how to use data to answer questions you have as a product manager and how to define effective key metrics for your product from scratch.
Pre-requisites
Before you start, you will need to make sure you have access to data sources for your application, or that you have a work bestie who does. This may be your application database, or it might be CRM tools like Salesforce or HubSpot, customer success tools like ChurnZero or Gainsight, or analytics tools like Google Analytics.
You will need a good list of current clients
You will also need a good list of canceled clients for which data is still available (this may be optional UNLESS you are trying to come up with KPIs like “good user adoption for an account is x”)
You will need some basic pivot table and chart skills in Excel
Step 1 - What is the question you want to answer?
Before you can do much of anything with data, you need to know the question you want the data to answer for you. As an example, in one of my past roles, we wanted to know more about client retention. What are the red flags that a client is likely to cancel? What are successful clients doing that unsuccessful ones aren’t?
Step 2 - Come up with some hypotheses
Next, you need to come up with some educated guesses about what you think the answer to your question might be. Come up with some guesses of your own, and I also recommend going to your client-facing team members to solicit theirs. Customer success managers, professional services, and technical support are all good stakeholders to consult for this exercise.
In our client retention study, we came up with guesses like:
Successful clients have more end user engagement
Successful clients have more admin user engagement
Unsuccessful clients encounter more bugs or defects than successful clients
Successful clients are more fully utilizing key features of the product
Successful clients launch faster than unsuccessful ones
Successful clients have more touch points with their Customer Success Manager
Step 3 - Determine which hypotheses you can actually analyze based on the data that you have
Inevitably, there will be some theories you come up with that you simply don’t have the data to prove or disprove. Maybe it’s not something you track today. Maybe you can’t get cooperation from the owner of the data. In any case, don’t let this stop you. Focus on the theories you CAN prove. Put the rest on your to-do list, then circle back at the end of the project and add them to your data collection plan.
In our study, we had access to the data for these:
Successful clients have more end user engagement
Successful clients have more admin user engagement
Unsuccessful clients encounter more bugs or defects than successful clients
Successful clients are more fully utilizing key features of the product
But we didn’t have access to the right data for these:
Successful clients launch faster than unsuccessful ones
Successful clients have more touch points with their Customer Success Manager
Step 4 - Analyze
Finally, it’s time to actually look at your data and determine whether your hypotheses are correct. Here, we will look at two example hypotheses from our list, one where we were right and one where we were wrong, and break down how we arrived at our conclusions.
Example 1 - Successful clients have more end user engagement
For this theory, we used login data from our database that we had been collecting for years so that we could provide our customers with analytics on their own account usage.
We used Excel to calculate the average number of logins per day for a canceled account for the 3 years prior to cancellation.
We then calculated the average number of logins per day for active accounts over the last 3 years.
We then took a subset of our “best” customers as identified by our CSMs and looked at the average number of logins per day over the last 3 years.
From there, we were able to determine that our theory was correct. Successful accounts had far more logins than canceled accounts.
Let’s say for the sake of this blog post that we found that canceled accounts had an average of 500 logins per day, active accounts had an average of 5,000 logins per day, and our best accounts had an average of 10,000 logins per day. This is not our real data because I do not want you to be tempted to use this as a guideline for your product. You need to do the work!
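If your login data is too big to be comfortable in Excel, the same calculation takes only a few lines of Python with pandas. The sketch below is just an illustration, assuming a hypothetical CSV export of login events with account_id, account_status, and login_date columns (the file and column names are placeholders, not something your system will have out of the box):

```python
import pandas as pd

# Hypothetical export: one row per login, with the account's id, its group
# ("canceled", "active", or "best"), and the date of the login. For canceled
# accounts, assume the export covers only the 3 years prior to cancellation;
# for the others, the last 3 years.
logins = pd.read_csv("login_events.csv", parse_dates=["login_date"])

# Logins per account per calendar day (days with zero logins are not counted,
# which slightly inflates the averages for sparsely used accounts)
daily = (
    logins.groupby(["account_status", "account_id", "login_date"])
          .size()
          .reset_index(name="daily_logins")
)

# Average daily logins per account, then averaged across each group
per_account = daily.groupby(["account_status", "account_id"])["daily_logins"].mean()
print(per_account.groupby(level="account_status").mean())
```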
Example 2 - Unsuccessful clients encounter more bugs or defects than successful clients
For our second theory, we used bug report data from our ticket triage tool (Jira).
We compared the average number of bugs reported per year over the 3 years prior to cancellation for our canceled customers with the average number of bugs reported per year over the last 3 years for current clients.
We actually found that this hypothesis was wrong. Active accounts reported far more bugs than canceled accounts.
We thought about this and concluded that we were not actually able to measure bugs encountered. What we were really measuring was bugs reported. Obviously, encountering bugs isn’t “good,” but maybe reporting bugs tells us something else about the client:
They are using the product more and therefore are more likely to come across issues
They are invested enough in the relationship to report problems they encounter
Thus, while our hypothesis was wrong, we were able to conclude that reporting bugs is a useful measure of client engagement.
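The comparison itself is the same kind of arithmetic as the login analysis. Here is a minimal sketch, assuming a hypothetical CSV export of bug tickets from your triage tool with account_id, account_status, and created columns (again, placeholder names):

```python
import pandas as pd

# Hypothetical export from the ticket triage tool: one row per bug report,
# with the reporting account's id, its group ("canceled" or "active"), and
# the date the ticket was created. Assume the export is already limited to
# the relevant 3-year window for each account.
bugs = pd.read_csv("bug_reports.csv", parse_dates=["created"])
bugs["year"] = bugs["created"].dt.year

# Bugs reported per account per year
per_year = (
    bugs.groupby(["account_status", "account_id", "year"])
        .size()
        .reset_index(name="bugs_reported")
)

# Average bugs reported per year for each group of accounts
print(per_year.groupby("account_status")["bugs_reported"].mean())
```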
Step 5 - Share and act
Of course, all of this is for naught unless you actually do something with your newfound understanding. In our case, we created a report and accompanying presentation to share with our customer success team so that they could update their tools for measuring client health and their retention strategy. If their customer success tool shows fewer than 5,000 logins per day for a given client, that is a measurable retention risk. If it shows fewer than 500 logins per day, that is a huge red flag.
We were also able to use our end user login data to create some meaningful product metrics that we could manage toward in order to improve client retention. We wanted to see all of our clients increase their end user logins to the volume that our best clients were experiencing. This allowed us to have a KPI (an average of 10,000 logins per day) that we could work toward as we considered product roadmap initiatives to increase end user adoption of the product.
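If your customer success tool can consume a computed field, turning those numbers into a health flag is simple. The sketch below uses the fictional thresholds from this post; the cutoffs, the function name, and the labels are all placeholders for whatever your own analysis produces:

```python
# Fictional thresholds from the examples above; do the work to find your own!
RED_FLAG_LOGINS = 500       # below this, cancellation risk is severe
AT_RISK_LOGINS = 5_000      # below this, retention risk is measurable
TARGET_KPI_LOGINS = 10_000  # the average daily logins of our "best" accounts

def engagement_flag(avg_daily_logins: float) -> str:
    """Classify an account's health based on average daily end user logins."""
    if avg_daily_logins < RED_FLAG_LOGINS:
        return "red flag"
    if avg_daily_logins < AT_RISK_LOGINS:
        return "at risk"
    if avg_daily_logins < TARGET_KPI_LOGINS:
        return "healthy"
    return "best in class"

print(engagement_flag(4_200))  # -> "at risk"
```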
How to conduct a B2B software beta program with little to no budget
What does a scrappy product manager do when they’re launching a new product or complex feature but don’t have a big enough budget for UX testing consultants or A/B testing applications? How do you beta test your new feature with end users and get the validation you need that you’re on the right track prior to launch? This is of course not ideal, but if you work for a small company or an early-stage start-up, it’s probably a situation you will run into. Whatever you do, DO NOT just tell people internally that the feature is in beta and hope someone will eventually provide some feedback. That’s a good way to doom your new feature to launch failure.
In this post, I’ll give a real-world, step-by-step example of how to conduct an organized and effective beta test program in a B2B product launch scenario using tools you probably already have or can get for free.
I recommend running through these steps twice if possible: once less formally with a set of internal employee test subjects (I like to use subject matter experts from the professional services or technical support teams) and then again with a set of customer test subjects. Testing with internal team members first will help you smooth out the experience for your customer testers and hopefully leave them with a good first impression of your new feature. Ideally, you will want your internal team members to vet your beta testing scenarios and documentation to make sure they are easy to understand. They can also help you find the first wave of bugs and user experience issues to address so that your client end users don’t run into as many during their test.
Step 1 - Set goals
First, you need to determine what you want to get out of your beta test process. This will help guide your decision-making later on when you’re determining how you collect feedback and what kind of feedback to collect. Some examples of goals you may have for your beta period are: finding bugs, assessing usability of the application as certain tasks are performed, determining whether you have the right feature set, or all of the above.
I like to keep notes during the design and development process about questions we had or controversial decisions we made. Then I can revisit those as inspiration when I’m determining what I want to validate during my beta test period.
Step 2 - Design testing scenarios
I have found that if you just set users loose to do whatever they want with your new feature and then ask for feedback, the quality of the feedback is rather low, and it can be difficult to tease out any patterns across different test users. Instead, I recommend that you identify some common tasks that you would like your users to run through to validate whether or not you’ve accomplished your goals.
For example:
Goal: Assess the usability of my new widget that allows the user to design a marketing asset.
Tasks:
Navigate to the new widget
Create a new marketing asset
Add an image to the design
Add text to the design
Change text colors
Change background colors
Save the design
Export the design
Step 3 - Identify test users
Next, you will need to decide who should test your new feature.
I like to ask our customer success managers and other client-facing team members if they can make recommendations on who we should try to recruit. I request that they consider the following guidelines for who to recommend:
They should fit the persona you had in mind when you developed the feature
They should be an active and engaged user of the software
They should have an open and curious personality
Bonus points if they were one of the clients who asked for the feature to be built in the first place - these are often the most enthusiastic testers
I usually start an Excel spreadsheet or Google Sheet a few weeks or a month ahead of when I want to start the recruiting process and I share this around to the client-facing teams so that they can add good candidates along with their contact information.
If you have a ticketing system for collecting feature requests, like Service Desk, UserVoice, or Aha!, you can also search for users who requested similar functionality in the past and add them to your list.
You may even wish to target an entire user type such as “system admins.” In that case, skip right to the next step and send an email to that entire audience.
If you don’t have any existing users you can draw from, you can enlist the help of your marketing team. They may be able to help you find users in the right persona through LinkedIn advertising if they’re willing to share their budget with you. Alternatively, you can look for professional groups or forums targeting that persona and post there looking for interested participants. For example, if you’re looking for marketing researcher users, you could try the Reddit r/MarketingResearch forum. Read the posting guidelines first to confirm that this type of activity is allowed.
Step 4 - Recruit test users
If you feel really confident that you have the perfect list of test subjects from your CSMs, you can skip this step and proceed to Step 5.
Once you have a list of target users, create a screening survey to collect some preliminary information about them so you can be sure they are a good testing candidate. You can ask them questions to help you assess if they are the correct target persona and if they have the right background knowledge to complete the tasks you will assign to them.
I like to use Microsoft Forms or Google Forms for this. The survey should be quite short: no more than 5 questions.
Then enlist the help of your marketing team to draft a recruitment email. The email should contain the following information:
What you would like them to do
How long it will take them to do it
Why you would like them to do it
What’s in it for them
Sometimes all you can offer is that their feedback will shape the future development of a feature which will eventually provide value to them
You might be able to offer them a free subscription to that feature for some period of time
You might have some marketing swag lying around that marketing will let you give away. We used some really nice Yeti mugs with our company logo once, and those were very popular
If you have a little bit of budget, a gift card or charitable donation in their name is always nice
A link to your screening survey with a call to action to fill it out if they are interested in becoming a beta tester
A deadline by which you would like them to fill out the survey - I like to make it about a week away
Step 5 - Internal training and communication
Sometime just before your recruitment email is set to go out, assemble the customer-facing teams (client success, professional services, sales, technical support, etc.) and make sure they are aware of your beta test initiative and prepared to answer customer questions about it. There is nothing worse than being blindsided by a customer who knows more than you do about what your company is doing.
Step 6 - Documentation (optional)
If the feature you’re testing is very complicated and robust, you might wish to create some preliminary documentation about that feature for users to refer to during the test. However, if your goal is user experience testing, use documentation sparingly. If you’re trying to create an intuitive application and you want to know if you’ve done that successfully, you may not want users to go into the test with a lot of information up front.
Step 7 - Collect feedback
I like to divide my test subjects into two groups:
A small group of interviewees for a live, observed test
A larger group of testers who can provide quantitative feedback via a survey
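If your confirmed tester list is sitting in a spreadsheet, one quick way to make this split at random is a few lines of Python; the file name, column contents, and group size below are placeholders you would adjust to your own list:

```python
import pandas as pd

# Hypothetical export of the confirmed tester list (e.g. name, email, company)
testers = pd.read_csv("beta_testers.csv")

# Pick a small group at random for live, observed interviews;
# everyone else goes into the survey group
interviewees = testers.sample(n=5, random_state=42)
survey_group = testers.drop(interviewees.index)

interviewees.to_csv("interview_group.csv", index=False)
survey_group.to_csv("survey_group.csv", index=False)
```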
Group 1
For the smaller group, reach out via email (consider CCing their client success manager) to schedule a mutually agreeable time for the interview. Then develop an interview script based on the tasks you identified in Step 2. During the call, ask them to share their screen (using Teams or Zoom) so that you can observe them going through the test steps. When I conduct an interview, I usually invite a team member along to take notes on their observations and on things the user says during the interview so that I can give the user my full attention. Along the way, look for areas of UX friction where they cannot perform the task, and note any questions they ask about functionality that may not exist. Take note of any potential bugs you see them run into.
Group 2
For the larger group, design another survey in Microsoft or Google Forms. This survey will ask them to run through the testing tasks you identified in Step 2 and to answer follow-up questions about how it went.
For example:
Imagine you have been asked to design a new business card for your company using XYZ new feature.
Task 1: Navigate to the XYZ feature in the ABC application and create a new business card document.
How difficult was it to complete Task 1?
Did you encounter any issues completing Task 1? Please describe the issues you encountered.
Did the XYZ feature do everything you expected it to do?
Is there anything else you would like to tell us about Task 1?
…and so on
When the survey is ready, enlist the help of marketing to draft an email to your finalized list of participants. This email should once again contain the following information:
What you would like them to do
How long it will take them to do it
Why you would like them to do it
What’s in it for them
Sometimes all you can offer is that their feedback will shape the future development of a feature which will eventually provide value to them
You might be able to offer them a free subscription to that feature for some period of time
You might have some marketing swag lying around that marketing will let you give away. We used some really nice Yeti mugs with our company logo once, and those were very popular
If you have a little bit of budget, a gift card or charitable donation in their name is always nice
A link to your final test survey with a call to action to fill it out
A deadline by which you would like them to fill out the survey - I like to give them a couple of weeks for this if the test will take them a little bit of time
Consider setting up a reminder email to go out just before the deadline because people get busy and forget to walk through the test.
Step 8 - Act on feedback
Review your interview notes and survey answers for common themes. Create work items/tasks/tickets/product backlog items for your development team to address the most common areas of feedback. Make sure to use your product skills to prioritize the feedback accordingly. If needed, create a new roadmap initiative to add missing functionality identified by your beta users.
Step 9 - Make sure your testers feel the love
If you promised an incentive for participation, make sure you follow through on providing it.
If you implemented anything based on the testers’ feedback, it’s a great idea to reach out and let them know. You can even ask them to try it out and let you know if it met their expectations.