How Thumbtack Revamped Its Contact Center Metrics for 2019
The biggest conundrum of the lead generation business model is how to keep your paying customers happy while also appeasing the non-paying customers who use your service.
At digital marketplace Thumbtack, a unicorn startup valued at $1.3 billion, contractors sell services such as dance lessons, DJ services, and wedding officiating through a paid subscription that provides them with leads on local gigs.
Balance the needs of different customer segments
Thumbtack’s customer base is split into two camps, buyers and sellers, each with its own expectations and personas. “Customers” access listings for free, while “pros” pay to interact with customers.
Setting the right metrics for customer support was key to maintaining satisfaction on both sides and to encouraging buyers and sellers to keep using a service whose effectiveness hinges on a large volume of users interacting at all times.
“We want to deliver impactful experiences that help our customers and pros find help and achieve success on our platform,” Chris Wardle, Thumbtack’s head of global support, said in the CCW Digital online event “Contact Center Metrics for 2019 and Beyond.”
As businesses pivot from traditional contact center metrics like speed and efficiency toward “softer,” less easily measured yardsticks like CSAT, organizations including Zappos (which set the record for the longest customer service call), Pier 1 Imports, and Thumbtack have realized these KPIs are not one-size-fits-all. Each has developed organization-specific metrics of its own, tied to value creation and the bottom line.
Customer satisfaction doesn't guarantee loyalty
Thumbtack charges a contractor, or “pro,” each time they send or receive an email from a lead, so a pro’s lifetime value is commensurate with how frequently they use the site. For its first several years in business, Thumbtack tracked CSAT as a primary metric, “but what we did find after a deep dive was that it was poorly correlated to a pro or customer’s future use of Thumbtack,” Wardle explains.
In fact, the data showed that users who reported high satisfaction went on to use the platform less, while those who gave the bottom two scores ramped up their use. The finding pointed to a crack in Thumbtack’s business model: paying pros would take relationships with established leads offline, so the happiest, best-supplied pros were the ones most likely to abandon their subscriptions.
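As a concrete illustration, here is a minimal sketch of how such a check might look, written in Python with hypothetical column names; the article does not describe Thumbtack’s actual analysis:

```python
import pandas as pd

# Hypothetical export: one row per surveyed user, with the CSAT score
# they gave (1-5) and their platform activity in the following 90 days.
# File and column names are illustrative, not Thumbtack's.
df = pd.read_csv("csat_vs_future_usage.csv")

# Spearman rank correlation is a sensible first check, since CSAT is an
# ordinal 1-5 scale rather than a continuous measure.
corr = df["csat_score"].corr(df["sessions_next_90d"], method="spearman")
print(f"CSAT vs. future usage (Spearman): {corr:.2f}")

# The split the article describes: top-box respondents went on to use
# the platform less than bottom-two-box respondents.
top_box = df.loc[df["csat_score"] == 5, "sessions_next_90d"].mean()
bottom_two = df.loc[df["csat_score"] <= 2, "sessions_next_90d"].mean()
print(f"Avg future sessions: top box {top_box:.1f}, bottom two {bottom_two:.1f}")
```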
“It really challenged our thinking, so we wanted to find a metric that would re-evaluate the ultimate purpose of our customer support teams.” That purpose was retention of paid subscribers, measured by a metric Wardle calls “likelihood to continue.”
“It allows our agents to have more of a sense of value creation in each interaction they’re having,” he adds.
Your contact center metrics should be customized to your business
Diverting agent focus away from CSAT allowed for a more nuanced view of actual customer satisfaction, because a happy (and profitable) customer is one who continues to do business with you, not one who writes a single glowing review and then bolts.
“It’s taking a step further where not only are we leaving you satisfied, but we want you to use the platform more than you’re currently using it,” explains Wardle. In May, Thumbtack introduced a new feature called Instant Match, which automates the bidding process for job requests by providing instantaneous quotes to the contractors deemed best fit.
It’s among a raft of new features launched this year that have delighted customers while incensing Thumbtack’s paid subscribers: pros now pay upfront for a matchmaking service that isn’t guaranteed to result in a gig, rather than being charged each time they choose to correspond with a prospective customer.
Users of the site allege that gigs go to the pros willing to pay for the most matches, rather than those with the highest reviews and best service. Thumbtack’s website does not mention this, stating only: “You pay when a new customer contacts or hires you,” but angry posts on the Thumbtack Community forum suggest otherwise.
Employee reviews on Glassdoor.com are overwhelmingly positive, as are most customer reviews on ConsumerAffairs.com, which presents an intriguing dichotomy where delighting one set of customers (or stakeholders) comes at the expense of the other.
Agents should be involved in creating the new metrics
While Thumbtack’s pricing model might need some ironing out, reviews from its customer support agents on Glassdoor and Indeed are predominantly five-starred. In overhauling its contact center metrics, Wardle made sure agents were involved by hosting focus groups with them and nominating a representative from each team as a designated “change partner.”
“We didn’t want to necessarily make it seem like they were cheerleading for this change,” he says. “We wanted to partner with our agents and say, ‘This is something we want you to be a big part of.’”
Thumbtack developed four complementary key metrics with both agents and customers in mind. In addition to “likelihood to continue,” the company measures cases per hour, refunded dollars per case, and a survey push rate score.
“Before we implemented refunded dollars per case as a metric, we were just giving credits and refunds to protect the customer experience,” says Wardle, “because we found that it had a higher correlation to CSAT.”
After it began tracking this metric, Wardle says, the company avoided almost half a million dollars in lost revenue over a three-month period.
“Customer support centers are commonly known as cost centers, but that doesn’t mean they don’t create value for the business.”
This combination of metrics allows for a bird’s-eye view of department-wide quality assurance as well as individual agent performance. An agent with few cases per hour but a high customer “likelihood to continue,” for example, is understood to be taking the time to deliver better service, but can be challenged to “do things at a slightly faster pace.”
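As a rough sketch of how those four numbers might roll up into a per-agent scorecard, consider the following Python example; the formulas and thresholds are illustrative assumptions, not Thumbtack’s actual definitions:

```python
from dataclasses import dataclass

@dataclass
class AgentStats:
    """One agent's raw support numbers for a review period (hypothetical)."""
    name: str
    cases_handled: int
    hours_worked: float
    dollars_refunded: float
    surveys_pushed: int            # post-case surveys sent to customers
    likelihood_to_continue: float  # avg survey response, assumed 0-10 scale

    @property
    def cases_per_hour(self) -> float:
        return self.cases_handled / self.hours_worked

    @property
    def refunded_dollars_per_case(self) -> float:
        return self.dollars_refunded / self.cases_handled

    @property
    def survey_push_rate(self) -> float:
        return self.surveys_pushed / self.cases_handled

agent = AgentStats("sample_agent", cases_handled=120, hours_worked=80.0,
                   dollars_refunded=340.0, surveys_pushed=96,
                   likelihood_to_continue=8.7)

# The pairing described above: a strong retention signal but a slow pace
# suggests great service that could be delivered a bit faster.
if agent.likelihood_to_continue >= 8.0 and agent.cases_per_hour < 2.0:
    print(f"{agent.name}: strong retention; coach toward a faster pace")
```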
Some problems are caused by metrics, not agents
Such an approach is indicative of a larger industry trend: contact center managers are realizing they can’t judge an agent on rigid metrics like average handle time, or rate them on just a few calls.
In fact, the big-picture mindset holds that when an agent posts unsatisfactory numbers, it isn’t necessarily the agent’s fault; the numbers could point to a structural problem like a spotty knowledge base, an agent support interface that’s difficult to use, or management prioritizing the wrong metrics.
For information on how to turn your contact center metrics into actionable change, read our Special Report on Actionable Analytics.