Solve for the Harder Problem [Interview with Bilal Mahmood]
Bilal Mahmood is cofounder and CEO of ClearBrain, a Y Combinator-backed predictive analytics company used by firms like Chime Bank. He’s also a good friend of mine and someone I greatly admire for his resourcefulness and resilience (important traits for any founder!).
Bilal’s company recently launched a product called Causal Analytics, which uses a new algorithm that can automatically distinguish causation from correlation without running an experiment. I thought it would be a good opportunity to ask him how they identified the need for this product, what problems they faced building it, and what they’ve learned from the experience so far.
Even if you don’t really care about analytics or data or AI, this will still be an interesting interview in terms of building a great product and navigating the challenges of entrepreneurship. I’m still thinking about his 7 lessons as they apply to my own company, and I imagine you’ll find them useful as well.
JS: You’ve been building products in data analytics for some time now, starting back at Optimizely. Maybe we can start there — what were you working on then?
BM: I started as the PM of the Data Science team at Optimizely, an A/B testing platform. Data science at Optimizely meant analyzing product utilization and forecasting key goals.
JS: So you’re at Optimizely and you’re helping the company understand its own business through your analysis of the data. Where did this lead you?
BM: Working on data analysis at Optimizely helped me see a bigger opportunity. We would build these predictive models for our marketing and sales teams.
But the next question was always “Why? Why are my users going to churn? Why do users go from low probability to high probability?” Etc.
This is a super hard problem. Understanding the why behind an action requires understanding what causes it in the future, versus what was merely correlated with it in the past. The tools at our disposal at the time, Google Analytics and Chart.io, were insufficient for distinguishing between the two.
JS: That’s interesting. Google Analytics is used by millions of websites. Why isn’t it good enough to handle this?
BM: Traditional analytics tools can only tell you about things that happened in the past. They analyze behaviors that occurred prior to someone doing a specific action – signup, purchase, etc.
And they then use those behaviors as a proxy. Like – “Okay, people do this more before they sign up, so maybe it’s causing them to sign up.”
That’s not really true. That’s correlation, not causation.
JS: So then you started ClearBrain, to solve this problem of predicting causation?
BM: Not immediately.
When we first started ClearBrain 3 years ago, I didn’t actually know how to completely solve this problem. I did know that it needed two components – a technology to automatically predict any outcome, and an algorithm that could specifically predict causation.
I focused on the technology first. We built a custom domain-specific language to translate any data type into a machine readable format. Effectively a universal schema for machine learning.
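Bilal doesn’t walk through the DSL itself in this interview, but the idea of a “universal schema” is roughly that any raw event, whatever its shape, gets normalized into a small set of canonical feature types that a model can consume. Here’s a toy sketch of that idea (all names hypothetical, not ClearBrain’s actual DSL):

```python
from dataclasses import dataclass
from typing import Union

# Toy sketch of a "universal schema" for ML (hypothetical, not ClearBrain's
# actual DSL): every raw event field is normalized into one of a few
# canonical feature types that any downstream model can consume.

@dataclass
class Feature:
    name: str
    kind: str                  # "numeric" or "categorical"
    value: Union[float, str]

def normalize(event: dict) -> list:
    """Map an arbitrary analytics event into typed, model-ready features."""
    features = []
    for key, raw in event.items():
        if isinstance(raw, (bool, int, float)):
            features.append(Feature(key, "numeric", float(raw)))
        else:
            features.append(Feature(key, "categorical", str(raw)))
    return features

# Two very differently shaped events reduce to the same schema.
print(normalize({"page": "/pricing", "duration_sec": 42, "signed_up": True}))
```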
JS: I’ll be honest, that sounds a lot like a technology searching for a problem, no?
BM: Haha, yeah, it did kind of snowball into that.
It is a super cool technology, and we even filed a patent on it. But I’d say it’s also the first mistake I made without realizing it.
Startup wisdom always says “Solve a problem, not a technology.” I did think I was solving a problem by automating machine learning. But it was my own problem, not a customer’s problem.
Lesson #1: Don’t just invent technology—solve a customer’s problem
Automated machine learning by itself wasn’t useful. It’s only useful if it solves a customer pain point.
JS: So how did you find the customer pain point for this technology?
BM: I reflected back on the inspiration for starting the company in the first place.
The inspiration again was that while I was at Optimizely, our marketing and sales teams were asking which users were most likely to sign up, and also why they were most likely to do so.
I still thought the latter question was too hard to answer, so I focused on applying the ML technology we built to help with the former.
JS: That sounds like lead scoring. Predicting which users were going to do a certain action. That was the new direction?
BM: Effectively yeah. But that sort of became my second mistake: market size.
We had converted our technology into a product which we called predictive audiences. We used our automated machine learning platform to enable any business stakeholder to build any audience by its probability to sign up, purchase, etc. – any conversion event.
But when we went out into the market, only the largest companies were buying it.
JS: That doesn’t sound like a bad thing.
Lesson #2: Make sure the customer problem you are solving has a large market
BM: Well, it was great at first. Big companies were, and still are, paying us six-figure contracts.
But selling to large companies has its disadvantages as a startup. The sales cycle is slow. They engage infrequently. That makes the time to iterate on your product very slow.
As a startup, time and velocity are key. If you’re going to be enterprise-first, make sure you’re able to convince a VC to give you a boatload of cash upfront.
In our case, we realized that while our predictive audiences / scoring product was useful to some large companies, it was too advanced a use case for most. But it took us a very long time to figure this out, given the slow sales cycle of enterprise sales.
JS: So how did you expand the market opportunity of your product?
BM: At first, we tried to re-sell our existing product to smaller companies. But that didn’t work either.
You can’t just re-sell the same product to two different customer verticals. Companies of different sizes have different needs and buying motions, even if they’re trying to solve the same problem.
Optimizely, for instance, shut off its self-serve plan because they realized their A/B testing product was more successful for companies with more traffic. Their mistake was never building a new self-serve product, but that’s a topic for another time.
Lesson #3: Have both self-serve and enterprise products to increase your market potential and speed up learning
JS: So it sounds like having both a self-serve product and an enterprise product is important, in your opinion.
BM: Yes. Look at the most successful companies in the last year. Slack, Stripe, Zoom. All of them had massively successful IPOs or are multi-billion-dollar businesses. And all of them have served both self-serve and enterprise customers.
Having a self-serve product enables you to learn faster from your customers and proves the product can make money. And having an enterprise product enables you to make the actual money. Even if 80% of your revenue comes from 20% of your customers, the customers in the bottom 20% of revenue can grow into your top 80%.
JS: So we should just copy Slack and Stripe’s playbooks?
BM: Well, another interesting thing about each of these companies is that their underlying products were themselves not new. Slack is a better HipChat. Stripe a better PayPal. Zoom a better WebEx.
That sparked a new insight for me: most of the new wave of successful startups (in SaaS at least) aren’t inventing something wholly new, but simply re-inventing an existing product with at least one differentiated value prop. It also means the market research is already done for you: the existing product proves the market is large enough.
Lesson #4: Really successful startups often look like existing products—just with one major tweak or update
JS: How did you take that lesson forward with ClearBrain?
BM: I assessed which billion-dollar products already exist out there in analytics: Adobe Analytics, Google Analytics.
It was already clear that those were multi-billion-dollar products with an identified market and customer problem. So rather than try to build a single feature of that product experience (i.e. predictive audiences), I decided to rebuild the whole platform.
JS: What was your differentiation though?
BM: I thought back to why I started ClearBrain in the first place. It was because the traditional analytics tools I was trying to use could not answer the question of “Why” – namely, the problem of causation vs correlation.
It was clearly such a hard problem that no one else had solved it. But solving that hard problem is what would make us both differentiated and defensible.
Lesson #5: Solving for the harder problem gives you differentiation and defensibility
JS: But you said before you didn’t know how to solve the problem of causation vs correlation?
BM: Yep, and I still didn’t.
But that was another important lesson I learned as a founder. My job wasn’t always to come up with solutions. It was to identify problems, and to enable (and listen to) my team’s solutions.
The solution to solving the problem of predicting causation in turn came from one of our engineers. She had looked up older statistical techniques that had been used in the medical field to predict treatment effects. She proactively assembled a lunch & learn, and taught us about concepts like propensity score matching. My cofounder in turn recognized they had applied similar techniques while at Google Ads, but at a larger scale.
It was a turning point in our company. We realized we could apply these old techniques, using our auto-ML technology we’d already built, to rebuild an analytics platform that could actually predict causation vs correlation.
Lesson #6: A founder’s job is to describe the problem, not necessarily the solution. Your team is often much smarter than you to actually solve the problem.
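For readers unfamiliar with the technique Bilal’s engineer surfaced, here’s a minimal sketch of propensity score matching on simulated data (an illustration of the general method, not ClearBrain’s implementation): model each user’s propensity to perform a behavior, match “treated” users to similar “untreated” users, and compare their outcomes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Minimal propensity score matching sketch (illustrative, not ClearBrain's
# production algorithm). X: user covariates, t: whether the user performed
# a behavior (the "treatment"), y: outcome (continuous for simplicity).
rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 3))
t = (X[:, 0] + rng.normal(size=n) > 0).astype(int)      # confounded treatment
y = 0.3 * t + X[:, 0] + rng.normal(scale=0.5, size=n)   # true effect = 0.3

# 1. Model the propensity to be "treated" given covariates.
propensity = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]

# 2. Match each treated user to the control user with the nearest propensity.
treated, control = np.where(t == 1)[0], np.where(t == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(propensity[control].reshape(-1, 1))
_, idx = nn.kneighbors(propensity[treated].reshape(-1, 1))
matched_control = control[idx.ravel()]

# 3. The treated-vs-matched-control outcome gap estimates the causal effect;
#    a naive mean difference would be inflated by the confounder X[:, 0].
att = (y[treated] - y[matched_control]).mean()
print(f"Estimated effect: {att:.2f} (true effect 0.30)")
```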
JS: So how does predicting the “Why,” or causal effects, actually work? Isn’t part of the challenge of AI in general that sometimes it’s not clear why it’s operating in a certain way? Like with AlphaGo (an AI that beat the world’s top human Go players), even the programmers weren’t always clear on why the program was acting in a certain way, except that it increased the probability of a win at the end.
BM: Exactly. Machines make decisions by taking all these different behaviors, analyzing correlations and manipulating things to get to one big right answer.
We needed a new algorithm that could use the statistical techniques our engineer had proposed, and apply them at scale in the manner Google Ads did.
But again, that was the benefit of being less top-down about product direction: we set higher-level objectives for what we were trying to do, and the team filled in the details on how to solve it.
Our ML engineer spent six months researching the problem and found ways to automate causal methodologies like observational studies at scale. We’re effectively simulating the effect of an A/B test on thousands of variables in seconds, to predict the causal effect of every page, button click, and action on your website. We go into more detail about the methodology on our blog.
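Their blog has the actual methodology; as a rough illustration of what “simulating an A/B test observationally across many variables” could look like, here’s a toy sketch that scores several behaviors by estimated conversion lift using inverse propensity weighting (my own simplified stand-in, not their algorithm):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy sketch: rank each tracked behavior by its estimated causal lift on a
# conversion goal, using inverse propensity weighting (IPW) as a stand-in
# for an A/B test. Illustrative only; see ClearBrain's blog for the real method.
rng = np.random.default_rng(1)
n, behaviors = 10_000, ["viewed_pricing", "opened_email", "clicked_ad"]
X = rng.normal(size=(n, 2))                       # shared user covariates
T = {b: (X[:, 0] + rng.normal(size=n) > i).astype(int)
     for i, b in enumerate(behaviors)}            # confounded behaviors
true_lift = {"viewed_pricing": 0.20, "opened_email": 0.05, "clicked_ad": 0.10}
p_convert = 0.1 + 0.05 * X[:, 0] + sum(true_lift[b] * T[b] for b in behaviors)
y = (rng.random(n) < np.clip(p_convert, 0, 1)).astype(int)

def ipw_lift(t, y, X):
    """Estimate E[y | do(t=1)] - E[y | do(t=0)] by reweighting observed users."""
    e = LogisticRegression().fit(X, t).predict_proba(X)[:, 1]
    e = np.clip(e, 0.01, 0.99)                    # avoid extreme weights
    return (t * y / e - (1 - t) * y / (1 - e)).mean()

# Rank behaviors by simulated conversion lift, like a causal "PageRank".
lifts = {b: ipw_lift(T[b], y, X) for b in behaviors}
for b, lift in sorted(lifts.items(), key=lambda kv: -kv[1]):
    print(f"{b}: estimated lift {lift:+.3f}")
```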
JS: So if I understand it correctly, the result of your causal algorithm is that you can rank the most important insights on a website by their causal effect, rather than by correlations.
BM: We like to say that what we’re doing now is actually kind of like Google PageRank for analytics.
PageRank helped Google deliver the best content to users based on the relevance of that content to their search queries. With Causal Analytics, you can enter a goal like signup or purchase, and we rank each behavior, ad campaign, and email open on your website by its relevance to that goal. Relevance in the context of Google is determined by the weight of links, while in ClearBrain it is determined by simulated conversion lift.
JS: How do you actually know your results are causal and not correlated though?
BM: A couple ways.
In the background we run tests via synthetic data analysis. We create artificial data using Gaussian models and randomly perturb it to see if our causal projections can accurately predict the observed changes.
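A toy version of that kind of synthetic-data check might look like the following (my sketch of the idea, not ClearBrain’s test harness): generate Gaussian data with a known injected effect, vary the effect, and confirm the estimator recovers it.

```python
import numpy as np

# Toy synthetic-data check (a sketch of the idea Bilal describes, not
# ClearBrain's actual harness): build Gaussian data with a *known* injected
# causal effect, perturb that effect, and verify the estimator recovers it.
rng = np.random.default_rng(42)

def make_dataset(true_effect, n=20_000):
    confounder = rng.normal(size=n)
    t = (confounder + rng.normal(size=n) > 0).astype(int)  # confounded "treatment"
    y = true_effect * t + confounder + rng.normal(scale=0.5, size=n)
    return confounder, t, y

for injected in (0.0, 0.25, 0.5):        # perturb the ground-truth effect
    c, t, y = make_dataset(injected)
    # A simple regression adjustment (controlling for the confounder) stands
    # in for the real causal estimator in this sketch.
    A = np.column_stack([np.ones_like(y), t, c])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    print(f"injected effect {injected:.2f} -> recovered {coef[1]:.2f}")
```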
But in the end, the best proof our insights are causal is the results our customers see. Our customers have seen between 40% and 100% lift in their conversions by productionizing the insights we recommend – which wouldn’t be possible if our insights were merely correlational.
Lesson #7: You know you’re on the right path when your customers tell you what they want, instead of you asking them what they want.
JS: How has the concept of causal analytics been received?
BM: Really well. It’s funny, because a lot of times founders will ask how you know if an idea has some semblance of product-market fit. The unsatisfying answer is often: you know it when you see it.
In our case, after 3 different iterations over 3 years, we started to feel it last month when we launched Causal Analytics into an open beta. Thousands of users have signed up so far. They’re asking us to build specific features, instead of us trying to guess what they need. It’s a clear feeling.
JS: So to companies trying to create their own product category today, what’s the high-level advice you’d give them?
BM: Amusingly, I’d give the same advice a lot of other founders gave me; it just took a long time to sink in. Namely:
- Don’t build a technology, instead solve for a customer problem first.
- Make sure that problem has a large enough market to sell into.
- It’s easier to validate markets as a startup by releasing a self-serve product, as it increases your pace of learning.
- But it’s also important to have a path to an enterprise offering which actually generates revenue.
- Your self-serve and enterprise offerings should not necessarily be the same product.
- The irony of creating a new product category is that the most successful ones are just evolutions of an existing category.
- Copying an existing company leapfrogs a lot of the need for market research and validation.
- It’s important to still have a single differentiated wedge to stand apart from the competition.
- That differentiation can come from solving a really hard problem. Always solve the harder problem.
- Your job as a founder is to identify problems, not solutions. Your team is better at that.