The 6 Types of Case Studies You'll See in Data Science Interviews
Not all case studies are the same. Here are the six types you'll encounter — what each one tests, where candidates stumble, and which companies favor them.
Not all case studies are the same. This sounds obvious, but you'd be surprised how many candidates prepare for one type and then freeze when they get a different one.
I've conducted hundreds of data science interviews at FAANG companies, and the case studies I've given (and seen others give) tend to fall into a handful of distinct categories. Each one tests a different set of skills, requires a different approach, and trips people up in different ways.
If you know what type you're dealing with, you can adapt your approach in the first 30 seconds. If you don't, you'll waste ten minutes figuring it out — and that's time you don't have.
Here are the six types you're most likely to encounter.
1. The metric investigation
What it sounds like: "Our signups dropped 30% this week. What's going on?"
This is the most common type by far. You're handed a metric that's moving in an unexpected direction — usually down, sometimes up — and your job is to figure out why.
What it's really testing: Can you systematically narrow down a problem? Can you form hypotheses, prioritize them, and use data to confirm or eliminate each one? Can you get to a root cause (or close to one) without thrashing around?
Where candidates stumble: They jump into SQL without a plan. They segment the metric by one dimension, see nothing interesting, segment by another, see nothing interesting, and suddenly 20 minutes have passed. The fix is to state your hypotheses upfront and use each query to explicitly test one.
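The "one query, one hypothesis" discipline can be sketched in a few lines. This is a toy illustration with made-up numbers, not a real dataset: the events and the "drop is concentrated on one platform" hypothesis are invented for the example.

```python
from collections import Counter

# Hypothetical signup events as (week, platform) pairs.
# Hypothesis under test: the drop is concentrated on one platform.
events = [
    ("prev", "ios"), ("prev", "ios"), ("prev", "ios"),
    ("prev", "web"), ("prev", "web"), ("prev", "android"),
    ("curr", "ios"), ("curr", "web"), ("curr", "web"), ("curr", "android"),
]

counts = Counter(events)
platforms = {p for _, p in events}

# Week-over-week change per platform: one computation, one hypothesis.
change = {p: counts[("curr", p)] - counts[("prev", p)] for p in platforms}

# The platform with the biggest decline is where to drill in next.
worst = min(change, key=change.get)
print(worst, change[worst])
```

The point isn't the code itself; it's that each step explicitly tests one hypothesis and either confirms it (drill deeper here) or eliminates it (move to the next one), instead of segmenting at random and hoping something jumps out.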
Companies that favor this type: Meta, DoorDash, Airbnb, Netflix
2. The experiment evaluation
What it sounds like: "We ran an A/B test on a new checkout flow. Here are the results. What do you conclude?"
You're given experiment results — sometimes clean, sometimes intentionally messy — and asked to interpret them. The question isn't "can you run a t-test" (though that might come up). The question is "can you figure out whether this experiment actually tells us what we think it does?"
What it's really testing: Statistical reasoning in a business context. Do you check for things like sample size, duration, novelty effects, and segment-level differences? Can you spot when a result that looks significant isn't trustworthy, or when a result that looks flat is actually hiding something?
Where candidates stumble: They take the results at face value. The experiment says Treatment beats Control, so they recommend launching. But the data has a Simpson's Paradox buried in it, or the test only ran for three days, or the "lift" is entirely driven by one small segment. The best candidates are skeptical by default.
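Here is a minimal sketch of the Simpson's Paradox trap with invented numbers: the treatment "wins" in aggregate only because its traffic skews toward the high-converting desktop segment, while it actually loses within every segment.

```python
# Hypothetical A/B results: (control_conversions, control_n,
#                            treatment_conversions, treatment_n) per segment.
results = {
    "desktop": (300, 1000, 560, 2000),  # high-converting segment
    "mobile":  (100, 2000,  40, 1000),  # low-converting segment
}

def rate(conv, n):
    return conv / n

# Aggregate comparison: treatment looks better.
c_conv = sum(v[0] for v in results.values())
c_n    = sum(v[1] for v in results.values())
t_conv = sum(v[2] for v in results.values())
t_n    = sum(v[3] for v in results.values())
print(f"overall: control {rate(c_conv, c_n):.1%} vs treatment {rate(t_conv, t_n):.1%}")

# Segment comparison: treatment is worse in every segment.
for seg, (cc, cn, tc, tn) in results.items():
    print(f"{seg}: control {rate(cc, cn):.1%} vs treatment {rate(tc, tn):.1%}")
```

A candidate who only looks at the aggregate recommends launching; a candidate who segments first catches that the "lift" is an artifact of unbalanced traffic allocation.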
Companies that favor this type: Airbnb, Spotify, Uber, LinkedIn
3. The metric design
What it sounds like: "We're launching a new Stories feature. How would you measure success?"
No dataset. No SQL. Just a business scenario and an open question about what to measure and why. This is a product-sense case disguised as an analytics question.
What it's really testing: Can you think about a product from a business perspective? Can you define metrics that actually capture success — not just engagement vanity metrics, but the right balance of goals and guardrails? Can you anticipate tradeoffs?
Where candidates stumble: They list a bunch of metrics without prioritizing. "We should track DAU, WAU, MAU, time spent, sessions, retention, revenue..." That's not a measurement framework. That's a list. The interviewer wants to see you pick a primary success metric, explain why, and then identify 2-3 guardrails that protect against unintended consequences.
Companies that favor this type: Meta, Google, Spotify, Instagram
4. The funnel analysis
What it sounds like: "Our conversion rate from signup to first purchase has been declining. Here's the data."
You're given data on a multi-step user flow and asked to figure out where things are breaking down. This is related to the metric investigation, but the structure of the problem is different — you're working through a sequence of steps rather than segmenting a single metric.
What it's really testing: Can you decompose a complex metric into its component parts? Can you identify which stage of the funnel is responsible for the overall change? Can you distinguish between volume problems (fewer people entering the funnel) and conversion problems (more people dropping off at a specific step)?
Where candidates stumble: They look at the overall conversion rate and try to explain it as a single thing. But a funnel is a chain of steps, and the answer is almost always in a specific step, not the aggregate. The first move should always be to break the funnel down stage by stage and find where the drop is concentrated.
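The stage-by-stage decomposition looks something like this. The funnel counts are made up for illustration; the point is that computing per-step conversion rates localizes the drop instead of treating the aggregate rate as one number.

```python
# Hypothetical funnel counts for two periods (users reaching each stage).
funnel_before = {"signup": 10000, "add_to_cart": 4000, "checkout": 2000, "purchase": 1600}
funnel_after  = {"signup": 10000, "add_to_cart": 3900, "checkout": 1950, "purchase": 1170}

def step_rates(funnel):
    """Conversion rate between each pair of consecutive funnel stages."""
    steps = list(funnel.items())
    return {
        f"{a} -> {b}": nb / na
        for (a, na), (b, nb) in zip(steps, steps[1:])
    }

before, after = step_rates(funnel_before), step_rates(funnel_after)
for step in before:
    print(f"{step}: {before[step]:.0%} -> {after[step]:.0%}")
```

In this made-up example the overall conversion fell from 16% to 11.7%, but the decomposition shows the first two steps barely moved; nearly all of the decline is concentrated in checkout-to-purchase, which is where the investigation should go next.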
Companies that favor this type: DoorDash, Shopify, Airbnb, Wayfair
5. The growth or strategy case
What it sounds like: "Our user growth in Brazil has stalled. How would you approach this?"
This is the most open-ended type. There may or may not be data involved. The interviewer wants to see how you think about a business problem — how you'd frame it, what data you'd want, what levers you'd consider pulling.
What it's really testing: Business acumen and structured thinking. Can you break a vague problem into concrete, testable pieces? Can you identify the right questions to ask before jumping to solutions? Do you understand how growth works — acquisition, activation, retention, monetization — and where to focus?
Where candidates stumble: They go too broad or too narrow. Too broad: "We should improve the product, increase marketing spend, and expand to new channels." That's not an approach, that's a brainstorm. Too narrow: "Let's A/B test the signup button color." That's a tactic, not a strategy. The right level is something like: "First I'd want to understand whether this is an acquisition problem or a retention problem, because the playbook is completely different for each."
Companies that favor this type: Meta, Google, Uber, DoorDash
6. The take-home / exploratory analysis
What it sounds like: "Here's a dataset. You have 48 hours. Come back with your findings."
You get a dataset — sometimes with a specific question, sometimes with just "find something interesting" — and a deadline. No live interviewer watching you. Just you, the data, and a blank page.
What it's really testing: Everything at once. Can you explore data independently? Can you find meaningful patterns without being guided? Can you structure your findings into a clear, compelling narrative? Can you present results to a non-technical audience?
Where candidates stumble: Two failure modes. First, they over-engineer it — building elaborate models and fancy visualizations when the interviewer just wants clear thinking and a good story. Second, they under-scope it — they run a few queries, write up some surface-level observations, and call it done. The bar here is a polished analysis that shows genuine curiosity and a logical narrative arc, not a Jupyter notebook full of code dumps.
Companies that favor this type: Shopify, Capital One, Stripe, smaller startups
How to prepare for all of them
You don't need a completely different playbook for each type. The general framework — clarify, plan, define metrics, explore with purpose, conclude, recommend, communicate — applies to all of them. What changes is where you spend your time within that framework.
For investigations and funnel analyses, you spend most of your time in the exploration phase — segmenting, isolating, drilling down. For metric design and strategy cases, you spend most of your time in the planning and framing phase — defining what to measure and why. For experiment evaluations, you spend most of your time in the synthesis phase — interpreting results and checking your assumptions.
The best way to build this muscle is to practice each type at least a few times, so that when you sit down in an interview and hear the prompt, you immediately recognize what kind of case you're dealing with and adjust accordingly. That recognition — "oh, this is an experiment evaluation, not an investigation" — saves you from the ten minutes of floundering that sinks most candidates.
Know the types. Practice each one. And when you sit down in that interview, you'll know exactly where to start. (Rabbit Hole has cases across all six types, if you're looking for a place to get your reps in.)
Ready to practice?
Apply these concepts on realistic case studies with real datasets.
Browse Case Studies