Interview Prep · March 31, 2026 · 5 min read

How Data Science Interviews Work at Uber

A detailed breakdown of Uber's data science interview process — why experimentation and causal inference dominate, and how to prepare for a marketplace that runs thousands of experiments at once.

Uber runs one of the most experiment-heavy data organizations in tech. At any given time, thousands of A/B tests are running simultaneously across rides, delivery, pricing, driver incentives, and dozens of other product surfaces. Data scientists at Uber aren't just supporting product teams — they're responsible for making sure those experiments actually work.

That emphasis on experimentation and causal reasoning shapes the entire interview process. If DoorDash interviews are about reasoning across a three-sided marketplace, Uber interviews are about reasoning under experimental complexity at massive scale.

The process at a glance

Uber's interview typically takes three to six weeks. The structure is consistent across most teams: a recruiter screen, a technical screen (one or two sessions), and a final onsite loop of four to five interviews, each lasting 45-60 minutes.

The recruiter screen covers role alignment and motivation — standard stuff. The real evaluation starts at the technical screen and continues through the onsite. Here's what each round looks like.

SQL and data analysis

Uber's SQL rounds use realistic, sometimes messy datasets. You might be asked to diagnose changes in trip completion rates, compare city-level performance across time periods, or compute cohort retention for a new driver segment.

The queries aren't designed to trick you. They test whether you can work with the kind of data you'd actually encounter on the job: large tables with imperfect schemas, time zones that matter, and business logic baked into the data model. Aggregation, segmentation, window functions, and trend analysis are the core tools here.

Interviewers evaluate both correctness and clarity. Can you structure a query that another data scientist could read and understand? Do you explain your approach as you build it? And critically — once the query returns results, can you say something interesting about what the data shows?
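To make the cohort-retention idea concrete, here's a toy pandas sketch (hypothetical schema and made-up numbers; in the interview itself you'd express the same logic in SQL with aggregation and window functions):

```python
import pandas as pd

# Toy trips data: driver_id and trip date (hypothetical schema).
trips = pd.DataFrame({
    "driver_id": [1, 1, 1, 2, 2, 3],
    "trip_date": pd.to_datetime([
        "2026-01-05", "2026-02-10", "2026-03-02",
        "2026-01-20", "2026-03-15",
        "2026-02-08",
    ]),
})

# Cohort = calendar month of each driver's first trip.
trips["month"] = trips["trip_date"].dt.to_period("M")
trips["cohort"] = trips.groupby("driver_id")["month"].transform("min")
trips["months_since"] = (trips["month"] - trips["cohort"]).apply(lambda d: d.n)

# Retention: share of each cohort still active N months after their first trip.
cohort_sizes = trips.groupby("cohort")["driver_id"].nunique()
active = trips.groupby(["cohort", "months_since"])["driver_id"].nunique()
retention = active.div(cohort_sizes, level="cohort").unstack(fill_value=0)
```

The equivalent SQL would use a `MIN(trip_date) OVER (PARTITION BY driver_id)` window to find each driver's first trip, then group by cohort month and month offset.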

Experimentation and causal inference

This is Uber's signature round, and it's where they set the highest bar. The experimentation round isn't just about knowing how A/B tests work — it's about understanding when they don't.

Uber's two-sided marketplace creates specific experimental challenges. When you change something for riders, it affects driver supply. When you change driver incentives, it affects rider wait times. These spillover effects mean that a naively randomized A/B test can give you misleading results — the control group is contaminated by the treatment's effects on the shared marketplace.

Expect questions about how to design experiments in this environment. Common topics include switchback experiments (randomizing by time period rather than by user, which is how Uber tests many algorithmic changes), CUPED (Controlled-experiment Using Pre-Experiment Data) for variance reduction, and multiple testing corrections when you're running thousands of experiments simultaneously. Corrections like Bonferroni (which controls the family-wise error rate) and Benjamini-Hochberg (which controls the false discovery rate) can come up.
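CUPED is easier to internalize with a small simulation. The sketch below (made-up data, simplified one-covariate form) subtracts out the part of the in-experiment metric that's predictable from pre-experiment behavior:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated experiment: pre-period metric X predicts in-experiment metric Y.
n = 10_000
x = rng.normal(10, 2, n)                     # pre-experiment trips per user
treat = rng.integers(0, 2, n).astype(bool)   # random 50/50 assignment
y = x + rng.normal(0, 1, n) + 0.1 * treat    # true treatment effect = 0.1

# CUPED adjustment: Y_cuped = Y - theta * (X - mean(X)),
# with theta = cov(X, Y) / var(X) (the OLS slope of Y on X).
theta = np.cov(x, y)[0, 1] / np.var(x)
y_cuped = y - theta * (x - x.mean())

raw_effect = y[treat].mean() - y[~treat].mean()
cuped_effect = y_cuped[treat].mean() - y_cuped[~treat].mean()

# Randomization makes the adjustment unbiased, while the metric's variance
# drops sharply -- the same effect is detectable with far fewer samples.
var_reduction = 1 - y_cuped.var() / y.var()
```

The point to articulate in the interview: CUPED doesn't change what you're estimating, only how noisy the estimate is, and the variance reduction scales with how well pre-period data predicts the metric.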

You don't need to derive these methods on a whiteboard, but you need to understand what problem each one solves and when you'd reach for it. If an interviewer describes a pricing experiment and asks "how would you test this?", they want to hear you think about interference, duration, power, and what could go wrong — not just "randomize users into treatment and control."
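The power-and-duration reasoning reduces to the standard two-sample size formula, sketched here with illustrative numbers (a hypothetical wait-time experiment; the constants are the usual normal-approximation critical values):

```python
from math import ceil
from statistics import NormalDist

def samples_per_arm(mde, sd, alpha=0.05, power=0.8):
    """Per-arm sample size for a two-sided z-test to detect
    an absolute effect of `mde` on a metric with std dev `sd`."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)            # ~0.84 for 80% power
    return ceil(2 * ((z_alpha + z_beta) * sd / mde) ** 2)

# E.g. detecting a 0.5-minute drop in mean wait time when sd = 5 minutes.
# Duration follows from dividing by eligible daily traffic.
n = samples_per_arm(mde=0.5, sd=5)
```

Halving the minimum detectable effect quadruples the required sample, which is exactly the trade-off interviewers want you to reason about when they ask how long an experiment needs to run.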

For senior roles, expect deeper dives into causal inference: difference-in-differences, instrumental variables, regression discontinuity. Uber has published extensively on their experimentation platform and causal inference work through their engineering blog. Reading that material before your interview is time well spent — and interviewers notice when candidates are familiar with it.
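Of these, difference-in-differences is the simplest to hold in your head: in the two-group, two-period case it's plain arithmetic. The numbers below are made up for illustration:

```python
# Toy diff-in-diff: one treated city, one control city, before/after a change.
treated = {"pre": 100.0, "post": 118.0}   # mean daily trips, treated city
control = {"pre": 90.0,  "post": 96.0}    # mean daily trips, control city

# DiD estimate: (change in treated) - (change in control). The control city's
# change stands in for what the treated city would have done absent treatment;
# this rests on the parallel-trends assumption, which you should name.
did = (treated["post"] - treated["pre"]) - (control["post"] - control["pre"])
```

Here the naive before/after comparison would credit the treatment with an 18-trip lift, but the control city drifted up by 6 on its own, so the DiD estimate is 12. Being able to state the parallel-trends assumption and how you'd probe it (pre-period trend plots, placebo periods) is what distinguishes senior candidates.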

Product and marketplace case

Uber's product case round gives you an ambiguous business problem and asks you to work through it analytically. The scenarios are marketplace-specific: reducing rider wait times in a specific city, understanding a drop in driver engagement, evaluating a new pricing model, or sizing an opportunity for a new product surface.

The interviewer isn't looking for a perfect answer. They're looking for structure. Do you start by clarifying the problem? Do you form hypotheses before jumping into analysis? Can you identify what data you'd need and what the most important questions are? Can you connect your analysis to a concrete recommendation?

Marketplace intuition matters here. Strong candidates think about both sides of the market in every scenario. If rider wait times are up, is it a supply problem (not enough drivers) or a demand problem (too many riders in one area) or a matching problem (the algorithm isn't routing efficiently)? Each of those has different data signatures and different interventions.

Behavioral

Uber's behavioral round covers the standard categories — collaboration, impact, handling ambiguity — with emphasis on how you work in a fast-moving environment with competing priorities. Data scientists at Uber often support multiple projects simultaneously and need to make judgment calls about where to spend their time.

Come prepared with stories about influencing technical direction, navigating disagreement with cross-functional partners, and dealing with situations where the data was inconclusive but a decision still needed to be made. Uber values pragmatism and velocity alongside rigor.

What actually matters

Uber's interview is, at its core, testing whether you can be a rigorous experimentalist who also ships. The bar on experimentation and causal inference is higher than most companies — if you're coming from a role where A/B testing means "split users 50/50 and check the p-value," you need to level up before interviewing here.

But rigor alone isn't enough. Uber is a company that moves fast and operates at massive scale. They want data scientists who can design a clean experiment and make a pragmatic recommendation when the data is imperfect. That balance — statistical rigor plus business judgment — is what separates candidates who get offers from candidates who don't.

Start your prep with Uber's published research on experimentation. Understand the mechanics of their marketplace. Practice working through product scenarios where the answer isn't clean. And make sure your SQL is sharp enough that you're not burning interview time debugging syntax when you should be interpreting results.

(Rabbit Hole — practice experimentation and analytics cases designed for marketplace environments.)

Ready to practice?

Apply these concepts on realistic case studies with real datasets.

Browse Case Studies