Career · March 16, 2026 · 7 min read

AI Can Write Your Code. It Can't Think For You.

If ChatGPT can write a perfectly valid SQL query in seconds, what exactly are you bringing to the interview? Here's why the case study is the only round that matters now.

Here's a question that should be keeping data science candidates up at night: if ChatGPT can write a perfectly valid SQL query, a clean Python script, or an R analysis pipeline in seconds, what exactly are you bringing to the interview?

Not the syntax. Not the ability to remember whether it's DATE_TRUNC or TRUNC_DATE, or whether pandas uses merge or join, or which tidyverse function reshapes your data. AI handles all of that now, and it handles it well. If the only thing you practiced for your interview was writing clean code, you just spent weeks preparing for the part of the job that's about to matter the least.

This isn't a doom-and-gloom take. It's actually good news — but only if you understand what it means for how you should be preparing.

The skill that's getting commoditized

Two years ago, technical fluency could meaningfully differentiate candidates. If you could write a complex SQL window function from memory, build a clean analysis pipeline in Python, or whip up a statistical model in R without Googling every other line, that was a genuine signal. It meant you'd spent real time working with data.

That signal is dying. Not because SQL, Python, and R don't matter — they absolutely still do — but because the floor has risen dramatically. AI tools mean that anyone can produce syntactically correct, well-structured code with minimal effort. The gap between someone who can write code and someone who can't has collapsed. It used to be a moat. Now it's a commodity.

Companies know this. Interviewers know this. And the smart ones are already shifting where they spend their evaluation time.

The skill that isn't

Here's what AI can't do: tell you what analysis to run.

Give it a well-defined prompt — "write a query that shows weekly signups by channel" or "build a logistic regression on these features" — and it'll nail it every time. But that's the easy part. The hard part was deciding that weekly signups by channel was the right thing to look at in the first place, or that a logistic regression was the right model for the question you're actually trying to answer.
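To make that concrete, here is roughly what the mechanical part looks like — the kind of query AI now produces instantly. This is a minimal sketch using Python's built-in sqlite3 and a hypothetical `signups(user_id, channel, signed_up_at)` table; the schema and data are invented for illustration:

```python
import sqlite3

# Hypothetical schema and toy data -- not from any real product.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE signups (user_id INTEGER, channel TEXT, signed_up_at TEXT)"
)
conn.executemany(
    "INSERT INTO signups VALUES (?, ?, ?)",
    [
        (1, "organic", "2026-03-02"),
        (2, "paid",    "2026-03-03"),
        (3, "organic", "2026-03-09"),
        (4, "organic", "2026-03-10"),
    ],
)

# "Weekly signups by channel" -- the fully automatable part.
# strftime('%Y-%W', ...) buckets each signup into a year-week label.
rows = conn.execute(
    """
    SELECT strftime('%Y-%W', signed_up_at) AS week,
           channel,
           COUNT(*) AS signups
    FROM signups
    GROUP BY week, channel
    ORDER BY week, channel
    """
).fetchall()

for week, channel, n in rows:
    print(week, channel, n)
```

Ten lines of SQL, and any AI assistant will write them flawlessly. Deciding that week-over-week signups by channel is the right cut of the data — rather than, say, activation rate or retention by cohort — is the part no prompt can do for you.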

That decision — what to investigate, in what order, and why — is the entire value of a data scientist. It's the part that requires understanding the business context, forming hypotheses about what might be happening, and making judgment calls about where to spend your limited time. No AI tool does this for you. You still have to think.

And this is exactly what the case study tests.

When an interviewer says "engagement is down 20%, figure out what's going on," they're not testing whether you can write the code. They're testing whether you know what questions to ask before you start. Whether you can break an ambiguous problem into a structured investigation plan. Whether you can look at a result set and know what it means for the business — not just what it says mathematically.

That's planning. That's judgment. That's the thing AI made more valuable, not less.

Why case studies are becoming the interview

I've conducted hundreds of data science interviews at FAANG companies, and even before the current wave of AI tools, the case study was the most common reason candidates got rejected. Now it's becoming even more central to the process.

The logic is straightforward. If AI can handle the mechanical parts of the job — writing SQL, building Python pipelines, generating charts, even running basic statistical tests — then the interview needs to focus on the parts it can't handle. That means less time on "can you write this from scratch?" and more time on "given this business problem, what would you do?"

Some companies are already adapting. I've seen interview loops where candidates are explicitly allowed to use AI tools during the coding portion — because the screen isn't about code anymore. It's about what you do with the results. Can you interpret them? Can you spot when something looks off? Can you connect a data finding to a business decision?

The case study has always tested these things. What's changed is that it's no longer one round among many — it's becoming the round. The one that carries the most weight because it's the hardest to automate away.

The new skill stack

If you're preparing for data science interviews right now, your time allocation should reflect this shift. Here's what I'd prioritize:

Problem framing. Given a vague business problem, can you turn it into a structured investigation? Can you define what you're trying to answer, what metrics matter, and what you'd need to see in the data to confirm or reject each hypothesis? This is the skill that matters most in a world where execution is cheap.

Analytical judgment. You ran a query and got a result. Now what? Does this number mean what you think it means? Are there confounding factors? Should you trust this result, or does something look off? AI can produce outputs all day — the ability to critically evaluate those outputs is where human judgment lives.

Communication. You found something. Can you explain what it means to a non-technical stakeholder in two minutes? Can you turn a complex analysis into a clear narrative with a recommendation? This has always been important, but it's about to become the primary differentiator. When everyone can produce analysis with AI assistance, the person who can translate that analysis into business decisions wins.

Knowing what to ask next. This is the hardest one to teach and the one interviewers value most. After your first query returns results, what do you do? Where do you go next? This sequencing — the ability to build an investigation where each step logically follows from the last — is pure human reasoning. It's also the thing you can only develop through practice, not reading.
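The analytical-judgment point is easiest to see with a toy example. The numbers below are invented, but they illustrate the classic trap (Simpson's paradox): a variant can win inside every segment and still lose on the blended top line, because traffic mix differs between variants — so a headline number may not mean what you think it means:

```python
# Invented A/B numbers: (conversions, visitors) per device segment.
# Variant B wins inside BOTH segments, yet loses on the blended top line,
# because B's traffic skews toward the low-converting mobile segment.
data = {
    "A": {"desktop": (90, 100), "mobile": (10, 100)},
    "B": {"desktop": (19, 20),  "mobile": (30, 180)},
}

for variant, segments in data.items():
    for segment, (conv, total) in segments.items():
        print(f"{variant} {segment}: {conv / total:.0%}")
    conv_all = sum(c for c, _ in segments.values())
    total_all = sum(t for _, t in segments.values())
    print(f"{variant} overall: {conv_all / total_all:.0%}")
```

An AI tool will happily compute both the per-segment and overall rates. Noticing that they contradict each other, asking why, and deciding which one answers the business question — that part is still yours.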

Notice what's not on this list: memorizing SQL syntax, grinding LeetCode, perfecting your pandas chaining, or tweaking R scripts until they're idiomatic. Those things aren't worthless — you should be fluent in the tools of the trade. But fluency is the baseline now, not the differentiator. The interview isn't testing whether you can write code. It's testing whether you can think.

What this means for your prep

If you've been spending 80% of your interview prep time on LeetCode, SQL exercises, and coding drills, flip that ratio. You should be spending the majority of your time practicing the full case study workflow: reading a scenario, asking clarifying questions, forming a plan, deciding what analysis to run, interpreting results, and communicating a recommendation.

The mechanical part — actually writing the code, whether that's SQL, Python, or R — is the smallest piece of that workflow. It's also the piece AI can help you with on the job. Everything else requires your brain.

This doesn't mean coding practice is useless. You need to be comfortable enough with your tools that they don't slow you down during a case study. But there's a difference between "comfortable enough to be fluent" and "spending three weeks memorizing every edge case of PARTITION BY or optimizing your scikit-learn pipeline." The first is necessary. The second is a misallocation of prep time.
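For calibration, this is roughly the fluency bar: being able to read and write a bread-and-butter window function without friction. A sketch on a hypothetical `sales(rep, region, amount)` table, run through Python's built-in sqlite3 (window functions require SQLite 3.25+, bundled with recent Python versions):

```python
import sqlite3

# Hypothetical table and toy data, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (rep TEXT, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("ana", "west", 300.0), ("bo", "west", 500.0),
     ("cy", "east", 200.0), ("di", "east", 400.0)],
)

# Rank reps within each region by amount -- the everyday
# PARTITION BY pattern, not the obscure edge cases.
rows = conn.execute(
    """
    SELECT rep, region, amount,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
    FROM sales
    ORDER BY region, rnk
    """
).fetchall()

for rep, region, amount, rnk in rows:
    print(region, rnk, rep, amount)
```

If you can follow that at a glance, you're fluent enough; drilling the long tail of frame clauses and tie-breaking rules is exactly the prep time better spent on case studies.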

The candidates who are going to thrive in this new environment are the ones who can sit down with an ambiguous business problem and an AI-assisted toolkit, and produce genuine insight. Not just a query. Not just a notebook. An actual answer to a business question, with a recommendation attached.

That's what the case study has always tested. AI just made it the only thing that matters.

The irony

Here's the thing I find most interesting about all of this: AI tools are making data scientists more valuable, not less. The bottleneck was never "can we write enough queries?" It was always "do we know what questions to ask and what to do with the answers?"

AI removes the bottleneck that didn't matter and exposes the one that does. If you're someone who can frame problems, think critically, and communicate clearly, you just got a massive productivity boost — because the slow parts of your job got automated and the important parts didn't.

But if the only thing you brought to the table was the ability to write clean code... well, that's a conversation you should probably be having with yourself sooner rather than later.

The case study is the interview round that tests whether you're in the first group or the second. And it's only going to get more important from here.

(Rabbit Hole — practice the part of the interview AI can't do for you.)

Ready to practice?

Apply these concepts on realistic case studies with real datasets.

Browse Case Studies