Li Tan
Essay · 5 min read · AI · Workflow · Data Engineering

Default to AI

My team defaults to AI now. The speed is real. But the time you save on the keyboard, you spend in review. There are two traps I keep watching people fall into.

The team I’m on defaults to AI. Most tickets start with a model in the loop, not a blank cursor. Spinning up a piece of work is faster than it has ever been. The AI does a lot of the heavy lifting, and I spend more of my day reviewing and shaping than typing.

That’s the part everyone talks about. The part that gets undersold is where the time goes instead: keystrokes go down, the review burden goes up, and the work shifts from doing to checking. If you’re not ready for that shift, AI does not actually save you time. It just moves the time.

Two misconceptions I keep watching people fall into.

Misconception 1: “If AI does 90% of it, you just check the result”

The pitch sounds great. Hand the model the full context, let it produce the analysis end-to-end, glance at the output, ship.

It works until somebody asks a question.

When you didn’t write the steps, you don’t own the details. You can read the result, but you can’t defend it. The first time a stakeholder pushes back with “why this filter?”, “why this cohort definition?”, “what about pre-launch users?”, you’re standing there explaining work you never actually did. And if you want to verify the answer is right in the first place, you have to redo enough of it that you might as well have written it yourself. The “savings” evaporate at the review step, and they evaporate again, more publicly, during the readout.

There’s a second layer that bites harder than the first. Most companies have mediocre internal documentation, and Bay Area tech compounds the problem with high turnover. The people who knew what a table actually means have left, and what’s left in the wiki is half outdated, half wrong. If you point AI at that pile and trust the synthesis, you don’t get a clean summary. You get confidently rephrased misinformation. If you don’t already know the business well enough to spot it, the AI just helps you go further down the wrong road, faster.

The fix is not exciting: stay in the loop at the reasoning level. You don’t have to type every line. You do have to know why each line exists.
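Staying in the loop at the reasoning level doesn’t mean retyping the query. It can mean writing the sanity checks yourself, because those checks encode the questions you’ll be asked. A minimal sketch (the table, cohort definition, and launch date are hypothetical, not from any real analysis):

```python
from datetime import date

# Hypothetical source rows; in practice this is whatever the model pulled from.
raw = [
    {"user_id": 1, "signup_date": date(2024, 1, 5),   "events_30d": 12},
    {"user_id": 2, "signup_date": date(2024, 2, 10),  "events_30d": 0},
    {"user_id": 3, "signup_date": date(2023, 12, 20), "events_30d": 7},
    {"user_id": 4, "signup_date": date(2024, 3, 1),   "events_30d": 3},
]
launch = date(2024, 1, 1)

# Pretend this cohort came back from the model's end-to-end analysis.
cohort = [r for r in raw if r["signup_date"] >= launch and r["events_30d"] > 0]

# The checks are mine to write, because they are exactly the questions a
# stakeholder will ask: why this filter, why this cohort, what about
# pre-launch users.
assert all(r["signup_date"] >= launch for r in cohort), "pre-launch users leaked in"
assert all(r["events_30d"] > 0 for r in cohort), "inactive users leaked in"
assert len(cohort) <= len(raw)
print(len(cohort))  # 2
```

The assertions are cheap to write and force you to articulate the cohort definition yourself, which is the part you have to defend in the readout anyway.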

Misconception 2: “AI handles the ETL”

A lot of my old day was SQL. Pull the data, validate it, stage it, hand it off. AI drafts most of that for me now, and on simple sources it is great.

It falls over the moment the data model gets complicated.

If you have dozens of tables, overlapping keys, partial duplicates, and a dozen plausible definitions of “active user,” the model will pick some path through the tables. There’s no guarantee it’s the right one. What looks like a reasonable join is often a join from the wrong table, and the numbers come back looking sensible but actually wrong.

The fix is not a better prompt. The fix is upstream. You have to give the AI a join map before you ask it to write anything: which fact tables are canonical for which grain, which dim tables are deprecated, which columns are safe defaults and which look right but aren’t. The cleanest way I’ve found to build one is to sit with a data engineer for an hour, write down the common tables and their typical use cases, and pin a handful of validated queries the AI can pattern-match against. After that, the SQL it produces is dramatically better, because it’s not guessing structure anymore.
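A join map doesn’t need tooling; a checked-in structure the prompt can include is enough. A sketch of what mine looks like in spirit, with hypothetical table and column names:

```python
# Hypothetical join map for one grain; every table and column name here
# is illustrative, not a real warehouse schema.
JOIN_MAP = {
    "orders": {
        "canonical_fact": "fct_orders",
        "joins": {
            "dim_users": "fct_orders.user_id = dim_users.user_id",
            "dim_listings": "fct_orders.listing_id = dim_listings.id",
        },
        # Looks plausible, joins on user_id, stale since the v2 migration.
        "deprecated": ["dim_users_v1"],
    },
}

# A validated query the model can pattern-match against.
VALIDATED_QUERIES = [
    "-- active users, canonical definition\n"
    "SELECT COUNT(DISTINCT o.user_id)\n"
    "FROM fct_orders o\n"
    "JOIN dim_users u ON o.user_id = u.user_id\n"
    "WHERE o.order_date >= DATE '2024-01-01'\n"
]

def build_prompt(task: str) -> str:
    """Prepend the curated view of the warehouse to any SQL request."""
    return (
        "Join map (use only canonical tables; deprecated ones are listed):\n"
        f"{JOIN_MAP}\n\n"
        f"Validated example queries:\n{''.join(VALIDATED_QUERIES)}\n"
        f"Task: {task}"
    )

prompt = build_prompt("Count active users in Q1 2024.")
assert "fct_orders" in prompt and "dim_users_v1" in prompt
```

The point is less the format than the opinion it carries: the model sees which paths are canonical, which are traps, and what a known-good query looks like, before it writes a line of SQL.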

[Figure: A small join map for the orders grain. Two fact tables (fct_orders and fct_payments, both canonical) connect to dim tables on the right: dim_users and dim_listings are canonical; dim_users_v1 is deprecated but also joins on user_id, the path AI tends to pick. One canonical join carries a “validated” badge for a query the model can learn from. An opinionated view of the warehouse, not a search problem.]
A small slice of one. The map gives the AI a curated view of the warehouse, instead of letting it choose.

There is no shortcut here. You have to know the warehouse before you can delegate against it. AI doesn’t replace warehouse knowledge. It makes warehouse knowledge more valuable.

One small habit that has saved me hours

When AI gives me something dense, like a paper, a derivation, or a tangle of statistical reasoning, I ask it to rewrite the explanation in simple, plain language.

Not dumbed down. Just stripped of the academic register.

Half the cognitive load in our field is decoding the prose, not the idea. When the prose is plain, the idea is usually obvious, and you can tell within seconds whether it’s the thing you actually wanted. The pain point or the key insight surfaces faster, and the learning curve on a new method shortens by a lot.

It sounds too simple. It works anyway.
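The exact wording varies, but the ask is usually something like this (the phrasing is mine, not a fixed template):

```
Rewrite this explanation in plain language. Keep every step of the
reasoning and all of the notation, but drop the academic register.
Don't simplify the math; simplify the prose.
```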

The shape of the work, after a year of this

AI hasn’t taken work off my plate. It’s redistributed it. I write less, review more, debug more carefully, and spend more time building scaffolding (join maps, prompts, validated examples) that lets the AI do useful things instead of confident wrong ones.

The people who get the most out of “default to AI” aren’t the ones who hand off the most. They’re the ones who still know exactly what they’re handing off.