Attribution Framework

8 Types of Attribution
Every DTC Brand Should Know

And the 3 I actually use to manage $30M+ a year in ad spend.

Curtis Howland

The Problem

Every platform grades its own homework

Meta claims 500 · Google claims 400 · TikTok claims 150
Total claimed: 1,050 vs. 600 actual orders
75% over-reported

1,050 “attributed” conversions. 600 actual orders. Every platform gave itself an A+.

The Framework

The Christmas Tree

Attribution works top-down. Each layer has a specific job. Most brands try to solve it from the bottom up. That's backwards.

The Attribution Tree:
Start at the Top, Not the Bottom

The Star: MER / nCAC. Start here. Business fundamentals.
Channel Allocation: Where should we spend? Post-Purchase Surveys · MMM · Geo-Lift Testing.
Ad-Level Optimization: Which ads should we scale? Multi-Touch Attribution · Platform Reporting. (Most brands start here and never look up.)
Supporting Infrastructure: Is our data clean? What are our targets? GA4 / Server-Side Tracking · Cohort-Based LTV. (None of this matters without the layers above.)

The Star (top): MER/nCAC. Channel-agnostic. Tells you if the business is working, but not why. Accurate to the decimal.

Upper Branches: Post-purchase surveys, MMM, and incrementality testing. Which channels should get more or less budget?

Lower Branches: MTA (Triple Whale, Northbeam) and platform data. Which specific ads to scale or cut?

The Trunk: GA4, CAPI, server-side tracking. Makes all other tools more accurate. Doesn't measure attribution directly.

The Roots: Cohort-based LTV by channel. If Meta customers have 2.5x LTV vs TikTok at 1.8x, your CPA targets should reflect that. Sets the targets everything else optimizes toward.

Key Concept

Precision vs. Accuracy

Why AI Attribution Gets This Wrong:
Accurate + precise: the goal.
Accurate, not precise: surveys + MMM.
Precise, not accurate: AI attribution tools.
Neither: no system at all.

Start in the middle of the Christmas tree with an MTA tool and you end up precise but inaccurate: tight numbers pointing the wrong direction.

Start at the top with blended truth (MER/nCAC), layer in surveys and MMM for accuracy, then add MTA for precision within that accuracy. You get both.

Most brands install an AI-powered attribution tool and try to perfectly track every individual customer's journey. They get precise numbers that feel scientific. But precision is not accuracy.

Layer 1 · The Star

Model 1: MER & nCAC

Total revenue ÷ total spend (MER). Total spend ÷ new customers (nCAC). The only metrics that can't lie. Your bank account doesn't inflate.

4.2x ROAS

Brand A

35% new customer ratio, 48% COGS, 8% platform fees. Lost $15K/month.

2.8x ROAS

Brand B

Better new customer mix, tighter COGS. Made $45K profit.
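A rough sketch of why Brand A's 4.2x ROAS lost money: only new-customer revenue is incremental, and contribution margin eats the rest. Brand B's new-customer ratio and COGS are assumed for illustration, since the deck only says "better mix, tighter COGS."

```python
def profit_per_ad_dollar(roas, new_customer_ratio, cogs_pct, fees_pct):
    """Contribution profit per $1 of spend, counting only new-customer revenue."""
    incremental_revenue = roas * new_customer_ratio   # revenue that wouldn't exist without ads
    margin = 1 - cogs_pct - fees_pct                  # contribution margin on that revenue
    return incremental_revenue * margin - 1           # minus the $1 of spend itself

# Brand A uses the deck's numbers; Brand B's 70% new / 35% COGS are assumed.
brand_a = profit_per_ad_dollar(roas=4.2, new_customer_ratio=0.35, cogs_pct=0.48, fees_pct=0.08)
brand_b = profit_per_ad_dollar(roas=2.8, new_customer_ratio=0.70, cogs_pct=0.35, fees_pct=0.08)

print(f"Brand A: {brand_a:+.2f} per ad dollar")  # negative despite 4.2x ROAS
print(f"Brand B: {brand_b:+.2f} per ad dollar")  # positive despite 2.8x ROAS
```

The sign flips because ROAS says nothing about margin or customer mix.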

Maximum marketing spend = Gross Margin % minus OpEx %. Cross this line and you're losing money at the company level.

Target CPA = projected 3-month customer profitability. This is your efficient growth zone.

Everything else in this framework is about figuring out how to spend between those two numbers as effectively as possible.
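The two blended metrics and the spend ceiling above fit in a few lines. The figures below are illustrative, not from the deck:

```python
def mer(total_revenue, total_spend):
    """Marketing efficiency ratio: blended revenue per ad dollar."""
    return total_revenue / total_spend

def ncac(total_spend, new_customers):
    """New-customer acquisition cost: blended spend per new customer."""
    return total_spend / new_customers

def max_marketing_spend_pct(gross_margin_pct, opex_pct):
    """Spend above this share of revenue loses money at the company level."""
    return gross_margin_pct - opex_pct

print(mer(600_000, 150_000))                # 4.0x MER (example numbers)
print(ncac(150_000, 3_000))                 # $50 nCAC
print(max_marketing_spend_pct(0.65, 0.30))  # 35% of revenue is the ceiling
```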

Limitation

Tells you IF something is working, not WHAT is working. You can't make channel decisions from MER alone. It's the star on the tree, not the whole tree.

Layer 2 · Upper Branches

Model 2: Post-Purchase Surveys

Attribution Lies About Who Found You First:
Click Data vs. What Customers Actually Say
What Click Attribution Says (last-click, GA4 default):
Google Search 40% · Meta (FB/IG) 35% · Direct/Organic 15% · TikTok 10%
Google captures demand. It doesn't create it.

What Customers Actually Say (post-purchase survey, first touch):
Meta (FB/IG) 45% · TikTok 20% · Google Search 15% · Friend/Referral 12% · Other 8%
Meta drove 3x more first touches than clicks showed.

“Where did you FIRST hear about us?”

Surveys measure what's memorable. An ad has to be remembered to be impactful. Click-based attribution overvalues lower-funnel by 250%+. We've seen TOF creative drive 13x more incremental acquisitions than BOF. Click data told the opposite story.

35%+ target response rate · <10 answer options

Add a post-purchase survey to your Shopify checkout (KnoCommerce, Fairing, or native). The question: “Where did you FIRST hear about us?”

Options should include every channel you spend money on (Facebook/Instagram, TikTok, Google Search, YouTube, Podcast, Friend/Referral, etc.) plus “Other” with a text field. Keep it under 10 options. Randomize the order.

Extrapolate to all new customers. Then calculate your cost per new customer response per channel. This tells you where to push budget.

Limitation

People don't always remember correctly. Long purchase cycles make this harder. Some channels consistently under- or over-report. That's where MMM comes in to validate.

Layer 2 · Upper Branches

Model 3: Marketing Mix Modeling

No cookies, no clicks, completely privacy-safe. Measures the relationship between spend and revenue over time. The real power: marginal performance curves.

The Diminishing Returns Curve:
Every Dollar Spent Has a Different Return
The revenue response curve: incremental revenue per dollar falls as monthly channel spend climbs from $0 toward $200K. First dollars do the most work (high efficiency), then the curve bends into diminishing returns, then wasted spend. MMM finds where the curve bends. Spending past the bend? Move it to another channel.

Where post-purchase surveys tell you what customers remember, MMM tells you what the data says is driving revenue regardless of recall. The two work together: use MMM to validate surveys, use survey data to calibrate MMM.

The biggest benefit: measure marginal performance. Not just average, but where you are on the response curve. Where to put the next dollar.
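A toy version of a marginal read. The log-shaped curve and its parameters are my assumption for illustration; a real MMM fits the curve from your data.

```python
import math

def revenue(spend, a=60_000, k=25_000):
    """Illustrative diminishing-returns curve: revenue = a * ln(1 + spend/k)."""
    return a * math.log(1 + spend / k)

def marginal_roas(spend, a=60_000, k=25_000):
    """Derivative of the curve: revenue gained by the *next* dollar."""
    return a / (k + spend)

for s in (10_000, 50_000, 150_000):
    print(s, round(marginal_roas(s), 2))
# For this curve, marginal ROAS drops below 1.0 once spend exceeds a - k:
# past the bend, the next dollar earns less than it costs.
```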

When surveys and MMM agree, push budget aggressively. When they disagree, dig deeper before making a big move.

Tools: Meta's Robyn (open-source), Google's Meridian (open-source). SaaS: Measured, Sellforte.

Requirements

At least $500K per year in ad spend. Ideally 2+ years of weekly data. Needs clean historical data and someone who understands both the math and the business context. If your data is messy or you plan to wing it, skip MMM. It might do more harm than good.

Layer 2 · Upper Branches

Model 4: Incrementality / Geo-Lift

The Geo-Lift Test:
The Closest Thing to Proof in Marketing

The closest thing to scientific proof in marketing.

Turn spend off in Dallas. Keep it on in Houston. Measure the difference. One test can save hundreds of thousands in non-incremental spend.
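The core read is a difference-in-differences: use the spend-off geo's trend as the counterfactual for the spend-on geo. Real tools add matched-market selection and significance testing; the order counts below are illustrative.

```python
def incremental_lift(test_pre, test_post, control_pre, control_post):
    """Orders in the spend-on geo beyond what the spend-off geo's trend predicts."""
    expected_post = test_pre * (control_post / control_pre)  # counterfactual from control trend
    return test_post - expected_post

# Houston (spend on) vs. Dallas (spend off); Dallas drifted +5% on its own.
lift = incremental_lift(test_pre=1_000, test_post=1_150,
                        control_pre=800, control_post=840)
print(lift)  # incremental orders attributable to the spend
```

Divide spend in the test geo by this lift and you have an incremental CPA to compare against platform-reported numbers.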

Questions it answers: Is retargeting actually driving sales, or just claiming credit? How much does YouTube actually drive beyond clicks? Would cutting branded search 50% lose anything? Is AppLovin/CTV worth the investment?

Tools: Haus, Stella, Meta Conversion Lift Study, Google Conversion Lift, TikTok Lift.

Trade-off

Tests run 4-8 weeks. Requires geographic diversity and enough sales volume for statistical significance. Expensive in opportunity cost since you're deliberately reducing spend in test regions. But one well-designed test can save you from wasting hundreds of thousands on non-incremental spend.

Layer 3 · Lower Branches

Model 5: Multi-Touch Attribution

My primary tool for daily ad-level decisions. You can see who purchased, where they came from, what they bought, and validate the attribution yourself.

The Account Control Chart:
Find Your Winners, Kill Your Losers
Axes: CPA against total ad spend, split by your CPA target and a scale threshold. Four quadrants: scale these (low CPA, low spend: efficient but starved for budget), winners (low CPA, high spend: these 5 ads drive 60%+ of revenue), cut these (high CPA, high spend: burning $14K/mo at a $95 CPA), test or kill (high CPA, low spend).

Account Control Charts: Plot CPA vs. Spend for every ad. Top ads should be low CPA, high spend. If they're not, you have a media buying problem no amount of new creative will fix.
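The quadrant logic, sketched. The $60 CPA target and $10K scale threshold are placeholders; yours come from your cohort LTV work.

```python
def classify_ad(cpa, spend, cpa_target=60, scale_threshold=10_000):
    """Place an ad in one of the four control-chart quadrants."""
    efficient = cpa <= cpa_target
    at_scale = spend >= scale_threshold
    if efficient and at_scale:
        return "winner"        # low CPA, high spend: protect these
    if efficient:
        return "scale"         # low CPA, low spend: starved for budget
    if at_scale:
        return "cut"           # high CPA, high spend: burning money
    return "test_or_kill"      # high CPA, low spend

print(classify_ad(cpa=45, spend=18_000))  # winner
print(classify_ad(cpa=95, spend=14_000))  # cut
```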

With Facebook's dashboard, you just have to trust them. With MTA, you can audit it. You can look at first-click attribution to understand where someone originally discovered you.

Someone discovers you through UGC on Meta (first touch), sees a retargeting ad a week later (middle touch), then Googles your brand name (last touch). Last-click says Google drove that sale. MTA shows you the full picture.

Limitation

Still fundamentally click-biased. Overcounts total revenue across channels. Use for ad-level and campaign-level decisions. Not for channel allocation. That's what surveys and MMM are for.

Layer 3 · Lower Branches

Model 6: Platform-Reported Attribution

The Attribution Window Trick:
How Meta Inflates Your Numbers
Attribution window: conversions / CPA / ROAS
1-Day Click (strictest signal): 200 / $50 / 2.1x
7-Day Click (most brands use this): 280 / $36 / 3.0x
7-Day Click + 1-Day View (Meta's default, inflated): 420 / $24 / 4.5x
I optimize on the 7-day click window.
View-through adds 140 conversions Meta probably didn't drive

Use it daily. Trust it never.

Purchase velocity: Compare 1d, 7d, 28d windows to know if you have an impulse product or a long-cycle product.

View-through inflation: If a huge % of conversions are VTC, the platform is padding your numbers. Force click-based optimization.
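A quick check using the window numbers above (280 click conversions vs. 420 with view-through). The 25% alarm threshold is my assumption, not the author's:

```python
def view_through_share(click_window_convs, click_plus_view_convs):
    """Share of reported conversions that only a view 'touched'."""
    return (click_plus_view_convs - click_window_convs) / click_plus_view_convs

share = view_through_share(click_window_convs=280, click_plus_view_convs=420)
print(f"{share:.0%} of reported conversions are view-through")  # 33%
if share > 0.25:  # threshold is an assumption
    print("Padding likely: force click-based optimization")
```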

GA4 defaults to last-click attribution, which massively favors Google's own products. Someone sees your Meta ad, gets interested, Googles your brand to buy. Google Search gets credit. But Google didn't create that demand. Meta did.

Google built a free analytics platform that defaults to crediting Google. Use GA4 for web analytics (traffic, on-site behavior). Not as your source of truth for marketing attribution.

This is exactly why post-purchase surveys and MMMs matter. They tell you what created the awareness, not what grabbed the last click.

Layers 4 & 5 · Trunk & Roots

Infrastructure & Cohort LTV

Model 7: Server-Side Tracking (The Trunk)

After iOS 14.5, pixels lost massive signal. CAPI/Elevar sends conversion events server-to-server, bypassing browser restrictions. Without it, your data is leaking and every other tool is working with incomplete signal.

Model 8: Cohort-Based LTV (The Roots)

Sets your targets. Meta at $40 CPA with $180 12-mo LTV beats TikTok at $30 CPA with $95 LTV. Two key metrics: return rate % and purchase frequency. These drive the majority of LTV variance between channels.

Cohort LTV by Channel:
Cheap Customers Aren't Always Good Customers
Twelve-month LTV curves for Meta, TikTok, and Google Search: Google wins on first purchase, but Meta passes Google by month 5. At 12 months it's $180 vs. $90. Same customer, 2x the value.

Your cohort LTV data tells you what CPA targets should be by channel. That feeds your blended MER/nCAC target at the top. Which feeds channel allocation in the middle. Which feeds ad-level optimization at the bottom.

If you skip this step, you're optimizing the entire tree toward the wrong target. You'll efficiently acquire cheap, low-LTV customers. Great on a dashboard, terrible on the P&L in 12 months.

A concrete example: Meta customers show a 35% return rate, 3.2x purchase frequency, and a $180 12-month LTV. TikTok: 20% return rate, 2.1x purchase frequency, $95 LTV. That $40 Meta CPA is a steal. The $30 TikTok CPA might actually be overpriced.
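The same comparison as a ratio, using the deck's numbers. The "LTV-justified CPA" line assumes you want the same payback ratio on every channel:

```python
def ltv_to_cac(ltv_12mo, cpa):
    """12-month LTV per dollar of acquisition cost."""
    return ltv_12mo / cpa

meta, tiktok = ltv_to_cac(180, 40), ltv_to_cac(95, 30)
print(f"Meta LTV:CAC   = {meta:.2f}")    # 4.50
print(f"TikTok LTV:CAC = {tiktok:.2f}")  # 3.17

# CPA TikTok could pay to match Meta's ratio: ~$21, so $30 is rich.
print(round(95 / meta, 2))
```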

Hot Take

Why “AI-Powered Attribution” Doesn't Work

01

Biased Inputs

Starts with click data that already over-credits clicks and under-credits awareness. Garbage in, garbage out.

02

Fake Precision

ML on biased data produces scientific-looking wrong answers. That's worse, not better.

03

Black Box

You can't adjust for bias you don't understand. With MTA and surveys, you know the blind spots and can compensate.

The pitch sounds great: “We use machine learning to analyze millions of data points and give you the TRUE value of every touchpoint.”

But every AI model I've seen starts with click-based data as its foundation. Then layers complexity on top of the bias. The result is a biased answer that looks more scientific.

The value of understanding attribution isn't getting a perfect number. It's knowing where your model is wrong so you can compensate. When I use surveys, I know they over-credit memorable channels. I adjust. When I use MTA, I know it's click-biased. I adjust. With AI attribution? You get a number. You don't know where it's biased.

The right approach is top-down, not bottom-up. AI tries to perfectly attribute every touchpoint then roll up. You can't build accurate channel insights from biased touchpoint data, no matter how much AI you add.

My Stack

The 3 That Actually Matter

01  Multi-Touch Attribution

Triple Whale / Northbeam · Daily · Which ads to scale or cut

02  Post-Purchase Surveys

KnoCommerce / Fairing · Daily · Which channels drive awareness

03  Marketing Mix Modeling

Robyn / Measured · Quarterly · Validate and calibrate survey data

The Triangulation Model:
When to Trust Your Data
Post-Purchase Surveys: channel-level · MTA (Triple Whale): ad-level · MMM: validation

Two agree: strong signal, directionally right. Validate further.
All three agree: high confidence. Push budget. Scale fast.
They disagree? That's where the insight is.

Surveys say TikTok is big, MTA says it's weak → TikTok drives awareness that converts elsewhere
MTA says Meta is strong, surveys trending down → retargeting-heavy, less new-customer reach
MMM disagrees with both → run a geo-lift test to break the tie

When all three agree (MTA shows strong Meta performance, surveys confirm Meta as top first-touch, MMM validates highest ROI), push budget aggressively.

When they disagree, that's the interesting part. TikTok looks mediocre in MTA but shows up strong in surveys? TikTok is driving awareness that converts through other channels. Don't cut it based on MTA alone.

Meta looks strong in MTA and platform data, but survey share of “first heard on Facebook/Instagram” is declining month over month? Early warning your ads are becoming retargeting-heavy, even though dashboard numbers still look good.

Each tool catches different lies. That's the whole point.
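The triangulation playbook can be written as a decision table. The labels and actions mirror the examples above; the encoding itself is my sketch, not the author's tool.

```python
def triangulate(mta_strong, survey_strong, mmm_strong):
    """Map agreement between the three sources to a next action."""
    signals = [mta_strong, survey_strong, mmm_strong]
    if all(signals):
        return "push budget aggressively"
    if mta_strong and not survey_strong:
        return "check for retargeting-heavy spend"    # MTA strong, surveys fading
    if survey_strong and not mta_strong:
        return "channel drives awareness that converts elsewhere"
    if sum(signals) == 2 and not mmm_strong:
        return "run a geo-lift test to break the tie"  # MMM disagrees with both
    return "dig deeper before a big move"

print(triangulate(mta_strong=True, survey_strong=True, mmm_strong=True))
print(triangulate(mta_strong=False, survey_strong=True, mmm_strong=True))
```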

Action Items

Do This Monday

$10K+ / month
1

Add post-purchase survey to checkout

2

Set up MER + nCAC tracking in a spreadsheet

3

Switch Meta to 7-day click attribution

$50K+ / month
4

Install Triple Whale or Northbeam

5

Build Account Control Chart (CPA vs Spend)

6

Set up CAPI / Elevar server-side tracking

$250K+ / month
7

Pull cohort LTV by acquisition channel

8

Consider MMM (Robyn, Meridian, or Measured)

9

Run first incrementality test on biggest channel

Start at the top of the tree. Work your way down. Never trust a single source.

Curtis Howland