And the 3 I actually use to manage $30M+ a year in ad spend.
Curtis Howland
1,050 “attributed” conversions. 600 actual orders. Every platform gave itself an A+.
Attribution works top-down. Each layer has a specific job. Most brands try to solve it from the bottom up. That's backwards.
The Star (top): MER/nCAC. Channel-agnostic. Tells you if the business is working, but not why. Accurate to the decimal.
Upper Branches: Post-purchase surveys, MMM, and incrementality testing. Which channels should get more or less budget?
Lower Branches: MTA (Triple Whale, Northbeam) and platform data. Which specific ads to scale or cut?
The Trunk: GA4, CAPI, server-side tracking. Makes all other tools more accurate. Doesn't measure attribution directly.
The Roots: Cohort-based LTV by channel. If Meta customers have 2.5x LTV vs TikTok at 1.8x, your CPA targets should reflect that. Sets the targets everything else optimizes toward.
Start bottom-up, in the middle of the Christmas tree with an MTA tool, and you end up precise but inaccurate: tight numbers pointing in the wrong direction.

Top-down is the fix: start with blended truth (MER/nCAC), layer in surveys and MMM for accuracy, then add MTA for precision within that accuracy. You get both.
Most brands install an AI-powered attribution tool and try to perfectly track every individual customer's journey. They get precise numbers that feel scientific. But precision is not accuracy.
Total revenue ÷ total spend (MER). Total spend ÷ new customers (nCAC). The only metrics that can't lie. Your bank account doesn't inflate.
One brand: 35% new customer ratio, 48% COGS, 8% platform fees. Losing $15K/month. Same brand with a better new customer mix and tighter COGS: $45K/month in profit.
Maximum marketing spend (as a % of revenue) = Gross Margin % minus OpEx %. Cross this line and you're losing money at the company level.
Target CPA = projected 3-month customer profitability. This is your efficient growth zone.
Everything else in this framework is about figuring out how to spend between those two numbers as effectively as possible.
Tells you IF something is working, not WHAT is working. You can't make channel decisions from MER alone. It's the star on the tree, not the whole tree.
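To make the math concrete, here's a minimal sketch of the blended metrics and both guardrails. Every figure is a placeholder, not from a real P&L:

```python
# The blended math in one place. Every figure is a placeholder.

ad_spend = 100_000          # total monthly ad spend
revenue = 300_000           # total monthly revenue
new_customers = 1_200       # first-time orders this month

mer = revenue / ad_spend            # MER: revenue per ad dollar
ncac = ad_spend / new_customers     # nCAC: cost per new customer

# The two guardrails:
gross_margin_pct = 0.55     # left after COGS, fees, shipping
opex_pct = 0.25             # fixed overhead as % of revenue
max_marketing_pct = gross_margin_pct - opex_pct   # spend above this loses money

target_cpa = 55.0           # projected 3-month profit per customer

print(f"MER {mer:.2f} | nCAC ${ncac:.2f} | "
      f"max marketing {max_marketing_pct:.0%} of revenue | target CPA ${target_cpa:.2f}")
```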
“Where did you FIRST hear about us?”
Surveys measure what's memorable. An ad has to be remembered to be impactful. Click-based attribution overvalues lower-funnel by 250%+. We've seen top-of-funnel (TOF) creative drive 13x more incremental acquisitions than bottom-of-funnel (BOF). Click data told the opposite story.
Add a post-purchase survey to your Shopify checkout (KnoCommerce, Fairing, or native). The question: “Where did you FIRST hear about us?”
Options should include every channel you spend money on (Facebook/Instagram, TikTok, Google Search, YouTube, Podcast, Friend/Referral, etc.) plus “Other” with a text field. Keep it under 10 options. Randomize the order.
Extrapolate the response shares to all new customers, then divide each channel's spend by its survey-attributed customers. That cost per new customer by channel tells you where to push budget.
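A sketch of that extrapolation. Response counts, spend, and channel mix below are made up for illustration:

```python
# Extrapolating survey responses to all new customers. Response counts,
# spend, and channel names are placeholders.

survey_responses = {"Facebook/Instagram": 220, "TikTok": 90, "Google Search": 60, "Podcast": 30}
channel_spend = {"Facebook/Instagram": 60_000, "TikTok": 25_000, "Google Search": 20_000, "Podcast": 10_000}
total_new_customers = 1_200          # all new customers, not just respondents

total_responses = sum(survey_responses.values())
for channel, responses in survey_responses.items():
    share = responses / total_responses            # share of recalled first touches
    attributed = share * total_new_customers       # extrapolated customer count
    cost = channel_spend[channel] / attributed     # cost per survey-attributed customer
    print(f"{channel}: {share:.0%} share, ~{attributed:.0f} customers, ${cost:.2f} each")
```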
People don't always remember correctly. Long purchase cycles make this harder. Some channels consistently under- or over-report. That's where MMM comes in to validate.
No cookies, no clicks, completely privacy-safe. Measures the relationship between spend and revenue over time. The real power: marginal performance curves.
Where post-purchase surveys tell you what customers remember, MMM tells you what the data says is driving revenue regardless of recall. The two work together: use MMM to validate surveys, use survey data to calibrate MMM.
The biggest benefit: measure marginal performance. Not just average, but where you are on the response curve. Where to put the next dollar.
When surveys and MMM agree, push budget aggressively. When they disagree, dig deeper before making a big move.
Tools: Meta's Robyn (open-source), Google's Meridian (open-source). SaaS: Measured, Sellforte.
Minimum $500K+ per year in ad spend. Ideally 2+ years of weekly data. Needs clean historical data and someone who understands both the math and the business context. If you have messy data or plan to just wing it, skip it. Might do more harm than good.
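For intuition, here's a toy version of what an MMM does under the hood: geometric adstock (carryover), a saturation transform (diminishing returns), and a regression of weekly revenue on transformed spend. Robyn and Meridian are far more sophisticated (Bayesian priors, seasonality, calibration against lift tests); this sketch runs on synthetic data and only shows the mechanics, including the marginal-dollar read:

```python
# Toy MMM sketch: geometric adstock (carryover) + saturation (diminishing
# returns), then a linear fit of weekly revenue on transformed spend.
# All data here is synthetic; real MMMs add priors, seasonality, calibration.
import numpy as np

def adstock(spend, decay=0.5):
    """Carry a fraction of each week's effect into following weeks."""
    out = np.zeros_like(spend, dtype=float)
    carry = 0.0
    for i, s in enumerate(spend):
        carry = s + decay * carry
        out[i] = carry
    return out

def saturate(x, alpha=2e-5):
    """Concave response: each extra dollar buys less than the last."""
    return 1 - np.exp(-alpha * x)

rng = np.random.default_rng(0)
weeks = 104                                    # ideally 2+ years of weekly data
meta = rng.uniform(20_000, 60_000, weeks)      # placeholder weekly spend
tiktok = rng.uniform(5_000, 25_000, weeks)

X = np.column_stack([saturate(adstock(meta)), saturate(adstock(tiktok)), np.ones(weeks)])
true_effects = np.array([150_000, 40_000, 80_000])      # hidden "truth" for the demo
y = X @ true_effects + rng.normal(0, 5_000, weeks)      # synthetic weekly revenue

coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
print("Fitted effects (Meta, TikTok, baseline):", np.round(coefs))

# The marginal read: revenue per NEXT dollar of Meta spend at today's level
x_now = adstock(meta)[-1]
marginal = coefs[0] * 2e-5 * np.exp(-2e-5 * x_now)
print(f"Marginal revenue per next Meta dollar: ${marginal:.2f}")
```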
The closest thing to scientific proof in marketing: turn spend off in Dallas, keep it on in Houston, and compare. The difference is your incremental lift.
Questions it answers: Is retargeting actually driving sales, or just claiming credit? How much does YouTube actually drive beyond clicks? Would cutting branded search 50% lose anything? Is AppLovin/CTV worth the investment?
Tools: Haus, Stella, Meta Conversion Lift Study, Google Conversion Lift, TikTok Lift.
Tests run 4-8 weeks. Requires geographic diversity and enough sales volume for statistical significance. Expensive in opportunity cost since you're deliberately reducing spend in test regions. But one well-designed test can save you from wasting hundreds of thousands on non-incremental spend.
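The core read is simple difference-in-trends math: scale the test region's baseline by the control region's trend, then measure the gap. A sketch with placeholder sales figures:

```python
# Geo-holdout read. City names and sales figures are placeholders.

test_pre, test_during = 400_000, 310_000        # Dallas sales: before vs. spend off
control_pre, control_during = 380_000, 395_000  # Houston sales: spend stayed on

# Counterfactual: what Dallas would have done if it had followed Houston's trend
counterfactual = test_pre * (control_during / control_pre)
incremental = counterfactual - test_during      # revenue the paused spend was driving
print(f"Estimated incremental revenue from the paused spend: ${incremental:,.0f}")
```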
My primary tool for daily ad-level decisions. You can see who purchased, where they came from, what they bought, and validate the attribution yourself.
Account Control Charts: Plot CPA vs. Spend for every ad. Top ads should be low CPA, high spend. If they're not, you have a media buying problem no amount of new creative will fix.
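A minimal version of that chart, assuming you can export per-ad spend and CPA from your MTA tool. Column names and figures below are placeholders:

```python
# Account Control Chart sketch: CPA vs. spend per ad. Ad names, spend,
# and CPA figures are placeholders for whatever your MTA export contains.
import matplotlib.pyplot as plt
import pandas as pd

ads = pd.DataFrame({
    "ad":    ["UGC-01", "UGC-02", "Static-03", "Video-04", "Retarget-05"],
    "spend": [18_000, 9_500, 4_200, 12_000, 2_100],
    "cpa":   [38, 44, 71, 52, 29],
})

plt.scatter(ads["spend"], ads["cpa"])
for _, row in ads.iterrows():
    plt.annotate(row["ad"], (row["spend"], row["cpa"]))
plt.axhline(ads["cpa"].median(), linestyle="--")  # high-spend ads should sit below this line
plt.xlabel("Spend ($)")
plt.ylabel("CPA ($)")
plt.title("Account Control Chart: top ads belong bottom-right")
plt.show()
```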
With Facebook's dashboard, you just have to trust them. With MTA, you can audit it. You can look at first-click attribution to understand where someone originally discovered you.
Someone discovers you through UGC on Meta (first touch), sees a retargeting ad a week later (middle touch), then Googles your brand name (last touch). Last-click says Google drove that sale. MTA shows you the full picture.
Still fundamentally click-biased. Overcounts total revenue across channels. Use for ad-level and campaign-level decisions. Not for channel allocation. That's what surveys and MMM are for.
Same brand. Same month. Same spend.
Conversions jump 110%. CPA drops 52%. Just by changing the attribution window.
Use it daily. Trust it never.
Purchase velocity: Compare 1d, 7d, 28d windows to know if you have an impulse product or a long-cycle product.
View-through inflation: If a huge % of conversions are view-through (VTC), the platform is padding your numbers. Force click-based optimization.
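A sketch of the purchase-velocity check, assuming you can export order-level data with click and order timestamps (file and column names are placeholders):

```python
# Purchase velocity from your own order data: what share of conversions
# land within 1, 7, and 28 days of the first tracked click?
import pandas as pd

df = pd.read_csv("orders_with_clicks.csv", parse_dates=["click_time", "order_time"])
lag_days = (df["order_time"] - df["click_time"]).dt.days

for window in (1, 7, 28):
    share = (lag_days <= window).mean()
    print(f"{window}d window captures {share:.0%} of conversions")

# Mostly inside 1 day: impulse product. Spread out toward 28: long-cycle
# product, and short platform windows will undercount you.
```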
GA4 defaults to last-click attribution, which massively favors Google's own products. Someone sees your Meta ad, gets interested, Googles your brand to buy. Google Search gets credit. But Google didn't create that demand. Meta did.
Google built a free analytics platform that defaults to crediting Google. Use GA4 for web analytics (traffic, on-site behavior). Not as your source of truth for marketing attribution.
This is exactly why post-purchase surveys and MMMs matter. They tell you what created the awareness, not what grabbed the last click.
After iOS 14.5, pixels lost massive signal. CAPI/Elevar sends conversion events server-to-server, bypassing browser restrictions. Without it, your data is leaking and every other tool is working with incomplete signal.
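For reference, a server-side Purchase event to Meta's Conversions API looks roughly like this. Pixel ID, token, and order details are placeholders, and in practice Elevar or your platform handles this for you:

```python
# Minimal sketch of a server-side Purchase event via Meta's Conversions API.
# PIXEL_ID and ACCESS_TOKEN are placeholders; send from your backend.
import hashlib, time, requests

PIXEL_ID = "YOUR_PIXEL_ID"
ACCESS_TOKEN = "YOUR_TOKEN"

event = {
    "event_name": "Purchase",
    "event_time": int(time.time()),
    "action_source": "website",
    "event_id": "order-10042",  # dedupe key so browser + server events don't double-count
    "user_data": {
        # Meta requires SHA-256 hashing of customer identifiers
        "em": [hashlib.sha256("customer@example.com".encode()).hexdigest()],
    },
    "custom_data": {"currency": "USD", "value": 49.99},
}

resp = requests.post(
    f"https://graph.facebook.com/v19.0/{PIXEL_ID}/events",
    json={"data": [event], "access_token": ACCESS_TOKEN},
)
print(resp.status_code, resp.json())
```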
Sets your targets. Meta at $40 CPA with $180 12-mo LTV beats TikTok at $30 CPA with $95 LTV. Two key metrics: return rate % and purchase frequency. These drive the majority of LTV variance between channels.
Your cohort LTV data tells you what CPA targets should be by channel. That feeds your blended MER/nCAC target at the top. Which feeds channel allocation in the middle. Which feeds ad-level optimization at the bottom.
If you skip this step, you're optimizing the entire tree toward the wrong target. You'll efficiently acquire cheap, low-LTV customers. Great on a dashboard, terrible on the P&L in 12 months.
A concrete example: Meta customers have a 35% return rate, 3.2x purchase frequency, $180 12-month LTV. TikTok: 20% return rate, 2.1x purchase frequency, $95 LTV. That $40 Meta CPA is a steal. The $30 TikTok CPA might actually be overpriced.
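A sketch of the cohort pull, assuming an orders export with customer ID, acquisition channel, order date, and revenue (column names and the CPA figures are assumptions):

```python
# 12-month cohort LTV by acquisition channel from a raw orders export.
# File name, column names, and CPA figures are placeholders.
import pandas as pd

orders = pd.read_csv("orders.csv", parse_dates=["order_date"])
first_order = orders.groupby("customer_id")["order_date"].transform("min")
first_year = orders[orders["order_date"] <= first_order + pd.DateOffset(months=12)]

ltv = (first_year.groupby(["acquisition_channel", "customer_id"])["revenue"].sum()
                 .groupby("acquisition_channel").mean())

cpa = pd.Series({"Meta": 40, "TikTok": 30})               # your actual CPAs by channel
print((ltv / cpa).dropna().sort_values(ascending=False))  # LTV:CAC ratio per channel
```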
AI attribution starts with click data that already over-credits clicks and under-credits awareness. Garbage in, garbage out.
ML on biased data produces scientific-looking wrong answers. That's worse, not better.
You can't adjust for bias you don't understand. With MTA and surveys, you know the blind spots and can compensate.
The pitch sounds great: “We use machine learning to analyze millions of data points and give you the TRUE value of every touchpoint.”
But every AI model I've seen starts with click-based data as its foundation. Then layers complexity on top of the bias. The result is a biased answer that looks more scientific.
The value of understanding attribution isn't getting a perfect number. It's knowing where your model is wrong so you can compensate. When I use surveys, I know they over-credit memorable channels. I adjust. When I use MTA, I know it's click-biased. I adjust. With AI attribution? You get a number. You don't know where it's biased.
The right approach is top-down, not bottom-up. AI tries to perfectly attribute every touchpoint then roll up. You can't build accurate channel insights from biased touchpoint data, no matter how much AI you add.
| Tool | Cadence | Job |
| --- | --- | --- |
| Triple Whale / Northbeam | Daily | Which ads to scale or cut |
| KnoCommerce / Fairing | Daily | Which channels drive awareness |
| Robyn / Measured | Quarterly | Validate and calibrate survey data |
When all three agree (MTA shows strong Meta performance, surveys confirm Meta as top first-touch, MMM validates highest ROI), push budget aggressively.
When they disagree, that's the interesting part. TikTok looks mediocre in MTA but shows up strong in surveys? TikTok is driving awareness that converts through other channels. Don't cut it based on MTA alone.
Meta looks strong in MTA and platform data, but survey share of “first heard on Facebook/Instagram” is declining month over month? Early warning your ads are becoming retargeting-heavy, even though dashboard numbers still look good.
Each tool catches different lies. That's the whole point.
1. Add a post-purchase survey to checkout
2. Set up MER + nCAC tracking in a spreadsheet
3. Switch Meta to 7-day click attribution
4. Install Triple Whale or Northbeam
5. Build an Account Control Chart (CPA vs. Spend)
6. Set up CAPI / Elevar server-side tracking
7. Pull cohort LTV by acquisition channel
8. Consider MMM (Robyn, Meridian, or Measured)
9. Run your first incrementality test on your biggest channel
Start at the top of the tree. Work your way down. Never trust a single source.