What DTC Brands Get Wrong About Attribution
Ecom Podcast


Summary

"DTC brands often misjudge attribution by blaming platforms like Facebook or TikTok when the real issue lies in campaign targeting and creative execution; adapting attribution models to fit unique business strategies can significantly improve marketing effectiveness."

Full Content

Speaker 1: "TikTok works well. Facebook doesn't work." It's not about Facebook. It's not about TikTok. It's about your campaign. It's about your creative. It's about your targeting. It's about how many signals you send, how much budget you allocated, and how well the machine-learning model within that campaign is trained. So you should always compare campaigns, apples to apples. You should not compare platforms. A platform is just a platform; it's just inventory. Yes, of course, some algorithms are better, but one campaign on TikTok might perform much better than another campaign on Facebook, and vice versa. So it's all about campaigns, not about platforms or channels.

Speaker 2: Welcome back to another episode of Chew on This. Today we bring you a special episode brought to you by SegmentStream. We're going to be talking to SegmentStream's founder and CEO, Constantine, sharing ideas and thoughts around the future of marketing measurement and what really goes into marketing attribution nowadays. We're really excited to jump in. Constantine, we would have loved to have you here in person, but the fact that you're able to join us remotely is going to be incredible value for our listeners and viewers. For the few people who may not know you or haven't stumbled upon you on LinkedIn, give them a little bit of your background. What got you into what you're working on today, and what drives the passion behind your thinking, especially in the marketing measurement and attribution space?

Speaker 1: Great. Ron, Ash, thank you very much for having me.
Yeah, I would say from the very beginning, when we started the company in 2018, our primary goal was to find, to uncover, the truth. That is probably the most challenging task in marketing analytics, because we work with really fragmented pieces of reality, and we need to accept that there is no way to see the full picture. So we take all these fragments and operate in this gray area to figure out what to do and to give some direction. It's a very challenging and very interesting task, and the whole team was passionate from the start about solving it the best possible way.

Speaker 2: It's incredible. You jumped right into that, which I loved: there really may not be a way to perfectly solve this, and it's more about how you look at it and what elements you can bring in to parse out the best story, the best scenario. For the people out there who don't find this native or easy to understand: whether it's multi-touch attribution or marketing attribution in general, what do you think is the underlying problem behind why we have so much difficulty when we come out of the gate, try to be on multiple platforms, and can't figure out attribution?

Speaker 1: I believe the underlying problem is that marketing measurement is very, very difficult, and the same methodology cannot be applied to every business with different strategies and different funnel levels. If you try to choose one specific multi-touch attribution or marketing measurement methodology and just apply it to every single business, the results are probably not going to be the best. You might be able to measure something, but in many cases every business needs its own methodology.
And even if we talk about attribution specifically, all attribution models should be tweaked and calibrated for a specific business, with its specific sales cycle and specific funnel. The main thing we uncovered is that you cannot just take out-of-the-box attribution and apply it to every business. Of course, if you invest $20K or $30K a month, it might not be a big issue. But when millions of dollars a month are at stake, you would probably be willing to invest a little into fine-tuning your attribution and finding the best marketing measurement solution you can trust for the next year to make informed decisions, especially if you plan to scale.

Speaker 3: One thing you mentioned: even at a lower budget, or if you're leveraging a single channel, for brands that are just starting it's quite literally, "We spent X on Meta, we generated X revenue on our Shopify dashboard." That becomes the simplest thing to figure out. But then you start adding the complexity of additional channels. You add an influencer, and that's messy, because you don't necessarily know who's driving traffic versus somebody seeing something and then Googling your brand. And even that traffic is probably clicking on a sponsored ad, then getting hit with a retargeting ad on Snapchat, then on TikTok or Pinterest. So how do you tell brands to at least get started in understanding what to look for when it comes to attribution? We're spending X amount on different channels, we're getting revenue, but we don't know where to actually optimize and tinker with the budgets. Where do you even start?

Speaker 1: Yeah, I would say budget is still important, because when you start with a small budget, there are essentially not that many choices.
I've seen some brands with, say, a $20K budget who allocate a little into TikTok, a little into influencers, a little into Google Ads, a little into YouTube, and then they cannot measure anything, because no single channel has a significant effect; their marketing mix is too fragmented for their budget. So in my opinion, if you start with a small budget, you're probably going to be fine with basic attribution at first. You'll be measuring mid-funnel and lower-funnel with cookie-based attribution, and even in-platform reporting might give you a good understanding of whether some of your creatives are working well. For influencers, you can apply coupon codes. The real problem arises once you start scaling. For example, when you already invest $200K in Facebook, $500K in Google, $500K in affiliates; we have clients who invest one or two million a month. That is where the real challenge arises, because every single misattribution can cost you hundreds of thousands of dollars in wasted spend, and much more in missed revenue opportunity. So I still believe the attribution problem is a problem for companies that invest a lot. A lot is at stake, and it's always about finding a balance: investing $20K or $30K into building a good attribution model should have a proper ROI as well. For most small brands, I would stick with something simple and target mid-funnel and lower-funnel, because if you go upper-funnel, you should be ready for the fact that you won't be able to measure it, at least in terms of impact on your revenue.
You will be able to measure some proxy metrics, some upper-funnel metrics like impressions, clicks, and engagement, but with smaller budgets, unfortunately, most incrementality measurement methodologies just do not work, due to simple math and the statistical significance requirements of such experiments.

Speaker 3: So, conversion tracking is a complex system that varies from business to business. I think one of your philosophies is looking at website visitors and the behavior of traffic from different sources. Can we talk a little about why looking at this data can be more important than conversion tracking in general?

Speaker 1: Again, it depends from business to business. If your business is driven by emotional sales, where someone comes to your website and buys something at a fairly small price, cookie-based attribution is probably going to work perfectly fine. The main challenges start when you have a long consideration period and long sales cycles. People come to your website, they start researching, they start choosing what they're going to buy. They come again and again, they send the link within the household to another user; your wife comes and checks whether you want to buy this particular piece of furniture or this super expensive electric bicycle or whatever. Eventually, after making a decision and getting a paycheck, you come back directly and buy. And what you see in traditional analytics tools is that someone came from direct/(none) and purchased something for $3,000 or $5,000 out of nowhere, while most of your upper-funnel activities look ineffective. This is the kind of challenge where technologies like visit scoring come in: they help analyze what the initial drivers were of these conversions that appear to come from direct/(none).
The easiest way to recognize that this is happening, that your business has this kind of funnel, is that when you scale down upper-funnel activity, you see fewer direct conversions, fewer brand conversions, less brand traffic, and less organic traffic. That is exactly the case we're talking about. The idea is: how can we redistribute these so-called unattributed conversions back to the original traffic sources, and what is the best proxy metric we can use to identify those channels? I can explain how this works in a little more detail if it's interesting.

Speaker 3: I think so. For those who are interested in leveraging website behavior: even for us, being in the supplement space, there are so many competitors out there, and with an AOV of $100 or $200 you have to understand whether people from certain traffic sources are actually engaging with the content of your website. That lets you extrapolate that a particular audience segment is interested in what you deliver, versus a channel source that's just sending crap traffic. So what tools are you using? What are you actually looking for to measure the impact your site has on different sources of traffic?

Speaker 1: Right. First of all, we have some dimensions, let's call them dimensions: different attributes that do not change. Imagine a potential customer from a very small city in a specific state. They came to your website by clicking an Instagram or TikTok ad, and they started researching a specific supplement, say for men's health. Then, a few days later, we can see direct traffic coming from that same small city in that same state, landing back on that same supplement and eventually making a purchase.
So using statistical modeling and analyzing different behavioral patterns, it's possible to say with a certain probability that this might be the same user, or at least that there is a fairly high probability it's the same user. And what we've uncovered is that even though this is statistical modeling, predictive analytics, when you work with big numbers, it adds up. A good statistical example: if you invest in 10 customers who each have a 10% probability to buy, at least one customer is likely to buy. So if acquiring a customer with a 10% probability to buy costs you 10 times less than acquiring a customer who buys with 100% probability, statistically it's almost the same. This is the whole idea of how our methodology helps brands that have already exhausted their lower-funnel and mid-funnel channels. They pour more and more money in, but they see diminishing returns. Even though campaigns performed really well when they invested their first $1,000 or $2,000, now they invest additional money and the marginal return is diminishing; they get less and less incremental revenue from the channel. Now they need to find a new source of traffic, and it will be the kind of traffic Facebook is very hesitant to target, because there are no last-click conversions and you pay for last-click conversions. Facebook says: okay, your target CPA is $50, so I won't go for this upper-funnel audience on mobile, et cetera, because you are not paying for it. But with statistical modeling, you could be paying for that customer, just paying less. You can pay 10 times less for someone who converts with 10% probability. And this way you have a whole new market that you can target.
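The expected-value arithmetic behind this can be written out. A minimal sketch, with all prices and probabilities illustrative, following the 10%-probability example above:

```python
# Expected-value comparison: paying less for lower-probability customers.
# All numbers are illustrative, following the example in the conversation.

def expected_conversions(n_customers: int, p_buy: float) -> float:
    """Expected number of purchases from n customers who each buy with probability p."""
    return n_customers * p_buy

# Option A: one "sure" customer at the full last-click price of $50.
cost_sure = 50.0
conversions_sure = expected_conversions(1, 1.0)

# Option B: ten upper-funnel customers, each with a 10% chance to buy,
# each costing 10x less ($5).
cost_prob = 10 * (50.0 / 10)
conversions_prob = expected_conversions(10, 0.10)

assert cost_sure == cost_prob                # both cost $50 in total
assert conversions_sure == conversions_prob  # both yield 1 expected purchase

# Caveat: "at least one will buy" holds only statistically.
# P(at least one sale among the ten) = 1 - 0.9**10, roughly 0.65,
# so the equivalence is reliable at scale, not for a single batch of ten.
```

The equivalence only "adds up" over large numbers of users, which is why this approach is pitched at brands with significant spend rather than small budgets.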
Meanwhile, your competitors are not targeting this market at all, because no one is betting on these customers: there are no last-click conversions and no signals sent back to Facebook to let it know this was also a valuable visit, even though it did not convert on the first visit to your website.

Speaker 3: Tactically, what should we be doing? Just to dumb this down a little: when we're running, say, the sales objective on Meta, we're typically going after those who are in-market, those who Meta thinks will click on something and buy right away. That, I would say, is maybe 5% of the user base across Facebook and Instagram. The rest is obviously not in-market, but we have the chance to market to them, show them our USPs and the benefits of our products, so that when they do become in-market, we are hopefully top of mind. When you're running these types of objectives, whether it's reach campaigns or video views, or even a sales objective optimized for view-content or add-to-cart, going a step up the funnel, there's obviously no direct ROI right off the bat. When we're running ads day to day, we're looking at who clicked today, who bought today, setting a return on that, and judging whether it's profitable. When you move up the funnel and try to attack that 95%, what are some of the metrics you should be looking at to predict that these people will potentially come back and buy from us in the future?

Speaker 1: Yeah. As I mentioned, first of all we use some parameters that never change, like geolocation. Say you have a global business and people from India usually do not convert. It doesn't matter that you have a lot of impressions, a lot of traffic, a lot of highly engaged traffic from India.
If you just have two conversions per month, that traffic probably doesn't make much sense. So we have geolocation, and we can weight the engagement score by geolocation; it has a different currency, a different exchange rate, so to speak. But we can also understand behavioral patterns, and this is exactly what we do. We make an assumption that some of your customer journeys are not fragmented. Yes, we understand that people move between devices and browsers, iPhones, iPads, et cetera. But let's assume that at least 10% of your customer journeys are not fragmented: someone really heard about your brand for the first time, started researching, found the supplements they wanted to buy, and eventually made a purchase. So we take the 10 or 20% longest customer journeys that include all funnel levels, and we inspect them using machine learning. What pages does a typical user who didn't previously know about your brand spend time on? Do they spend time on listings, do they apply filters, do they go to the product page, how much time do they spend on the product page before making a decision, do they read reviews, do they scroll, et cetera? Then we consolidate these patterns and extrapolate them to all the upper-funnel customer journeys that did not end with a conversion. If someone came from Instagram or Facebook and we observe a similar pattern, giving us an understanding that this user has a 50% probability to buy based on this behavioral pattern, we can assign a score, for example 0.5 conversions or 0.3 conversions, to that traffic source. We can even create a synthetic conversion and send the signal back to Facebook to let it know this is not a worthless visit. We appreciate this click. It's not trash traffic.
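A toy illustration of the fractional-scoring idea: learn what a "typical buyer" session looks like from complete journeys, then score non-converting sessions against it. This is not SegmentStream's actual model; the features, profile values, and similarity measure here are invented purely for illustration.

```python
# Toy sketch of "visit scoring": consolidate average behavioral patterns
# from complete (unfragmented) purchase journeys, then give non-converting
# upper-funnel sessions a fractional conversion credit.

def score_session(session: dict, buyer_profile: dict) -> float:
    """Crude similarity between a session and the typical-buyer pattern,
    used as a proxy for purchase probability (0..1)."""
    sim = 0.0
    for feature, typical in buyer_profile.items():
        observed = session.get(feature, 0.0)
        # Ratio capped at 1: matching or exceeding the typical buyer's
        # behavior on this feature contributes the full share.
        sim += min(observed / typical, 1.0) if typical else 0.0
    return sim / len(buyer_profile)

# "Typical buyer" pattern, consolidated from the longest complete journeys.
buyer_profile = {"product_page_seconds": 120, "reviews_read": 2, "pages_viewed": 8}

# A non-converting session that arrived from a paid social ad.
session = {"product_page_seconds": 90, "reviews_read": 2, "pages_viewed": 6}

credit = score_session(session, buyer_profile)
# credit is roughly 0.83 here: assign it as a fractional conversion to the
# traffic source, or send it back to the ad platform as a synthetic signal.
```

A session scoring high against the buyer profile is the "we appreciate this click" case: the fractional credit becomes the optimization signal even though no purchase happened yet.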
We want more customers like this, so that Facebook's internal look-alike model can be enhanced with these additional signals, retrained, and expanded in reach to target more users who show these patterns of upper-funnel engagement.

Speaker 2: Really cool. Constantine, first of all, the breadth of knowledge you have in interpreting this is incredible. I want to double down on something you said: if you're a brand driving an emotional purchase, "I looked at something, I'm probably going to buy now," then maybe a platform like the one you created doesn't make sense right away. For the marketer who doesn't know the details of how this kind of tracking works, can you give us more signals to look for? I love the one you mentioned: if you scale down upper-funnel and you see direct conversions fall, that upper-funnel piece that wasn't getting credit probably had an impact on what shows up as direct conversions. What other signals should brands look at that could say, hey, it may be time to look at how your attribution is working through a more mature lens?

Speaker 1: Yeah. Essentially, we need to start with the assumption that all attribution models are wrong.

Speaker 2: All of them.

Speaker 1: Any model you start with is wrong. The question is how you iterate on that model, so that in maybe six months it becomes your source of truth for at least the next year, and you keep evolving it and evolving it. And there are two different mistakes that I see with attribution models.
Imagine you found some fancy out-of-the-box attribution model and implemented it on your website. There are mainly two types of challenges. The first, of course, is over-attributing: you see a lot of conversions going to direct, organic, brand, retargeting, affiliates promoting coupon codes, et cetera. Most of the market knows about this problem; everyone knows we should not blindly trust brand campaigns. Maybe we even exclude them from optimization and just run them to protect our brand. A few try to measure actual incrementality, but in many cases it's just hygiene. The second problem is when you have channels like TikTok, Pinterest, or display that do not show any conversions based on last click, but the team somehow believes these channels are driving brand awareness. This is the most dangerous part, because if you believe in something really strongly, at some point you start looking for self-fulfilling prophecies, and confirmation bias kicks in. You go searching the market for vendors who can justify your beliefs. I'm not going to name any vendors, but we've seen examples where a brand invested a lot into display, really a lot, on DV360, and their tools were showing a lot of incremental value from DV360 based on impressions, clicks, correlations, MMM, Bayesian models, et cetera. A lot of conversions were being attributed to it. So it's another bias: a methodology attributes a lot of conversions, but you don't see anything based on last click. That should be a trigger for you by default. If some methodology attributes a lot of conversions but last click attributes zero, it's fine to have the hypothesis that this is brand awareness and might be incremental.
People might not be clicking but coming back later from a different device or browser, et cetera. But you should test this hypothesis and always be skeptical. As in science, you should be skeptical about any hypothesis, and it should be proven. If some attribution model shows a lot of conversions, and you see that DV360 or display supposedly contributes 10% of your revenue with a really great ROAS, there are two ways you can measure this with confidence. The first is geo-holdout incrementality testing. You can split all your regions into test and control (in the US this is very easy to do), stop showing your ads in the test group, and see whether there is any incremental impact. There are limitations, of course, because geo-holdout tests have a so-called minimal detectable effect. If the actual impact of the channel is too small, the test cannot detect it; that's why I say you should not diversify and make your marketing mix too fragmented on a small budget. If a channel's effect on your revenue is smaller than 5%, you probably won't be able to apply any incrementality measurement methodology. So the first way is to run a geo-holdout test. But some brands might say: we really believe in this channel, and if we run a holdout test, we'll be stopping our ads for three or four weeks in 50% of states and losing a lot of revenue. Okay, if you believe that much in the incrementality of this channel, you can do the opposite: scale it two times and see how that impacts your revenue. If you scale your top-performing channel two times, it will be reflected in your revenue. If you scale it and there is no incremental revenue, that's marginal analytics telling you the marginal return on this investment is zero, and the investment is probably not incremental.
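The geo-holdout readout described here can be sketched roughly as follows, with randomly generated revenue numbers standing in for real regional data:

```python
# Sketch of a geo-holdout incrementality readout: regions split into test
# (ads paused) and control (ads running); lift estimated from the revenue
# difference. Revenue figures are synthetic, purely illustrative.
import math
import random
import statistics

random.seed(7)

# Weekly revenue per region during the holdout window.
control = [random.gauss(100_000, 8_000) for _ in range(25)]  # ads kept on
test    = [random.gauss(95_000, 8_000) for _ in range(25)]   # ads paused

lift = statistics.mean(control) - statistics.mean(test)

# Crude z-score: |z| > ~2 suggests the lift is distinguishable from noise.
se = math.sqrt(statistics.variance(control) / len(control)
               + statistics.variance(test) / len(test))
z = lift / se

# This is where the "minimal detectable effect" bites: with this noise level
# and region count, a channel driving much less than ~5% of revenue produces
# a |z| too small to separate from zero, so the test is inconclusive.
print(f"lift per region/week: ${lift:,.0f}, z = {z:.2f}")
```

The "scale it two times" alternative is the same readout in reverse: instead of pausing ads in the test group, you double spend there and look for the lift on the other side.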
So this is actually how we go through all these hypotheses with our clients. We usually implement a baseline cookie-based last-click attribution, or first-click, it doesn't actually matter: single-device, single-cookie attribution. Then we implement our visit-scoring attribution, which is a combination of deterministic and probabilistic measurement that can be calibrated. Then we see which hypotheses come out of it. Oh, we gave a lot of credit to Demand Gen while there are no last-click conversions: that's the first hypothesis. Should we scale? How much should we scale to be able to test and validate it? Okay, we scaled, we validated, and we found out it was mostly bots coming from Demand Gen, no real users. Visit scoring is sensitive to bots, because bots are now very sophisticated, and sometimes when you score their behavior they really look like real users. So we understand: okay, we need to calibrate. We calibrate visit scoring to exclude specific patterns, then run the test again. We iterate and iterate until we find a marketing mix that is fully balanced, and then we start budget allocation based on marginal analytics and marginal ROAS. On trusting attribution as a single source of truth, I just want to add: there's a bit of a contradiction here. You should never trust your attribution, but you should always follow it. You should never trust the numbers blindly, but the only way to validate the numbers is to follow the attribution and then see whether there is incremental impact or not, whether marginal ROAS changes or not. For example, attribution tells you to scale retargeting five times.
Okay: if you believe it, scale retargeting five times; if you don't believe it, scale it down and measure incrementality the same way. It's possible to measure marginal ROAS both ways, scaling up or scaling down. Your beliefs should only dictate whether you scale up or down for the purpose of testing.

Speaker 2: One more question, Constantine. Again, it's incredible that you have so many examples of what you've seen, because you get a bird's-eye view of a lot of different brands. You don't have to call out specific brands, but I find it incredibly helpful when you give examples, like the one about the brand that was scaling display and, when you peeled back the layers, last click was not telling the same story as their tool. Can you call out some other examples you've seen that are maybe even more fascinating, where it looks like one thing, but when you peel it back, whether with SegmentStream or through a different lens, it's another? Our viewers would find it practical to say, "Oh, I fit into that category, let me go look at this again." If you have a few other examples, I think it'd be super valuable.

Speaker 1: Yeah, I would say the best examples are the ones where beliefs were not confirmed, and there were so many cases like that. There's a psychological factor too; we've actually lost clients a few times because of it. We are obsessed with finding the truth and measuring incrementality. One brand initially hired us to justify their investment in TikTok and a few other channels to their CFO.
And we were not able to justify it, because we found that the ROAS was much lower compared to Meta. We measured traffic quality, we ran an incrementality test, and unfortunately that was the case. But what was really fascinating to me is that in many cases attribution is secondary. It's not that important. Many people think attribution is super important and all decisions should be based on it, but the primary metric you should be measuring is marginal ROAS. Even with last-click attribution, here's what we uncovered in an experiment: say you have a brand campaign with super high ROAS, like 20x. Imagine you don't know about the bias of brand campaigns; you just trust the attribution and start investing more and more in brand. What you find is that you added the first $1,000 into brand and got $20,000 in return. You add $1,000 more and get $19,000. At some point, you add an additional $1,000 but get only $100 in return. The idea is that many campaigns have very steep diminishing-returns curves. Their average ROAS looks good, because in all analytics reports we look at average ROAS: how did our generic search campaign perform last week, total money invested versus total revenue. But we don't know the marginal ROAS: what the ROAS will be if we invest an additional $1,000 into this campaign. In many cases, we've uncovered campaigns, especially in paid search, in retargeting, and of course in brand search, that have really high average ROAS, sometimes 5x, 4x, 3x, while the marginal ROAS is already less than one. That was really surprising.
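The gap between average and marginal ROAS is easy to see on a saturating response curve. A short sketch with an invented revenue function, not fitted to any real campaign:

```python
# Average vs. marginal ROAS on a diminishing-returns curve. The revenue
# function is an illustrative saturating curve, not real campaign data.
import math

def revenue(spend: float) -> float:
    """Saturating response: fast early growth, nearly flat at high spend."""
    return 60_000 * math.log1p(spend / 1_000)

for spend in (1_000, 5_000, 20_000, 50_000):
    avg_roas = revenue(spend) / spend
    # Marginal ROAS: return on the NEXT $1,000, which dashboards that
    # report only averages never show.
    marginal_roas = (revenue(spend + 1_000) - revenue(spend)) / 1_000
    print(f"spend ${spend:>6,}: avg ROAS {avg_roas:4.1f}x, marginal {marginal_roas:4.1f}x")
```

On this curve, by $50K of spend the campaign still reports a healthy average ROAS near 5x while the next $1,000 returns only a little over 1x: exactly the situation described above, where the dashboard average hides the point where incremental spend stops paying for itself.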
And the thing is, brands are not able to see this, because all platforms show only average ROAS. They keep investing and investing until it finally becomes 1x ROAS, and during all that time they were essentially burning money. They could have stopped at the beginning of the diminishing-returns curve, where their marginal ROAS was still above one. So what we found is that sometimes proper marginal analytics is even more important than tuning your attribution. First invest in marginal analytics, and only after that look into attribution. Many lower-funnel and mid-funnel channels saturate very fast, and their marginal ROAS diminishes quickly compared to upper-funnel campaigns that can scale linearly up to $1 million per month easily. At some point, even with last-click attribution, the marginal ROAS might be the same between these channels.

Speaker 2: I love it.

Speaker 3: I want to talk about where all of this is heading. Over the last six or seven years we've gone through some of the biggest shifts in digital marketing, starting with the iOS update. That changed everything from a direct attribution model to more of a modeled attribution: Meta takes whatever information it can get and models the rest, so it can be somewhat accurate, but it's more directional than anything else. Now you have Google and cookies essentially going away. I know they've delayed that rollout a little further, but when...

Speaker 1: Actually, they canceled it. Today there was an announcement that they've decided to abandon their plan to deprecate third-party cookies.
Speaker 2: Do you think there's enough to talk about yet around what AI can do for this tooling and modeling, or is there not that much there to make it valuable?

Speaker 1: It depends what we call AI, because right now we're mostly obsessed with LLMs, with ChatGPT, with generative AI. I would say AI has been in place for a long time: all of Meta's algorithms are based on AI, all of Google's algorithms are based on AI, all of our visit-scoring algorithms are based on AI. The whole modeling layer is based on AI, and it already plays a huge role. Analytics has shifted away from the purely deterministic approach that was viable when we had few devices, cookies were not expiring and changing so fast, and there were no privacy, legal, and technical restrictions. Now it's a healthy combination of probabilistic and deterministic approaches with constant hypothesis validation. And we ourselves are heading in that direction. In my opinion, the one thing you cannot fully automate is building your source of truth: how you measure all your marketing channels. Once you have that in place, everything else can be automated. For example, in our platform, once we calibrate the attribution and build a custom, fine-tuned attribution for a client, after that you can just click the Apply button, and SegmentStream starts automatic budget allocation across all your campaigns. We connect to the APIs and start controlled budget shifts across all your Google and Meta campaigns to understand marginal ROAS and elasticity, to build diminishing-returns curves, to find the ideal spend level for every single campaign, and then to scale. So essentially you don't even need to go inside your ad platform anymore. You click the Apply button and everything works for you.
So it's like this agentic model where you have a proper AI or proper data model at the heart of your system, and then agents connect to Google, to Facebook, to TikTok and just manage your budget and manage your campaigns. So essentially it's already here, but maybe in a slightly different form. And the biggest challenge probably is to build the source of truth, because we haven't yet found a solution for how this can be done without experimentation. And maybe we can automate this experimentation with AI. For example, Meta already launched this incremental attribution. What is it? In a sense, they're kind of doing lift studies under the hood in a fully automated mode to understand incrementality. Of course, you could take with a pinch of salt how they measure this and how they uncover the algorithms. And of course, I also shared on LinkedIn a very detailed explanation of why this approach is very biased towards the test group where ads are exposed, first of all because of stitching mechanisms, and because it's still measured based on attribution. But the idea is that maybe at some point we will be able to automate these experiments. Once we automate these experiments, we can probably fully automate building this single source of truth. And we already do a lot of research here, but there are also a lot of political issues here. Imagine you have a client, a head of digital, and without their acknowledgement, 50% of their spend in particular campaigns is shut down just for us to measure incrementality. So still, I believe for some time, all these experiments need to be confirmed by brands and by someone who is responsible for the budget.
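The "controlled budget shifts" loop described above can be sketched as a simple feedback controller. This is a hypothetical sketch, not SegmentStream's actual algorithm; the 2x target and 10% step size are invented for illustration:

```python
TARGET_MARGINAL_ROAS = 2.0  # assumed unit-economics floor, not a real product default

def marginal_roas(base_spend, base_revenue, test_spend, test_revenue):
    """Marginal ROAS observed from a single controlled budget shift:
    extra revenue earned per extra dollar spent."""
    return (test_revenue - base_revenue) / (test_spend - base_spend)

def next_budget(current_budget, observed_marginal_roas, step=0.10):
    """One controller step: add budget while the last observed marginal ROAS
    beats the target, pull budget back otherwise."""
    if observed_marginal_roas >= TARGET_MARGINAL_ROAS:
        return current_budget * (1 + step)
    return current_budget * (1 - step)
```

Repeated over time per campaign, these small shifts trace out each campaign's diminishing returns curve and settle spend near the level where marginal ROAS crosses the target.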
Speaker 2: Yeah, I think that was the one thing I was very curious about and, again, would love to know from your experience working with incredible brands and larger brands and enterprise brands. I think there's an element here that depends on the structure you have, right? Because there are companies who have an agency for TikTok, an agency for Pinterest, an agency that's managing Meta, and then an agency that's managing Google, right? And when everything is being managed from an external-facing point of view, you almost compartmentalize each one separately. You talk to each one separately. Each of them is building their own case of what's performing and what's not. And when it comes down to it, right now you may say, hey, use something like a Triple Whale, and everyone attributes and looks at that in their own way. When it comes to using something like SegmentStream and getting more advanced with how you should be looking at attribution, how do you look at companies and brands that have so many different parties managing different marketing channels? A brand that has agencies for everything, right? Does it become a priority that the agency has to be able to answer certain questions in a certain way or look at data in a certain way? Or does it just come down to everyone using the same tool, and you have to be okay with saying, hey, where credit's due, credit is due, and where credit's not due, you take it back? How do you build the management piece of this?
Because I think that becomes part of it: you hire an agency to do something and tell you something, and then you're kind of like, hey, no, that's not what it is, you know. So how have you seen the management of this work best when you have a setup like that? Speaker 1: Yeah, I have a few utopian ideas, actually, for how this could be managed. For example, I myself criticize a bit the model of agencies where they charge, say, 10% of your ad budget. But there is one model that actually might work based on this approach. Imagine, it's just hypothetical. Actually, we have a client who has three or four agencies managing different parts of the budget, different platforms. And sometimes it might make sense, for example, if someone is a great specialist in Facebook and another one in TikTok, so they know, as tech specialists, how the algorithms work, etc. This specialization might actually make sense: instead of being an agnostic specialist, you know some platform in depth. But in this case, you should not be responsible for budget allocation. So essentially, you can perceive this as a financial portfolio. Imagine there is a TikTok agency and their goal is to launch TikTok campaigns with some test budget. And if these campaigns actually show good performance inside our platform, based on our source of truth, we're going to start scaling them. We're going to start allocating more and more and more budget to these campaigns until it fits the unit economics of the brand. For example, the brand says we need a marginal ROAS of at least 2x or 1x. So we're going to start scaling. The agency cannot scale TikTok above the test budget level. If we start scaling TikTok, it means the agency did a very good job preparing creatives, targeting, account structure, value proposition, etc. If we scale, they get a percentage of the budget.
But if these campaigns are really bad, we withdraw budget from there and reallocate it to some Facebook campaigns launched by another agency who proposes different creatives, etc. So it's just a utopian idea: if you have fragmented responsibilities, at some point you need to extract the budget allocation responsibility to an external agency or advisor, or someone in-house, etc. The worst thing that you can do is have fixed budgets for different platforms, like: we allocate 10k to TikTok, 50k to Facebook, and we've already fixed this for the next year. This is the most horrible approach you can take. In our platform, for example, when you create an optimization portfolio, it's usually campaign-level. You add campaigns from Facebook, from TikTok, from Snapchat, from DV360 into one portfolio and then start optimizing based on your single source of truth. And many times we have requests from clients: can you make this at the platform level? Your Facebook might be performing better than TikTok. But not because Facebook is better than TikTok; it's because your campaigns within Facebook are properly optimized within Facebook. So if you're going to optimize campaigns within TikTok, TikTok might outperform. That's why you should make campaign-level portfolios and then reallocate budget between campaigns, not between platforms, all the time. This is what we usually see. It gives the best growth in terms of revenue. That's why I usually don't like statements like "TikTok works well" or "Facebook doesn't work." It's not about Facebook. It's not about TikTok. It's about your campaign. It's about your creative. It's about your targeting. It's about how many signals, how much budget you allocated, how well the machine learning model within this campaign is trained. So you should always compare campaigns, apples to apples. You should not compare platforms. A platform is just a platform. It's just inventory.
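The campaign-level portfolio idea can be sketched in a few lines: rank every campaign across platforms by measured marginal ROAS and move budget toward the winners, regardless of which platform they live on. Campaign names, numbers, and the 2x floor are all invented for illustration:

```python
# Toy campaign-level portfolio: budget decisions per campaign, not per platform.
campaigns = [
    {"name": "tiktok_prospecting_a", "platform": "TikTok",   "marginal_roas": 2.4},
    {"name": "facebook_broad_b",     "platform": "Facebook", "marginal_roas": 1.1},
    {"name": "facebook_lookalike_c", "platform": "Facebook", "marginal_roas": 3.0},
    {"name": "tiktok_retargeting_d", "platform": "TikTok",   "marginal_roas": 0.6},
]

TARGET = 2.0  # assumed marginal-ROAS floor from the brand's unit economics

scale_up   = [c["name"] for c in campaigns if c["marginal_roas"] >= TARGET]
scale_down = [c["name"] for c in campaigns if c["marginal_roas"] < TARGET]

# Note the split crosses platforms: one TikTok and one Facebook campaign
# scale up, while another campaign on each platform loses budget.
print("add budget to:", scale_up)
print("withdraw from:", scale_down)
```

A platform-level rule ("TikTok gets 10k, Facebook gets 50k") would have been wrong in both directions here, which is exactly the point being made.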
Yes, of course, some algorithms are better, but still, one campaign on TikTok might perform much better than another campaign on Facebook, and vice versa. So it's all about campaigns, not about platforms or channels. Speaker 3: Yeah, I think even for us, trying to understand this, at first we were splitting platforms, right? For example, Facebook would show one thing, TikTok would show another, right? Typically TikTok shows worse results than Facebook because it's not really "click on an ad, buy, and you're off to the races." It's more top of funnel and awareness in a way. I think the way that we're measuring, and the way a lot of brands that are somewhat mature are measuring, is we're looking at things with a blended approach, right? So if you're looking at your blended new customer acquisition cost and new customer return on ad spend, you know what your baseline is. You know what you need to be at based off your business goals, right? If you add in another layer of complexity or another layer of ad spend on a different platform, at least the way that I think about it is: one, over time, does revenue increase? Does efficiency stay the same, decrease, or improve, right? I guess the question here is, one, is that the wrong way of doing things? And how can brands know when they're measuring things the wrong way? Speaker 1: Yeah, actually, currently I'm writing a book for CFOs and CEOs on how to properly approach measurement, because they're not that technical, and marketing teams provide them with lots of numbers, attributions, fairy tales, hypotheses, and they either believe or do not believe, or they just become very conservative. But actually, there are two approaches. Blended ROAS, when you're talking about blended metrics, is not that bad. It's much better than what I sometimes see, where CFOs give KPIs based on last-click attribution.
This is horrible because you can tweak, you can manipulate: you can invest more in retargeting to get your bonus, you can invest more in brand search, you can super-focus on lower funnel affiliates, et cetera, and just get your bonus and get amazing ROAS numbers based on last click. At the next level, you can set blended ROAS targets. Speaker 2: It's good. Speaker 1: It's more correlated with actual financials; it's closer to real money and further from science fiction. But still, if you have a good brand, a lot of traffic might be coming from organic, really coming from organic, really coming from direct. So how do you separate, in this blended ROAS, the impact of your ads? What if you shut down all your ads and revenue remains the same? That's why the next level you can go to from blended ROAS is marginal ROAS. And how do you test marginal ROAS? Again, it's not possible to test it when your marketing mix is static. If you just invest 10K in TikTok, 10K in Facebook, and keep it that way for the whole year, there is no way for you to measure marginal ROAS, because you will not be able to identify the incremental effect. But instead, you can test some hypotheses. So, if you really believe that TikTok is an awareness channel, that people interact with it differently compared to Facebook, and you have a hypothesis that later they somehow remember your brand and come directly from a different device and browser, then you can just test it. Even with this blended approach, you can keep all the variables the same: keep the same spend on Google and Facebook, don't touch it for the next two weeks, and just scale TikTok two times if you believe that it's really good. Then scale it two times more. And if your incremental revenue is flat, then unfortunately, there is no incrementality. But if you invest an additional 100k in TikTok and your revenue increases by 200k, it means your marginal ROAS is 2x.
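The scaling test just described is, at its core, one division. A minimal sketch, with the baseline revenue figure invented for illustration:

```python
# Hold Google and Facebook spend flat, scale only TikTok, and attribute
# the blended revenue delta to the scaled channel.
extra_tiktok_spend = 100_000   # the additional spend from the example
revenue_before     = 1_000_000 # hypothetical blended revenue at baseline
revenue_after      = 1_200_000 # hypothetical blended revenue after the scale-up

incremental_revenue = revenue_after - revenue_before
observed_marginal_roas = incremental_revenue / extra_tiktok_spend
print(f"marginal ROAS: {observed_marginal_roas:.1f}x")
```

The attribution report might still show 8x, but the number you can defend to a CFO is the 2x measured against blended revenue.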
Maybe your reports show 8x, but now you at least know that it's 2x. Of course, you might say there are some seasonality effects, some fluctuations week over week, etc. But still, if a channel really shows a significant effect under a particular measurement methodology, it should be noticeable when you scale it. And it also should be noticeable when you downscale it or shut it down. For example, if some MMM, next-gen MMM, or whatever attribution shows that TikTok has 8x ROAS and contributes 500K of your revenue, okay, shut it down for one week. And if nothing changes, okay, you might have another hypothesis: it's a long tail, people already saw my ads. Okay, shut it down for two weeks. Still no impact? Then probably this is just a fairy tale, and these ads are not actually incremental. And many people are afraid to validate their fairy tales. Because, like you've just said: I believe that TikTok is a different platform, people just scroll and they do not click. But maybe they don't scroll, they don't click, and they don't buy. Maybe it really is a different platform and people just don't care. We need to validate this with real money. And this is the hardest part: to actually persuade teams that this experimentation culture should be embedded into each marketing team. Because right now, a lot of marketers have a fear of failure. What if we run an experiment and we find out that TikTok, where we invested millions of dollars over the past year, is not incremental? What's going to happen? Hopefully, we will never test it and no one's going to know. I'd better work another year at this company and then change jobs than have someone find out that I've invested one million into a non-incremental channel. That's why this culture of experimentation should be encouraged: failure is not bad. Failure is where we get the learning so that we can move further with better decisions. Speaker 2: Love that.
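The shutdown test works the same way in reverse: if the claimed contribution is real, pausing the channel should show up in blended revenue. A toy sketch, with all revenue figures invented (and the usual caveat that seasonality should be controlled for):

```python
claimed_contribution    = 500_000    # revenue the attribution report credits to the channel
revenue_with_channel    = 2_000_000  # hypothetical blended revenue before the pause
revenue_without_channel = 1_980_000  # hypothetical blended revenue during a two-week pause

observed_drop = revenue_with_channel - revenue_without_channel
share_confirmed = observed_drop / claimed_contribution
print(f"observed drop: ${observed_drop:,}, "
      f"{share_confirmed:.0%} of the claimed contribution")
```

If only a few percent of the claimed 500K disappears when the channel goes dark, the 8x ROAS in the report is, in Constantine's phrase, a fairy tale.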
Speaker 3: Constantine, thank you. I mean, this has been super, super helpful in allowing even us to think about how we're measuring things differently. If there's one last actionable insight, one thing you want viewers and listeners to go back and implement in their business today, what would that one thing be? Speaker 1: I'm a little bit biased because I've been working in this space for so many years, but from what I've been seeing in clients we work with and clients we're talking to, this martech and ad tech space is so complex, and it is also, in a sense, corrupt: vendors partner with ad platforms and agencies, every party has its own agenda, and you're just a CMO of a D2C brand, or a founder of a D2C brand, and you don't have any knowledge of this huge ecosystem where everyone is interconnected. My advice would be: hire some external advisor for at least, I don't know, three hours a month. Someone who can be on your side, not affiliated with ad platforms, not affiliated with vendors, not going to ShopTalk after-parties and drinking champagne with someone, but someone who is working for the CEO, protecting their budget and eliminating waste, like DOGE in government now. Something like that. Someone smart who works for you at least three hours a month as an external advisor, and who you know is not affiliated with ad platforms, agencies, or anyone who is interested in increased spend, increased spend, increased spend. Unknown Speaker: Chew on This. Speaker 2: That was great. Speaker 3: If you want more from us, follow us on Twitter, follow us on Instagram, follow us on TikTok, and check out the website ChewOnThis.io.

This transcript page is part of the Billion Dollar Sellers Content Hub.
