In this issue I decided to try something new: to record the audio version of this week’s newsletter. So, here we go. Enjoy!
Using a different attribution model for each marketing campaign is like cheating.
I know, I know, I get it. If you are the newsletter guy, you’ll definitely want to report 10 new leads instead of the actual 8, by using the Last Interaction (last click) attribution model. And if you are the social media guy, you will surely prefer to report 20 new leads instead of 15, but that can only happen by not applying any attribution model at all. Still, at the end of the day, you will only have 23 new leads, not 30.
So what’s the point of lying to yourself? Why go through all that trouble to get inaccurate data? It defeats the whole purpose of using attribution models in the first place.
And, believe it or not, this is common practice.
It is actually preferable to use no attribution model at all and apply a standard discount of 10%–30% to your results, rather than work with distorted data.
But then again – if you have the option of using attribution models, why not make the best of it?
The whole point is to be able to objectively compare the efficiency of your campaigns, whichever department is responsible for them.
How? Consistency is key. You can only do this by using the same attribution model across all campaigns; otherwise the results will be skewed.
What attribution models are there, you ask?
Well, there’s First Interaction, Last Interaction, Last Non-Direct Click, Linear, and Position Based. And then of course: Time Decay.
The Last Interaction (last click) attribution model credits the last touch point of a user before they become a lead or customer. The reason it’s so popular goes back to the beginnings of the Internet, when analytics tools were only capable of linking traffic sources to sessions, not users.
Still, people usually interact with more than one traffic source before they convert. So considering only the last click doesn’t do justice to the earlier traffic sources, which all contributed to the visitor converting.
This led to the creation of a variety of attribution models that take the previous clicks into account as well, not just the last one.
Arguably, the most popular attribution model on the market at the moment is Last Non-Direct Click, introduced by Google Analytics. It’s an improved version of Last Interaction, and a logical step further, as it ignores direct visits.
Why? Because visitors often find your site through a traffic source, be it paid or not, but take their time before making a purchase decision. They might come back to your site a few hours or days later and convert. So it’s only natural to attribute the conversion to the traffic source that preceded the direct visit to your site.
Now, promoting new products or services via social media doesn’t usually trigger sales right away, but it certainly has a big impact on the visitors. In this case, a Position Based model (a.k.a. the U-shaped attribution model) might work best, as it emphasizes the first and last interaction.
But if you only use social media for promoting your products, and no other marketing channels, First Interaction might work best for you, as that is the only channel that brings in clients.
There’s also the Linear attribution model. As expected, this model treats all traffic sources equally and gives each an equal share of the conversion. For instance, if someone converts after using 5 traffic sources, each receives 20%. Technically speaking, this is a very easy-to-use model, but it doesn’t necessarily reflect reality.
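To make the split above concrete, here is a minimal sketch of linear attribution. The function name and the example source labels are my own for illustration, not from any particular analytics tool:

```python
def linear_attribution(sources):
    """Split one conversion equally across all traffic sources."""
    share = 1.0 / len(sources)
    return {source: share for source in sources}

# Five traffic sources -> each one gets 20% of the conversion
credit = linear_attribution(["organic", "email", "social", "ppc", "referral"])
```

Every source ends up with the same share, which is exactly why the model is easy to apply and equally easy to criticize.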
Last, but not least, there’s Time Decay – the model of choice used by InnerTrends.
Why Time Decay?
Here at InnerTrends we think that in the vast majority of cases it is not realistic to attribute all the credit to one traffic source alone. It is also impractical to take too many traffic sources into account, so only the relevant ones should make the cut and be analysed. Time Decay does all that, offering a good balance between the value it brings and how easy it is to apply.
So, Time Decay takes into account all the traffic sources a user touched before converting, but assigns them credit in decreasing order: the largest percentage of the conversion goes to the last traffic source, then to the last-but-one, and so on down to the first.
Basically, this model considers that the first traffic sources put your product on the map for the users, but they were not powerful enough to attract them right away, or maybe the users weren’t ready to commit at that point.
These are the advantages of using Time Decay:
- Every traffic source gets a score, none of them is ignored as long as it is relevant.
- It uses a simple algorithm that allocates points to each traffic source according to how close it is to the moment of conversion.
- The traffic sources closest to the decision moment get the highest scores. It’s just fair, isn’t it? If one of the first traffic sources had been so efficient, why didn’t the user convert faster?
- Traffic sources have a life cycle of their own, so naturally they register fluctuations in performance. Finding out about them as quickly as possible would be a huge opportunity for improvement. Time Decay gives you this opportunity. How?
Precisely because it gives more weight to the latest traffic sources: the bigger the performance fluctuations of those traffic sources, the higher their impact on the overall results.
So, by using Time Decay, you will be able to prioritize which traffic source should be fixed first with more certainty than by using other attribution models.
- It has a standard definition, which makes it objective. Other attribution models can be adjusted by various workmates to suit their campaigns better. Consider the position based model for instance: you have to set the percentages of each media touch point.
But, like every other attribution model, Time Decay has its downsides.
Of course, which model doesn’t? But any issue is an opportunity for improvement.
What are the issues and what can you do about them?
For one, not every session of a user is to be included in the calculation. Some of them are irrelevant and, if taken into account, could only lead to inaccurate results.
For instance, if a user clicks on your site by mistake, and leaves it just as quickly, that source should be ignored. And so should a session that was opened and left untouched for a very long time.
Say a user reached your site, and decided to explore it later on, but forgot to do so for a few days. There is no activity on his part, but every time he turns on his computer, another session is registered. Will you count every session, or ignore that traffic source until you register some activity?
I think you know where I’m going with this.
How to improve the Time Decay attribution model?
Impose certain limits. For instance:
- Consider only the relevant traffic sources.
- Set a minimum amount of time for a session to be included in your calculations. For instance, if a user spends less than 30 seconds on your site, that session is not valuable.
- Dismiss all sessions with no or very little activity. There should be a minimum of engagement on the part of the user to get some value out of his time spent exploring your site.
This way you’ll make sure that users didn’t land on your site by mistake, and that the traffic sources you are analyzing brought real value to the table and are worth the analysis.
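The three limits above can be sketched as a simple filter. The session fields (`source`, `duration`, `events`) and the default thresholds are illustrative assumptions, except for the 30-second minimum, which comes from the text:

```python
def relevant_sessions(sessions, min_seconds=30, min_events=1):
    """Keep only sessions long and active enough to signal real intent.
    Each session is a dict with a 'source' label, a 'duration' in
    seconds, and an 'events' count of user interactions."""
    return [
        s for s in sessions
        if s["duration"] >= min_seconds and s["events"] >= min_events
    ]

sessions = [
    {"source": "ppc", "duration": 12, "events": 0},      # accidental click: ignored
    {"source": "email", "duration": 90, "events": 4},    # real engagement: kept
    {"source": "direct", "duration": 3600, "events": 0}, # idle, untouched tab: ignored
]
filtered = relevant_sessions(sessions)
```

Only the sessions that survive this filter would then be fed into the Time Decay calculation.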