One of the biggest annoyances for every paid-traffic affiliate marketer is dealing with bot traffic, generated by the huge number of BOTs surfing the internet. According to some estimates, more than half of all internet traffic is actually caused by BOTs in one form or another – crawlers, spiders, rippers … some are legit tools, some not so much.
There is not much we can do about the Google crawler, for example – SE crawlers are an accepted part of the internet ecosystem. There are many legit reasons for running automated systems that mimic user behavior, so I’m not gonna complain about such tools at all.
But what really SUCKS is BOTs deployed by fraudsters in order to defraud advertisers and media buyers. There is a whole lot of cheaters who create tons of sites with fake traffic and sell these BOT impressions to various traffic networks.
To get an idea about how HUGE this problem actually is, read this Forbes article : https://www.forbes.com/sites/thomasbrewster/2016/12/20/methbot-biggest-ad-fraud-busted/
Some traffic networks do their best to detect this fake traffic, but many of them just turn a blind eye to the problem – every impression they can sell brings them profit, whether it’s real or fake … so some networks simply don’t care and act like the problem is not theirs to solve.
One way or another, BOTs don’t buy, so we don’t want them in our campaigns – or at least we want as few of them as possible. Since these clicks totally screw up the optimization process, it’s in the interest of every affiliate to detect them and block placements/sources with too high a % of fraudulent traffic.
When optimizing a new campaign, you need to make sure that you are working with human traffic; it’s completely pointless to attempt any optimization on fake BOT traffic – they won’t convert, they “click” too much, they screw up all the data and distort the click distribution …
That’s why it’s absolutely crucial to know how big a % of the traffic is fake and which placements have the highest % of bots … and block those.
SO HOW DO WE DETECT BOT TRAFFIC?
In all honesty, there is no 100% reliable method out there. Some bots are so advanced that it takes literal detective work to uncover them, and that’s not something we can do ourselves. There are specialized companies that audit traffic and combat ad fraud – forensiq.com, for example – but their services are expensive.
The good news is, a large part of the BOTs are “stupid” and we can catch them with a few simple tricks. Let me give you a few tips on how to do it. Please keep in mind that each of them requires some kind of a tracker, like Voluum, for example. You need this to be able to see which placements/sources are actually sending the BOT traffic.
The best option is a tracker that supports “multi-offer campaigns” – you will be sending legit traffic to offer 1 and BOT traffic to offer 2. In Voluum, for example, you can handle this easily using the “click/1, click/2 URLs”. Every tracker is different, so I’m not gonna get into details here – consult the support of your tracking solution.
1. BOT click trap : Put “invisible” clickable links on your landing pages. These can be 1×1 pixel images, text in the same (or very similar) color as the background, or links placed very low on the page (below the fold) in areas that stay hidden (by disabling scrolling).
Point these invisible links to offer #2. If it’s getting clicks, they come from BOTs … simply because no human could possibly notice them, and stupid BOTs click on everything.
This is the oldest detection method, so some BOTs are designed to pass these traps – especially text links in the exact same color are often ignored, which is why it’s recommended to use a SIMILAR color and not the same one. The same goes for the 1×1 links, but if you change them to 10×2 for example, humans will still not notice 🙂
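For illustration, a hypothetical set of such trap links might look like this – the http://www.yourvoluumurl.com/click2 URL is just a placeholder for your own tracker’s offer #2 click URL:

```html
<!-- Bot click traps: invisible to humans, clickable by naive bots.
     Replace the click2 URL with your tracker's actual offer #2 click URL. -->

<!-- 1. Tiny image link (10x2 px instead of 1x1, to dodge bots that skip 1x1) -->
<a href="http://www.yourvoluumurl.com/click2">
  <img src="pixel.png" width="10" height="2" alt="">
</a>

<!-- 2. Text link in a color SIMILAR (not identical) to the white background -->
<a href="http://www.yourvoluumurl.com/click2"
   style="color:#fdfdfd; background:#ffffff; text-decoration:none;">offer</a>

<!-- 3. Link pushed far below the fold on a page with scrolling disabled -->
<a href="http://www.yourvoluumurl.com/click2"
   style="position:absolute; top:5000px;">offer</a>
```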
2. JS redirect test : Put this code into the <head> part of the HTML file:
<script>window.location.href = "http://www.yourvoluumurl.com/click";</script>
You have to replace http://www.yourvoluumurl.com/click with your actual click URL, depending on which tracker you are using.
In this case, some BOTs will not understand the JS redirect and will never reach the offer. Real users will be redirected, though. As a result, the higher the CTR of this LP, the more human traffic you are getting. With a 0% CTR on this LP, all the traffic was BOTs.
3. Timed redirect : You can use basically any landing page for this, but it has to contain at least one clickable link. Point this link/button to offer #2.
Then place the snippet below into the <head> part of the source code. It points to offer #1 :
<script>
function redirect() {
  window.location.href = "http://www.yourvoluumurl.com/click1";
}
</script>
And put this code into the <body> tag; you can change the number 200 to 300, 400 or so :
<body onload="setTimeout(redirect, 200);">
What’s gonna happen now is this : the LP will load, but after 200 ms (or whatever delay you set) it will automatically redirect to offer #1. The redirect happens too fast for a human to click any link on the LP, but BOTs can – so if you see clicks on offer #2, they come from BOTs. And additionally, if you don’t see clicks on offer #1, it means there are BOTs that don’t understand the redirect … so it’s like a double check.
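Putting the pieces together, a minimal complete lander for this method could look something like the sketch below – the two URLs are placeholders for your tracker’s offer #1 and offer #2 click URLs:

```html
<!DOCTYPE html>
<html>
<head>
  <script>
    // Called by the timer below; sends the visitor to offer #1 via the tracker.
    function redirect() {
      window.location.href = "http://www.yourvoluumurl.com/click1";
    }
  </script>
</head>
<!-- After 200 ms, humans get whisked away to offer #1 -->
<body onload="setTimeout(redirect, 200);">
  <!-- Any click here happens faster than a human could manage = BOT -->
  <a href="http://www.yourvoluumurl.com/click2">Click here</a>
</body>
</html>
```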
NOTE: Let me repeat that all these methods are simple and pretty basic, so they will not detect advanced BOTs. BUT, a large number of BOTs really are this simple, so it’s still very effective to use such methods, especially when testing a new traffic source for the first time.
DO I HAVE TO KILL EVERY PLACEMENT/SOURCE WITH BOTS?
Let me tell you one thing : NOT A SINGLE traffic source is free of BOT traffic. It is everywhere 🙂 The problem starts when the % of fake traffic reaches a certain level. As a rule of thumb, it’s virtually impossible to make a placement work once the % reaches 70% or 80% … so feel free to block such placements immediately.
The rest CAN work, but it makes optimization complicated.
Here is a simple approach that I’m using and it should work for you too. When starting a new campaign, I run an initial BOT test and cut every placement that gives me more bots than the AVERAGE of that particular source. It becomes pretty visible after a rather small amount of traffic.
It pretty much always follows this pattern : some placements have a super high % of bot traffic, the majority are in the middle, and some have a very low % of bots. I cut the super-high-% ones and set a threshold based on the rest, simply by estimating a rough average. Then I cut everything that has more bots than this average.
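As a rough sketch of that thresholding step (the placement names and numbers below are made up for illustration), you could export the bot % per placement from your tracker and flag everything above the average:

```javascript
// Hypothetical per-placement bot stats, as exported from a tracker.
const placements = [
  { id: "site-101", botPct: 85 }, // super high -> would be cut immediately anyway
  { id: "site-102", botPct: 30 },
  { id: "site-103", botPct: 25 },
  { id: "site-104", botPct: 5  }, // very clean
];

// Drop the obvious junk (70%+ bots) before estimating the rough average.
const usable = placements.filter(p => p.botPct < 70);
const avg = usable.reduce((sum, p) => sum + p.botPct, 0) / usable.length;

// Block anything at or above the average; keep the better part for optimization.
const blocked = placements.filter(p => p.botPct >= avg).map(p => p.id);
const keep = placements.filter(p => p.botPct < avg).map(p => p.id);

console.log(avg);     // -> 20
console.log(blocked); // -> [ 'site-101', 'site-102', 'site-103' ]
console.log(keep);    // -> [ 'site-104' ]
```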
Now I know that I’m working with the BETTER part of the placements of this particular source. At this point, I switch to regular landers and start with the normal optimization process. In case I’m able to get into profit, I unblock some of the placements that I initially blocked due to higher than average BOT % … some of them will work too.
A TIP FOR YOU:
Keep one thing in mind when buying traffic : it’s an auction! All the regular media buyers, CPA marketers and other affiliates buy the traffic because they are able to turn a profit on it, and that is what sets the bid levels. All these people are buying impressions including the BOT traffic, just like you, so this factor has already been priced in.
If all BOTs vanished right now, prices would simply go up quickly, because the traffic would suddenly become more productive. With this in mind : the presence of BOTs is mainly a major optimization problem; it also causes financial losses, but the auction model keeps them somewhat in check.
Except for one type of advertisers – branding companies who only look at numbers, without actually measuring the effect of their campaigns. They will buy anything that looks good on paper. That’s why traffic networks love them so much and that’s why performance marketers hate them 🙂
Thanks for reading!
But I have a question.
Now we can exclude sites or zones that have a high % of robots.
But I have seen a lot of sites with a high robot % where many affiliates are still running.
I’m sure they can make a profit.
Do they have a way to keep robots from generating impressions on their banners?
There are several possible reasons why they are doing it :
1. They don’t run bot detection and keep on losing on those placements. Maybe they are running tests.
2. Even a high-bot-% placement can work, it’s just very hard to optimize such traffic. That’s why I advise testing on placements with a low bot % first and then retesting the high-bot-% placements.
3. It could be branding advertisers that don’t care 🙂
4. Bot % is not the same for all GEOs or targeting options, so they might be targeting a segment with a lower bot %.
… there could be more reasons for this too 🙂
How about situations where you are using a lander? I intend to use this on my RON campaign so that I can keep picking out dubious site IDs while still running traffic through my LP.
I’m using Voluum and this is my flow now:
Campaign Link (CL) –> Landing Page (LP) –> Offer Page (OP)
1. Based on your 3rd method, what should I do to make it flow:
CL –> Bot Lander –> LP –> OP ?
2. On this part:
window.location.href = "http://www.yourvoluumurl.com/click";
What should I replace http://www.yourvoluumurl.com/click with to make the bot lander redirect to my LP without screwing up data/tracking/postbacks? Placing the original campaign link would just send it into an infinite loop, I think.
Hope you can help… cuz my traffic source seems to have a new wave of bot traffic that I’m trying in vain to detect.
I have a campaign with following path in Voluum:
Campaign (A) –> LP (A) –> Offer (A)
I need to change the path to
Campaign (B) –> LP (B) auto redirect to –> Campaign (A) –> LP (A) –> Offer (A)
In Path (1), I have a Voluum token/variable in LP (A) that is extracted from Campaign (A) link that will determine which image will appear on LP (A).
For Path (2), how do I get the token/variable values from Campaign (B) to be communicated to Campaign (A) via LP (B) so that LP (A) can know what image to show from the token/variable values from Campaign (B)?
You overengineered it. It’s very simple.
You auto-redirect to the offer after 300 ms; after that, there are two possible outcomes.
1. It redirected successfully. Click detected in the tracker … yay, it’s a HUMAN.
2. It did not redirect. Click not detected, because the bot closed the page. Shit, it’s a BOT.
So you just monitor CTR and filter out the placements with low stats, like 40, 50% etc.
So good luck!
Yes, unfortunately you are right 🙂 This method is good for catching simple BOTs, that are still the majority of BOTs found in the most popular traffic networks. When it comes to highly advanced BOTs as you described … situation is MUCH more complicated.
Do you have a sample that achieves a perfect bot, one that is indistinguishable from a human?
Nope, I don’t run bots.
Hey. I know another way to combat bots. It works with networks running on the CPI/CPL model. Then the networks themselves have to deal with their traffic, with their bots, proxies and other shit :)