Once you've already run campaigns, already poured traffic - some of it profitable, some not - a familiar disease sets in. You think you've got it all figured out and already know everything. You start launching "like last time, only fresh". That text and headline worked reliably before, but this time, for some reason, they don't. And you sit there wondering: what's wrong?
The secret is simple: everything changes - trends, creatives, algorithms, all of it. What worked yesterday runs at a loss today. And the only way to find out is to test.
The devil is in the details, and sometimes replacing nothing but the text is enough to start turning a profit. It seems like a trifle - after all, 90% of success is supposedly the creative - but it's exactly these trifles that decide whether a campaign succeeds. And if you don't test them, you're slowly digging a hole under your own profit...
How should you test? Probably the most honest answer is: systematically. Have a plan, understand what you're changing and why, and know how you'll measure the effect of those changes.
So today, let's talk about testing.
First, understand which hypothesis you are testing. And keep a record of it - for example, a table of hypotheses and their results; that's what being systematic looks like in practice. A minimal sketch of such a log is below.
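One way to keep that table is a plain CSV you append to after every test. This is just a sketch - the column layout and file name are assumptions, adjust them to whatever you actually track:

```python
import csv
from datetime import date

# Append one row per tested hypothesis.
# The columns below are an assumed layout, not a standard.
def log_hypothesis(path, hypothesis, variable_changed, metric, result, verdict):
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([date.today().isoformat(), hypothesis,
                         variable_changed, metric, result, verdict])

log_hypothesis("tests.csv",
               "Red background lifts CTR",   # what we believe
               "creative background color",  # the ONE thing we changed
               "CTR",                        # how we measure it
               "1.4% vs 0.9%",               # observed result
               "keep")                       # scale / kill / retest
```

The point isn't the tool - a spreadsheet works just as well. The point is that every hypothesis, the single variable behind it, and the verdict get written down.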
Say you made 6 creatives with the same approach and launched them on the same target audience so the results are comparable. First, look at CTR. Sure, sometimes CTR is bad (under 1%) while the cost per lead is fine - then it makes sense to keep that ad running. But on average, the lower the CTR, the higher the cost per lead. Next, look at the cost per lead, because a high CTR doesn't guarantee a cheap lead. And the lead isn't the end goal anyway - conversion is. So if the difference in lead price is insignificant, it's worth spending a small budget on those creatives and seeing whether they convert differently. Such a difference almost always exists, and naturally you pick the creative with the lowest cost per conversion. After this analysis, scale the winner and test a new pack of creatives.
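To make that funnel concrete, here's a minimal sketch of the same arithmetic - CTR, cost per lead, cost per conversion, winner by conversion cost. The creative names and numbers are purely illustrative:

```python
# Illustrative stats per creative: spend, impressions, clicks, leads, conversions.
creatives = {
    "creo_1": {"spend": 50.0, "impressions": 10_000, "clicks": 150, "leads": 12, "conversions": 3},
    "creo_2": {"spend": 50.0, "impressions": 10_000, "clicks": 90,  "leads": 10, "conversions": 4},
}

for name, s in creatives.items():
    ctr = s["clicks"] / s["impressions"] * 100                         # percent
    cpl = s["spend"] / s["leads"] if s["leads"] else float("inf")      # cost per lead
    cpa = s["spend"] / s["conversions"] if s["conversions"] else float("inf")  # cost per conversion
    print(f"{name}: CTR {ctr:.2f}%  CPL ${cpl:.2f}  CPA ${cpa:.2f}")

# Conversion is the end goal, so the winner is the lowest cost per conversion.
winner = min(creatives, key=lambda n: creatives[n]["spend"] / max(creatives[n]["conversions"], 1))
print("scale:", winner)
```

Note how creo_1 wins on CTR but creo_2 wins on cost per conversion - exactly the trap described above.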
So what exactly should we test, assuming we know a working approach but can't find a working creative?
Hooks. The first 5-10 seconds of your video either grab attention or trigger a scroll to the next post - and the lead is lost. This is the stage where most of the traffic is filtered out, so testing the same creative with different hooks makes sense. And it's not only the text you're testing, but also the emotional trigger: curiosity, anger, shock. Usually, once you find a hook that works, other potentially good hooks will share the same emotional trigger.
Call to Action. Two otherwise identical creatives can perform differently simply because of different calls to action. For example, "Musk created a new platform that has already helped hundreds of Canadians" and "Canada is in shock! How did he do it?" will produce different results. You need to test)
Colors. The easiest thing to test, but far from the least important. On some creatives, just getting the color right can double the CTR! In my experience, red backgrounds work best on average, especially when there's a lot of background space - probably because they catch the eye and stand out in the gray feed.
Voice-over. This one is simple: male or female? Swapping the announcer for another voice of the same gender rarely makes sense, unless the previous one was just terrible.
The main mistake in testing is making too many changes at once. Test one hypothesis at a time - don't change both the creative and the landing page and then wonder why it started working. Was it luck? The landing page? Or maybe the headline did the trick... A small sanity check is sketched below.
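You can even encode that rule before launching: describe the control and the new variant by the knobs you could turn, and refuse to run the test if more than one knob differs. The field names here are hypothetical - use whatever variables you actually test:

```python
# Each variant is described by the knobs you could turn (hypothetical fields).
control = {"hook": "A", "cta": "Learn more", "background": "red", "landing": "lp_v1"}
variant = {"hook": "B", "cta": "Learn more", "background": "red", "landing": "lp_v1"}

changed = [k for k in control if control[k] != variant[k]]
assert len(changed) == 1, f"Testing {len(changed)} changes at once: {changed}"
print("clean test, variable under test:", changed[0])
```

If the assert fires, you've designed a test whose result you won't be able to interpret.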
You don't test just to find something "better" - you test to build a library of what works. And if you approach it systematically, you won't have to invent from scratch; you'll simply reuse proven patterns plus the experience gathered from your tests.