Email is Dead – Why bother…?
Everybody knows that email is dying, and that social media or augmented reality (or something else) will take over and change the direct digital marketing world. I am made aware of this by the dozen or so emails every day advising me of the situation, most of them offering best practice advice, encouraging me (in my words – not theirs) to jump ship before I and my email marketing skills sink.
So, it’s in print – it must be true! Some of these emails are sent to me from companies who are ‘in the know’, so who am I to ignore them? And why should I disbelieve them?
Separately, I attend webinars, seminars and conferences on email marketing – all provided for the benefit of advising me what I should do more of and what I should ditch in the world of email marketing (until of course email has well and truly sunk – see above paragraph). I am advised to send emails in the evenings to B2C recipients and in the daytime to B2B recipients; if I send on a Friday I will reap greater results than if I send on a Monday; I should ensure that the emails I send contain only a tiny amount of image content versus text content; and most confusingly (because I really want to follow the advice of these experts, as they have obviously spent long hours in doing their research for my benefit) the optimum number of characters for a subject line is 3…or 17… or 37… or 43…?
Worry not, help is at hand…. I am offering yet more advice to this pool of best practice expertise from subject matter experts.
My advice is simple – don’t blindly follow either my advice, or the advice of others (experts or not), instead please listen to the advice, test it for yourself and develop your own best practice.
I should point out at this juncture that I have been very unfair to those offering best practice recommendations (above) – this advice is (I am sure) always offered in good faith by others who have tested it and found that it worked for them and their recipients; some of it will also have been based on extensive research across a number of companies…and all of it is offered in the hope that others will benefit from it. The point I am making, of course, is that when we are all as busy as we currently are it is easy to simply take advice on board without question, and to forget that what is ‘best practice’ for one company and its recipients may be completely different from what is best practice for another company and its recipients.
… And as for those advising me that email is dying due to social media and augmented reality…, well my advice to them is to watch and learn – there are already many ways in which these media complement each other and this is just the beginning of a beautiful, long-lasting relationship; social media has already begun to change the digital marketing landscape, as will augmented reality…and then the next media…and then the next – but as long as current media does not stand still there is room for them all if used appropriately and not independently of each other.
Anyway, on with the theme…
Nobody from I-know-everything-there-is-to-know-about-email.com can know what works best for your recipients or for your brand…in fact, without carrying out a single piece of research (which is definitely not my advice here), you are almost certainly better placed than any email expert sitting outside of your brand to understand what works best for your recipients (and your brand)…but you don’t want to stop there, you want to keep improving…why else would you be reading this article?
Using your current data, creative and email software you can run some simple tests – you’ll be amazed at how much more you can improve your response rates by simply applying the learnings you have gathered from running your own tests, and how much more you understand about your recipients for future (powerful) campaigns.
Why is it important to run email tests?
In today’s economy it is important to maximise the returns you get from every campaign, regardless of channel or media. Email is just one of the many media available to do this, and with (usually) a high ROI due to the ‘cheap’ nature of email sends it makes sense to further maximise its potential. Email (as with most, if not all, digital media) offers the luxury of ‘instant’ results, visible and specific to each recipient, which in turn allows us to obtain highly accurate results as and when we require them – it appears to be the perfect medium for running tests, which can serve only (when carried out effectively) to improve your already favourable ROI.
Email testing allows you to optimise email content specific to the target group; it also allows you to know when to send each group of (or individual) recipients the email which will deliver the greatest results. It allows you to test which copy, content, offers etc. each group of customers reacts most favourably toward, enabling you to continually improve your email results.
It is important to constantly re-assess what you are doing in email marketing: is it still working for your recipients? Is there something you could be doing better? Are you getting the greatest ROI? You will be unable to answer any of these questions truthfully without testing – guesswork really doesn’t count anymore (if it ever did) when it is so easy to run tests.
“But…” I hear you say…
“…but surely it’s complex and time-consuming to run my own tests…and I can save so much time if I just implement others’ advice…”
There’s really no need for it to take much time to set up and measure your tests, and each test will only be as complex as you want to make it. Simply employing advice on email content, time or day of send, or subject line length without first testing it may improve your response rates if you are very lucky, but…would you normally gamble on the back of somebody else’s advice without first checking the odds and calculating whether you can afford the risk?
Now that we’ve discovered that we are not irresponsible gamblers, let’s get started…
What will you test? When will you begin testing? Here is where the excitement begins – you’ve just started to change the odds in your favour!
Well, the list of what you could test is almost endless – your choice however of what to concentrate on first will no doubt be based either on the targets you are measured against (quite right too), or the ‘lowest hanging fruit’ (the area/s that will bring about the biggest gains) – hopefully you’ll be lucky enough to find that these 2 things are the same.
As we cannot cover every test scenario in a single article we may as well begin with the basics…the subject line – so simple yet so fundamental…if your subject line isn’t compelling enough to make your recipients open the email then all your efforts will be wasted.
Before you begin to get creative with potential subject lines, however, you need to give some thought to planning the test. The testing process really needs to be defined before any testing is undertaken; it is quite simple, but if not followed the test is likely to fail…there are only 6 steps to simple testing:
Step 1 – Definition: Without first defining exactly what you wish to test you may easily find that your goals move, resulting in you attempting to measure the results of a test that was never designed to deliver the goal that you are now testing. This sounds very simple, indeed obvious, but (if you are like me) you will find that you want to run before you can walk – attempting (before you have even begun to complete the first test you have already designed) to find and implement ways to improve it…and before you know it you will be attempting to measure something the original test was never designed to deliver…or worse, you will change the design of the test part way through, delivering neither one goal nor the other.
Without clearly defining what it is you wish to test, how can you be sure whether the test has succeeded or failed?
Step 2 – Segmentation: Now that you know exactly what it is you wish to test, you will need to segment your database, allowing a group of recipients for each part of the test. For example, if the email is going out to 100,000 recipients and you are running an A/B test, you will need to split your database into 3 groups – 2 smaller groups of 5,000 and 1 large group of 90,000 recipients (assuming that a control segment has already been excluded – if you don’t already have a control segment, you may choose to create a 3rd group of 5,000 recipients from the remaining 90,000). By segmenting in this manner you are able to perform your test, analyse your results and send the majority of recipients the email that generated the most favourable results.
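If you want to do this split outside of your email software, it takes only a few lines. The sketch below is a minimal illustration in Python – the recipient addresses, group sizes and function name are all my own invention, not a reference to any particular email tool:

```python
import random

def split_for_ab_test(recipients, test_size=5000, seed=42):
    """Shuffle the list, carve out two equal test groups (A and B),
    and leave everyone else in the main send group."""
    pool = list(recipients)
    random.Random(seed).shuffle(pool)  # fixed seed so the split is reproducible
    group_a = pool[:test_size]
    group_b = pool[test_size:2 * test_size]
    remainder = pool[2 * test_size:]
    return group_a, group_b, remainder

# 100,000 recipients -> 5,000 + 5,000 + 90,000, matching the example above
recipients = [f"user{i}@example.com" for i in range(100_000)]
a, b, rest = split_for_ab_test(recipients)
print(len(a), len(b), len(rest))  # 5000 5000 90000
```

The shuffle matters: if you split an alphabetically or chronologically sorted list without shuffling, your test groups may not be representative of the full database.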
Step 3 – Email creation and tracking: Design your 2 test emails (following the rules below) and give them each different tracking codes to ensure that you can measure each one separately for comparison.
Step 4 – Send: Simultaneously send both emails…wait…and watch.
In sending your emails simultaneously you are ensuring (to the best of your ability) that all conditions remain the same apart from the one thing you are testing. If you were to send the 2nd email at a different time, or on a different day or week, you could not be sure that your respondents would behave as they would have done when you sent your first email (we all know that our behaviour or mood can change from one moment to the next based on any number of things, from the weather to the date in relation to pay day), and this could produce a very different result to the one you may have seen had you sent both emails together. In successfully carrying out any test you must ensure that conditions between both test scenarios are comparable wherever humanly possible.
Step 5 – Analyse: When the test has been running long enough to drive reliable results you will be able to measure and compare them. In carrying out this analysis you will want to consider whether there was anything that didn’t work as expected. If you do see unexpected results (or test conditions altered in a way which makes your results unreliable) you will need to consider whether you can still utilise your results and send the remainder of the database the favourable email, or whether you will need to carry out a further test prior to this final send. Should you need to carry out a further test, you will again need to split 2 groups of 5,000 from the remaining batch of 90,000, leaving 80,000 recipients to send to once you have compared the results of this (your final) test. Then simply repeat steps 3–5, ensuring that conditions between the 2 (new) test scenarios are comparable.
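‘Long enough to drive reliable results’ is worth making concrete. One common way to sanity-check whether the difference between version A and version B is real rather than noise is a two-proportion z-test. The sketch below is a rough illustration only – the counts are invented, and if your email platform offers its own significance reporting you should prefer that:

```python
from math import sqrt, erf

def z_test_open_rates(opens_a, sends_a, opens_b, sends_b):
    """Two-proportion z-test: is the difference between two open
    rates likely to be real, or could it easily be chance?"""
    p_a = opens_a / sends_a
    p_b = opens_b / sends_b
    # pooled open rate under the assumption the two versions perform the same
    p_pool = (opens_a + opens_b) / (sends_a + sends_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / sends_a + 1 / sends_b))
    z = (p_a - p_b) / se
    # two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# illustrative numbers: A opened 900/5,000 times, B opened 1,150/5,000 times
z, p = z_test_open_rates(900, 5000, 1150, 5000)
```

A p-value below 0.05 is the conventional (if arbitrary) threshold for treating the difference as real; with a difference this large on 5,000-recipient groups, the test will comfortably clear it.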
Step 6 – Control: You have now compared data from the final test and defined which of the 2 test scenarios was more favourable, leaving you with the ability to confidently send the ‘winning’ email to the remainder of your recipients.
In addition to simply using your test results for this email send however, you must adopt your learnings for future email sends – after all, you have spent time and effort learning how your recipients respond to email content…the least you can do is continue to send them the content they prefer…and reap the rewards of improved response rates for having done so.
Let’s get back to the subject line test example…
Using the above process, a subject line test would look something like this:
Objective: Increase open rate by 25%
Segments (assuming 100,000 recipients): 4 x 5,000 recipients, 1 x 80,000 recipients
Content: As we cannot use the same subject line for every consecutive send, we can either choose to run a subject line test every time we send an email (this is preferable to any other option, as it represents recipient behaviour relative to the time you want to send the email), or alternatively we could use themes to test subject lines e.g.:
Subject line 1 – Contains a call to action
Subject line 2 – Contains the recipient’s name
Subject line 3 – Very small number of characters
Subject line 4 – Very generic
For this example the subject lines could be:
1 – Buy Now! Free Wallpaper Stripper When You Spend Over £100
2 – Peter, spend over £100 and get a free wallpaper stripper
3 – Free Stripper
4 – Do some DIY this weekend
Create a single email; copy it 4 times so that you have 4 test versions plus a version to send to the remainder of your recipients. Assign an email to each of the 4 test groups, then add a separate tracking code to each of the 5 emails.
Send: (simultaneously send the 4 test emails)…and place bets with your team mates on which subject line you think will win…wait…and watch
Analyse: When each of the 4 test emails has generated enough of a response to define a clear ‘winner’, measure the open rate for each email, define which drove the most opens (and announce which team member won the bet).
Control: Send the email to the remaining 80,000 recipients with the ‘winning’ subject line, record the test results for future use and know that you have maximised your chances of recipients wanting to open the email (significantly increasing your open rate).
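The analyse-and-control step above boils down to a one-liner once you have the tracked counts per version. A minimal sketch, with invented tracking codes and counts purely for illustration:

```python
def winning_subject_line(results):
    """results maps a tracking code to (sends, opens);
    returns the code with the highest open rate."""
    return max(results, key=lambda code: results[code][1] / results[code][0])

# illustrative results for the 4 subject line themes above
results = {
    "SL1_call_to_action": (5000, 850),
    "SL2_first_name":     (5000, 1010),
    "SL3_short":          (5000, 920),
    "SL4_generic":        (5000, 780),
}
print(winning_subject_line(results))  # SL2_first_name
```

Keeping the results keyed by tracking code (rather than by subject line text) also gives you a tidy record to store for the periodic re-tests discussed later.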
As with everything, there need to be a few rules around testing. I have a few of my own rules I will share with you, but I’m sure you will have your own, fitting your brand, that you will wish to add to the list:
- Above all, keep it simple – if it’s not simple you will not find the time or effort to complete the test
- Test only 1 thing at a time, otherwise you will make a very simple test into a very complex one, where you are unable to separate which aspect of the test drove what result.
You can (if you are sure you have the time to do so) carry out a number of separate tests per email as long as each test is carried out independently of each other e.g. (using the 100,000 recipient example) you may choose to run 4 subject line tests like those above (4 x 5,000 recipients), then separately, you may wish to A/B test whether the ‘follow us on Twitter’ button drives a greater number of clicks at the top of the email than at the bottom (utilising a further 2 groups of 5,000 recipients, leaving 70,000 recipients remaining).
In the above examples it is possible to combine both tests whilst carrying out the subject line test, the results of the 2nd test (the Twitter button) will not however be comparable, as these results will be driven by the varying number of recipients opening each of the subject line tests. Due to such added complexities I strongly suggest that if you are new to email testing, you stick to a single test per email for simplicity & time-saving.
- Always begin with your objective or hypothesis e.g. Objective – ‘Increase open rate by 25%’ or Hypothesis – ‘Including the recipient first name in the subject line increases open rate’
- If you have begun with the objective/ hypothesis you will know what you need to measure (in the above example you need to measure open rate). Be absolutely clear when setting your objective exactly what metric/s you will measure to analyse the results of your test.
- Stick to a single measurement where possible (if this is not possible do not exceed 2 measurements per test) – if you make the measurement analysis too complex you are unlikely to find the time to carry it out, and the test has failed before it has begun (even if it would have increased your open rate significantly).
One final rule…Tell your bosses that you are carrying out tests in order to deliver these great results. Get them to take part in the ‘guessing which scenario will win’ competition, so that they are bought-into the importance and relevance of each test. It is very important that you follow this rule – if you don’t shout about the good work you are doing to improve email response rates nobody else will!
Oh, and just one more rule…Don’t just test something once and assume that you will always get the same result (remember that your behaviour changes from one minute to the next…so does that of your recipients). Run each test at least 2 times (once on one email send and then again on another email send); if you get the same result both times you can be reasonably confident that the first was not a fluke. If, however, you see a different result the 2nd time, run the test a 3rd time to see which result is most likely to represent the response rate of your recipients. When you are happy that you have ‘nailed’ the correct result, don’t forget to keep a record of the tests (and results) that you have carried out, because you will need to re-run each test on a periodic basis as recipient behaviour will naturally change over time.
What tools are required for testing?
There is no need to purchase expensive tools to run email tests, generally you only require the email software you currently use, a calculator, somewhere to record your on-going tests and results (e.g. Excel), and PowerPoint (or similar) if you wish to show-off how your test results have improved response rates.
Whilst I have detailed above how to manually run split tests such as a subject line test, you may find that a conversation with your email software provider (or your IT team if you use an in-house email software solution) will result in you finding that you already have access to testing tools which have been built into the software. These tools can make testing much simpler, they may do as much as run the split test for you (naturally you will have to define the test parameters, design the email and define the test groups), and if they are not currently able to do this, you may find that this functionality can easily be built in, allowing you to run a test on every email send, which in turn will deliver improved results on an on-going basis.
My list size is too small to test…
Some lists are naturally smaller than others; this does not, however, mean that you are unable to test. It simply means that you may have to split your full database into 2 halves to run the A/B test, and take the learnings from this test into your next email send. Alternatively, after having run the A/B test on this email send, you could swap the test on the next email send, so that you have effectively doubled the size of each test scenario over the course of 2 email sends.
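The swap approach is easy to set up by hand. A minimal sketch, assuming your whole list fits in memory (function and variable names are my own, not from any email platform):

```python
import random

def swap_test_splits(recipients, seed=7):
    """Halve a small list: on send 1 half gets version A and half gets B;
    on send 2 the halves are swapped, doubling each variant's sample."""
    pool = list(recipients)
    random.Random(seed).shuffle(pool)  # shuffle so halves are representative
    mid = len(pool) // 2
    half1, half2 = pool[:mid], pool[mid:]
    send_1 = {"A": half1, "B": half2}
    send_2 = {"A": half2, "B": half1}  # swapped on the next send
    return send_1, send_2

send_1, send_2 = swap_test_splits([f"user{i}@example.com" for i in range(10_000)])
# each variant reaches 5,000 recipients per send; over 2 sends,
# every recipient has seen both variants once
```

One caveat, in line with the ‘comparable conditions’ rule above: the two sends happen at different times, so pool the swapped results only if nothing obvious (a holiday, a sale, a news event) changed between them.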
Generally speaking, no email list is too small to test, which is just as well really, since testing is the only true way of understanding your recipients’ behaviour, and whilst you may have a fair idea what your recipients respond well to, you will be surprised (through regular testing) how many times you are proved wrong.
An email database can be segmented into various profiles such as frequency and recency of purchase or value of previous purchases, number of documents downloaded or subject matter of the documents downloaded, demographic and geographic data, or clickstream data (from emails already received and from website behaviour) etc. Once you have this segmentation in place, you are able to use varying content according to preferences and profiles. This then allows you to test this varying content on a ‘small’ group per segment; the size of this ‘small’ group is dependent on the size of your database – the more personalised the content, however, the smaller the group will need to be. In an ideal world you would test content on an individual recipient basis – some of which is already possible given the ability to import dynamic content into email, including the download (by individual recipient as they open the email) of personalised content fed by recommendation engines. You are able to test all of this before sending the email containing the most frequently clicked (and converted) content to the remainder of the segment – Yippee.
What else should I test?
Again, this is likely to be defined by your targets or the ‘lowest hanging fruit’ as discussed earlier, but there is almost an infinite number of things you can test, here are just a few:
Personalisation – Should you use the recipient’s name in the subject line (as mentioned above)? Should you use it in the copy? Should you use their first name or their title and surname? Each of these will need to be tested separately in order to provide reliable results.
‘Buy now’ versus ‘More info’ button – It is possible that your recipients respond more to a ‘more info’ button than a ‘buy now’ button. One of the recipient databases I use on a frequent basis generates a higher volume of clicks when I use the ‘more info’ button, but the ‘buy now’ button drives a higher conversion – there is no way I could have known this without testing. Now that I am armed with this knowledge I can use it to my advantage: where generally the objective with this particular list is to increase revenue, there may be times when I need a higher volume of recipients clicking through, i.e. when requesting recipients to check out the new beta site (this is when I would choose to use the ‘more info’ button rather than the ‘buy now’ button).
Day and time of send – This is very important to test, whilst I used the examples above of being advised to email B2B recipients in the daytime and B2C recipients in the evening, this test is not as simple as it may first appear.
Your B2B recipients may be small businesses who spend all day away from the office, and then switch on their computer when they get home at night; conversely your B2C recipients may choose to open their email in their lunch break at work, but may choose not to purchase until they get back home in the evening. This test may have to be split into multiple parts, beginning with analysing what time of day each recipient segment (B2B / B2C) opens their email – if you have a large enough list you may be able to send your email out in waves every hour for a 24 hour period (or even better – every hour over a 7 day period), this will enable you to understand your recipient open rate over time.
Regardless of when you actually send the email, however, depending on how frequently you currently send emails you may find that you have driven specific recipient behaviour to some extent i.e., if you always send at 9am on Tuesdays, you may find that the bulk of ‘opens’ occur between 10am and 12pm on Tuesdays; conversely, your test may show that the bulk of ‘openers’ don’t actually tend to open their email until Wednesday evening (regardless of when the email is sent).
You may find that it is enough to know when your recipients open your emails – this knowledge can at least ensure that you send your email in time for it to appear at the top of their inbox, rather than half way down.
Now that you understand when recipients open your emails however, you are likely to want to know when the bulk of your purchasers/ transactors (I’m sure this isn’t a word, but you know what I mean) open their emails. These are the recipients you really want to target first; they are your bread and butter so you need to ensure that if they open your emails on Tuesday evenings your email is sat at the top of their inbox (regardless of when they make their purchase… remember that some of your recipients open their email at lunch time but choose not to purchase until later).
Don’t worry (at the moment) about inbox timings for recipients who aren’t purchasing – if they are opening their emails and not purchasing then it is likely that the content does not meet their current requirements, so this would be the area you will need to concentrate on prior to inbox timings for these recipients – all is not lost however, now that you understand who these recipients are you are able to run tests around email content and call to action etc.
In addition to testing the time of send you will also want to test day of send….you don’t want your email to sit at the bottom of any of your recipients’ inbox simply because you sent it at the right time but on the wrong day.
Once you understand when each (purchasing) recipient opens your emails you are likely to find that your email software supports the ability to send emails to individual recipients at the time/ day that they open the email. If your email software doesn’t support this you can always group your recipients into ‘approximately the best time to send’ groups to improve your email response rates.
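If your software doesn’t offer per-recipient send-time optimisation, the grouping fallback is straightforward to sketch. The example below is illustrative only – it assumes you can export each recipient’s past open timestamps, and the names are mine:

```python
from collections import Counter

def preferred_send_hour(open_hours):
    """Given the hours (0-23) at which a recipient opened past emails,
    return the most common one as their 'approximate best time to send'."""
    return Counter(open_hours).most_common(1)[0][0]

def group_by_send_hour(history):
    """history maps a recipient to their list of past open hours;
    returns a mapping of send hour -> recipients in that group."""
    groups = {}
    for recipient, hours in history.items():
        groups.setdefault(preferred_send_hour(hours), []).append(recipient)
    return groups

# illustrative history: Ann opens in the evening, Bob in the morning
groups = group_by_send_hour({"ann": [19, 19, 20], "bob": [8, 9, 8]})
# ann lands in the 19:00 bucket, bob in the 08:00 bucket
```

In practice you would want a minimum number of observed opens per recipient before trusting their bucket, and a sensible default hour for everyone else.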
Personalised ‘from’ line –The ‘from’ line on the email is the line in your inbox which details who the email is from e.g. ‘from Sally at Emails R Us.com’.
You may find that your recipients respond better to a ‘from’ line such as ‘Jenny at Domestic Imperial’ rather than ‘Domestic Imperial’. This is generally because adding the name to the ‘from’ line takes away some of the impersonal branding and gives a more ‘personal’ feel to the email and the brand; recipients may even feel that the email was solely put together for them by Jenny, and they might even feel that they could get to speak to Jenny should they decide to phone.
Personally I have found that this works best when speaking on more of an individual basis to much smaller groups of recipients i.e. when asking them if they would like to take part in a study or requesting that they leave a detailed review. You must remember however that the recipient is less likely to respond well if they receive one email from ‘Jenny at Domestic Imperial’, and then the next from ‘(someone else) at Domestic Imperial’; whilst it still appears to be personal, it may appear that they are being moved from one person to another (rather than being assigned to a single member of staff); worse still, should the recipient receive one email from ‘Jenny at Domestic Imperial’ and then another from ‘Domestic Imperial’ they are likely to feel that this has removed any of the personal touch afforded in the first email.
Savings messages – Are recipients more likely to convert if you state ‘save 30%’ versus ‘save £60’ or versus ‘£60 off’?
Sometimes the answer lies more in the number than the actual saving, e.g. if the product is a high value product ‘Save £200’ is generally a stronger message than ‘save 10%’; conversely, with a low value product ‘save 65%’ appears to be much stronger than ‘save £3.45’. Knowing this however does not take away the necessity to test the theory – in a time when the majority of us are looking to save money we are finding ourselves increasingly exposed to massive ‘sale’ and ‘save’ messages; sometimes, therefore, you may find that your recipients are more interested in the actual value of money off rather than the percentage saving (or vice versa) – but you will not know which your recipients respond more favourably toward without testing.
Length of email – Does the length of email really make much difference to the conversion rate? Do recipients stop scrolling down the email before they reach the bottom? Do recipients even bother to scroll at all? Where should I put the product I most want the recipient to click on/ purchase?
These questions are all very specific to your recipients and your brand. Personally I have found that by placing 2 ‘deal’ banners at the bottom of my emails I get a good click rate throughout the full length of the email (remember that, in addition to recipients driving our behaviour we will (to some extent) drive theirs if we consistently follow a set behaviour in email content). I have found through testing that recipients appear to convert better when they click in the middle or bottom of an email, but the higher rate of clicks appear at the top and bottom.
Frequency of send – In an age where recipients are receiving more emails than ever it is important to know how frequently to send emails to your recipients for optimum response rates. Your email sign-up process may take care of this (and if it currently doesn’t you may find that a conversation with your email software provider will enable rules to allow this), but if it doesn’t – or if you prefer not to use this function – you may wish to test the frequency of your email send. You may need to run separate tests for transactional versus non-transactional emails, as the response rates for transactional emails are usually far greater due to the recipient having driven that email send by transacting with you.
Incentive versus no incentive – A common use of incentives is a reactivation offer or an incentive following an abandoned order. It is useful to test these on a frequent basis, as you may find that recipients become used to them and either delay transacting until they receive their next incentive, or alternately they are so used to receiving the same incentive that they no longer notice it (either is a bad result for your brand). You will no doubt wish to test not only the value of the incentive, but also the frequency, possibly by customer segment – you may notice that some customer segments do not alter their transactional behaviour regardless of incentives offered, where others increase their transactional frequency when offered an incentive.
Personal experience has shown me (and shocked me) when first sending emails requesting recipients to leave a review against the products they had purchased – I tested 4 subject lines, 3 of the 4 contained detail of an incentive to leave a review (…a chance to win £100), and the remaining subject line contained no incentive. The winner – by a long way was the subject line with no incentive (reminder to test everything …even when you feel you know the answer).
Abandoned basket email: products or no products – The tendency (where an abandoned basket email is sent) is to show which products remain in the recipient’s basket. You may find, however, that if you send the same email without products you get a higher volume of recipients clicking back through to the site (where there may have been no interest in doing so on receipt of an email detailing the abandoned products, because there is no longer any interest in purchasing those products). On clicking through to the site (regardless of whether the recipient purchases the basket contents) you may find that they are compelled to purchase by a different offer.
There are so many more things that you can test than mentioned above, remember to test the things that enable you to meet your targets first – these are the ones that will most impact your recipients (and undoubtedly the bottom line). Understand which of your emails drive the greatest ROI before deciding what to test – these are the emails to concentrate on driving even better results, once you are happy that you have completed testing these you can move onto the remainder of your emails.
Most tests mentioned above will not need repeating on every email send, although I heartily recommend that you run a subject line test on every send (and run frequent subject line tests for triggered emails). All other tests (non-subject line) will need to be repeated on a frequent basis to ensure that you are keeping up to date with the requirements of your recipients, and maximising your results.
You cannot test personalisation without first deciding that you will either continue to send personalised messages to the same recipients (regardless of test results) or agree that you are ‘comfortable’ or at least ‘OK’ in knowing that these recipients may feel a little ‘cheated’ in receiving a personalised email followed by non-personalised emails.
Expect to be proved wrong; it is likely that you are not always accurate in predicting the behaviour of your recipients (none of us are). Remember that nobody knows exactly what your recipients want all of the time – even they sometimes don’t…that’s why something as simple as the right call to action or ‘save’ message can drive them to make a purchase that (just a few minutes before) they didn’t know they were going to make.
Testing is not done in isolation; you do not simply test then send. You need to keep testing to ensure that the first result was not a one-off. Only then can you be confident that you are maximising the efficiency & effectiveness of your email programme. You will then find that the more things you test the more things you want to test…it really will never end.
Finally, just a reminder of the steps (and rules) above – keep it simple, don’t test anything until your objective (or hypothesis) is clear; have only 1 measure per test; do only 1 test at a time; don’t assume that the first result is always right – always test the same thing twice before moving on and keep track of your tests and results. Shout about your successes – you’re about to change the world of email marketing…for your brand – Enjoy every moment of it!