How to Conduct A/B Tests That Will Drive More Advocacy Actions
By Brian Rubenstein
(This blog post originally appeared on the VoterVoice website resource page.)
There is no better way to learn how to engage your list effectively than to conduct A/B tests regularly. Yet many organizations either don't test at all or run tests that don't provide actionable insights for improving their advocacy campaigns. Follow these simple tips and your organization will gain a better understanding of what motivates its volunteers or members, and will send emails that drive more actions.
What Should You Test?
Nearly every component of your advocacy email can be tested. Here are specific elements that can drive major improvements in your action rates:
Subject Lines
Short vs. long
Urgent vs. non-urgent
Direct vs. intriguing
Serious vs. humorous
Email Content
Including an image vs. no image
Action button vs. only hyperlinked text
Issue description focused on statistics vs. street-level impact
Personal story vs. just the facts
The Voice You Use
Grassroots director vs. lobbyist
Staff vs. volunteer
Formal vs. informal
Email Logistics
Day of the week
Weekday vs. weekend
Time of day
Takeaway: Select two elements you want to test in your next campaign.
Test Only One Thing at a Time
The most common mistake people make is testing multiple variables simultaneously. If you test both a new subject line and issue description in the same email, you won't know which change drove the improved (or decreased) performance.
Keep it simple. Test one specific element and be intentional about the difference between your two versions. Then integrate what you learned into your next set of tests.
Takeaway: Before launching your test, write down the exact element you're testing and what you hope to learn from the results.
Choose Between a Full List or Sample Test
You have two options for testing your list: sending to everyone or testing a sample first.
With a full-list test, half your list receives version A and half receives version B. This works well when you're under time pressure to send your action alert or if you have a smaller list (under 10,000 records).
For less time-sensitive advocacy alerts, test with a smaller segment first. Send version A to 20% of your list and version B to another 20%. After about four hours (or a full day if you have the time), send the winning version to the remaining 60%. This approach lets you send the optimized version to a far larger percentage of your list.
Takeaway: Choose your testing approach based on your timeline and list size. If you have time, use the sample method. If you need quick results, test the full list.
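For illustration, the 20/20/60 sample approach above can be sketched in a few lines of Python. The function name, the 20% share, and the fixed seed are just placeholders, not part of any particular email platform:

```python
import random

def split_for_sample_test(records, sample_share=0.2, seed=42):
    """Split an email list into two test segments and a holdout.

    Hypothetical helper: versions A and B each go to `sample_share`
    of the list; the winning version is later sent to the remainder.
    """
    shuffled = list(records)
    # Shuffle first so the segments are randomly drawn and comparable
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * sample_share)
    version_a = shuffled[:cut]
    version_b = shuffled[cut:2 * cut]
    holdout = shuffled[2 * cut:]  # receives the winner after ~4 hours
    return version_a, version_b, holdout

# A 10,000-record list yields two 2,000-record test segments
a, b, rest = split_for_sample_test(range(10_000))
print(len(a), len(b), len(rest))  # 2000 2000 6000
```

The key detail is the shuffle: if you split an alphabetized or sign-up-date-ordered list without randomizing, the two segments may differ in ways that skew the result.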
Test Regularly and Early in Your Campaigns
It’s beneficial to test a concept at least twice before drawing a definitive conclusion; that helps account for anomalies tied to a particular issue or moment in time. If your two tests show different results, look back to confirm the concepts were consistent across both tests. If they were, run two more tests to determine which result to trust.
Also, don’t wait for a critical legislative moment to begin testing your emails. Conduct most of your testing during the early parts of your campaign so you can send optimized emails that drive the strongest response when the actions matter most.
By building testing into your regular email schedule you’ll be sure to have the answers you need at the most important moments of your campaign.
Takeaway: Create a testing calendar for the next three months, identifying specific elements to test in each campaign.
Track What Matters Most and Use What You Learn
When analyzing your test results, focus on the metrics that truly matter. For advocacy, that typically means completed actions.
While open rates can provide interesting data (though they have become unreliable since Apple's Mail Privacy Protection changes), it's the number of people who actually complete your advocacy action that determines your campaign's success. This is true even for subject lines, as they can impact far more than open rates: your subject line sets the tone for your email content and can affect your clickthrough and action-taking rates as well.
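When comparing completed-action rates between two versions, it helps to check that the gap is larger than random chance would produce. One standard way to do that is a two-proportion z-test; this is a generic statistical sketch, not a feature of any advocacy platform, and the example numbers are made up:

```python
import math

def action_rate_z_test(actions_a, sent_a, actions_b, sent_b):
    """Two-proportion z-test on completed-action rates (sketch).

    Returns the z statistic; |z| >= 1.96 suggests the difference
    is unlikely to be chance at roughly the 95% confidence level.
    """
    p_a = actions_a / sent_a
    p_b = actions_b / sent_b
    # Pooled action rate under the assumption both versions perform equally
    p_pool = (actions_a + actions_b) / (sent_a + sent_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
    return (p_a - p_b) / se

# Example: version A drove 120 actions from 2,000 sends, B drove 80
z = action_rate_z_test(120, 2000, 80, 2000)
print(round(z, 2))  # 2.9 -- above 1.96, so A's lift looks real
```

On small segments a seemingly large gap can still fall under 1.96, which is one more reason to rerun a test before treating its result as the truth.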
Create a simple tracking system that documents:
What you tested
The results for each version
What you learned from the test
The insights from your tests are only valuable if you put them into action. Consult this information regularly when writing future emails and be sure to integrate your learnings.
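A tracking system like the one described above doesn't need to be fancy; a shared spreadsheet works fine. As one possible shape, here is a minimal CSV-based log in Python (the file name, column names, and example results are all hypothetical):

```python
import csv
from datetime import date

LOG_FIELDS = ["date", "element_tested", "version_a_result",
              "version_b_result", "learning"]

def log_test(path, element, result_a, result_b, learning):
    """Append one A/B test record to a CSV log (hypothetical format)."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        if f.tell() == 0:  # brand-new file: write the header row first
            writer.writerow(LOG_FIELDS)
        writer.writerow([date.today().isoformat(), element,
                         result_a, result_b, learning])

log_test("ab_test_log.csv", "subject line: urgent vs. non-urgent",
         "3.1% action rate", "2.4% action rate",
         "Urgent framing lifted completed actions")
```

Whatever tool you use, the point is the same: a record you can skim before writing the next email, so each campaign starts from what the last one taught you.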
Takeaway: Focus on the metrics that truly impact the results of your campaign and integrate those learnings when creating new campaigns and content.