Dynamic Testing experiences are best used when your testing goal is to quickly learn and maximize ROI. Think carefully about the experience you are interested in testing and how to best measure the immediate effect of the experience on the user's behavior.
Keep in mind that Dynamic Testing experience automation drives continued ROI. You should allow a Dynamic Testing experience to run over time to minimize cost and reduce risk. Doing so ensures that your site delivers the best experience to visitors at all times while also providing value to your business. If you promote the variant performing best at a particular moment, you introduce risk because that best-performing variant may not always perform as well in the future. If you pause a Dynamic Testing experience because none of the variants are performing better than the others, you risk not knowing if one begins to drive value. Dynamic Testing experiences automatically recognize and adjust traffic when performance changes occur over time.
Develop and Launch a Dynamic Testing Experience
Proper planning can help you to ensure that a Dynamic Testing experience functions the way you expect it to.
Consider the specific action you want your customers to take as a result of seeing the experience. You can select any goal metric you want. However, the further away the goal is from the actual experience, the longer it may take to begin optimizing and driving value. For example, an experience on your homepage may have less of an impact on an Add to Cart goal than an experience on a product detail page does.
Build any actions that you want to test and add them to a Dynamic Testing experience. Ensure that you validate any included action condition across all variants, and ensure site visitors can qualify for each variant in the Dynamic Testing experience. For example, if variant A has an action on the homepage and product page, then so should variants B and C.
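As a quick sanity check, the page-coverage rule above can be sketched like this. The variant names and pages are hypothetical; the point is that every variant should target the same set of pages so any visitor can qualify for any variant:

```python
# Hypothetical action coverage per variant. Every variant should place
# actions on the same pages so all visitors can qualify for each variant.
variant_pages = {
    "A": {"homepage", "product"},
    "B": {"homepage", "product"},
    "C": {"homepage"},  # missing a product-page action
}

# Flag variants whose page coverage differs from variant A's.
mismatched = {v for v, pages in variant_pages.items()
              if pages != variant_pages["A"]}
print(mismatched)  # {'C'} -- variant C needs a product-page action
```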
Decide whether to use a control variant in the Dynamic Testing experience. Keep in mind that a control acts as a separate variant in a Dynamic Testing experience. This means that if you have two variants and a control, each begins with a 33% traffic distribution.
Perform QA measures and then preview the Dynamic Testing experience setup. Remember that you cannot change the configuration of a Dynamic Testing experience once you activate it.
When you are happy with the configuration, schedule or activate the Dynamic Testing experience.
Analyze Experience Results
Change is a good thing! A Dynamic Testing experience automatically detects performance changes and shifts traffic accordingly to maximize ROI. It may be somewhat outside your comfort zone to rely on a test in which the traffic distribution may favor one variant one day and another variant later on. This behavior is perfectly normal. The Dynamic Testing experience engine never stops working: it picks up on changes to customer behavior as they occur and continually reallocates traffic toward the maximum ROI available at any point in time.
Resist the Urge to Promote a Winner Prematurely
One variant may perform better now, but variant performance may change in the future. Promoting a winner introduces risk that Dynamic Testing experiences help you avoid.
If a variant gets 99% of the traffic, this means the decision engine has recognized a drastic separation in performance between the leader and all other variants.
Lesser-performing variants can still provide value to your test. Even a variant that isn't the overall leader may outperform the leader at certain points in time. In these instances, that variant's current chance to win is lower than the leader's, but not 0%. The decision engine detects this value to the overall test and exploits it to maximize ROI based on your goal metric.
Observe Decisions Made Over Time
The traffic distribution adjusts as the decision engine learns which variant is the best. Traffic distribution fluctuations over time indicate a pattern of learning by the engine. Traffic may gradually trend positively or negatively for variants that are better or worse based on goal performance.
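Monetate's actual decision engine is proprietary, but a common bandit technique, Thompson sampling, illustrates how traffic can shift toward better performers as evidence accumulates. In this hypothetical sketch, each variant keeps a Beta posterior over its conversion rate, and a variant's traffic share follows how often its sampled rate is the highest:

```python
import random

# Hypothetical Thompson-sampling sketch (not Monetate's actual engine).
# stats maps each variant to (conversions, non_conversions) observed so far.
def traffic_share(stats, draws=10_000, seed=42):
    rng = random.Random(seed)
    wins = {v: 0 for v in stats}
    for _ in range(draws):
        # Sample a plausible conversion rate for each variant from its
        # Beta(conversions + 1, non_conversions + 1) posterior.
        samples = {v: rng.betavariate(c + 1, n + 1)
                   for v, (c, n) in stats.items()}
        wins[max(samples, key=samples.get)] += 1
    return {v: w / draws for v, w in wins.items()}

# Hypothetical data: B has converted more often than A so far,
# so B earns the larger share of traffic.
share = traffic_share({"A": (40, 960), "B": (55, 945)})
print(share)
```

Rerunning this with updated counts each day is what produces the gradual trends in traffic distribution described above.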
Compare Variant Goal Performance
Measure the lift of all variants against the worst-performing variant or your control variant. Observe the actual performance of each variant to determine whether a clear winner or loser has separated itself.
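As an illustration, lift against a baseline can be computed like this (the conversion rates and variant names are hypothetical):

```python
# Lift of a variant relative to a baseline (control or worst performer),
# expressed as a percentage.
def lift(variant_rate, baseline_rate):
    return (variant_rate - baseline_rate) / baseline_rate * 100

# Hypothetical conversion rates per variant.
rates = {"control": 0.040, "A": 0.042, "B": 0.050}

lifts = {v: round(lift(r, rates["control"]), 1)
         for v, r in rates.items() if v != "control"}
print(lifts)  # {'A': 5.0, 'B': 25.0}
```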
Consider Business Cycles and Behavioral Effects
The length of time a Dynamic Testing experience has run and outside influences may affect the performance of that experience. Promotions or demand-generation campaigns can lead to temporary fluctuations in variant performance. Account for the ebbs and flows within your known business cycle, such as weekends, holidays, and high- or low-traffic periods.
Understand 'Current Chance to Win'
If one of your variants performs better than the others, it has a higher chance to win based on what Monetate knows right now.
If variant B has an 82% current chance to win, this means that 82% of the time variant B is the best experience to improve your goal metric based on what Monetate knows up until now.
Continued Learning Drives ROI
Even though the current leader receives most of the traffic, that variant may not always perform the best within the experience.
If variant B has an 82% current chance to win in a two-variant test, then variant A is the best experience to improve your goal metric the remaining 18% of the time.
Dynamic Testing experiences are built to continue learning and dynamically reallocate traffic to other variants should an underperforming variant begin to beat the current leader in the future.
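The complementary relationship described above can be sketched as follows, using the hypothetical 82%/18% two-variant example:

```python
# In a hypothetical two-variant test, current chances to win are
# complementary probabilities, so they always sum to 100%.
chance_to_win = {"B": 82.0}
chance_to_win["A"] = 100.0 - chance_to_win["B"]
print(chance_to_win)  # {'B': 82.0, 'A': 18.0}
```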