Introduction
A/B testing, also known as split testing, is a method of comparing two versions of a webpage, email, or
other customer touchpoint to determine which one performs better. In the context of a customer
insight journey, A/B testing helps businesses understand customer behavior, preferences, and pain
points, enabling data-driven decisions that optimize the customer experience.
A/B testing requires a sufficiently large audience to be effective. Running tests on audiences of fewer
than 1,000 members can produce unexpected results, such as unequal distributions across the variants.
Tests on extremely small groups almost always produce disproportionate distributions. For example, if
your audience has only ten members, one might be assigned to variant A while nine go to variant B.
This is expected behavior: when the audience is that small, the statistical model does not have
enough data to reach the target distribution.
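The effect of audience size on variant distribution can be illustrated with a short simulation (illustrative only, not how the product assigns variants):

```python
import random

def split_audience(audience, seed=None):
    """Randomly assign each member to variant A or B with equal probability."""
    rng = random.Random(seed)
    groups = {"A": [], "B": []}
    for member in audience:
        groups[rng.choice("AB")].append(member)
    return groups

# With only ten members, lopsided splits are common.
small = split_audience(range(10), seed=7)
print(len(small["A"]), len(small["B"]))

# With thousands of members, the split stays close to 50/50.
large = split_audience(range(10_000), seed=7)
print(len(large["A"]), len(large["B"]))
```

Running the simulation repeatedly with different seeds shows the small group swinging wildly while the large group hovers near an even split, which is exactly the behavior described above.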
This document outlines how to implement A/B testing within the customer insight journey, including
planning, execution, analysis, and optimization.
Understanding the Customer Insight Journey
The customer insight journey refers to the process of gathering and analyzing data about customer
interactions, behaviors, and feedback across various touchpoints. It typically includes the following
stages:
- Awareness: The customer becomes aware of your brand or product.
- Consideration: The customer evaluates your offering.
- Conversion: The customer makes a purchase or takes a desired action.
- Retention: The customer continues to engage with your brand.
- Advocacy: The customer promotes your brand to others.
A/B testing can be applied at any stage of this journey to improve customer engagement and
satisfaction.
Planning Your A/B Test
Define Your Objective
• Clearly state the goal of your A/B test. Examples:
- Increase email open rates.
- Improve click-through rates on a landing page.
- Boost conversion rates on a checkout page.
Identify the Variable to Test
• Choose one element to test at a time for accurate results. Examples:
- Email subject lines.
- Call-to-action (CTA) buttons.
- Images or visuals.
- Page layouts.
Formulate a Hypothesis
• Create a hypothesis based on your objective. Example:
- "Changing the CTA button color from blue to green will increase click-through rates by 10%."
Select Your Audience
• Divide your audience into two random, equal groups:
- Group A (Control): Receives the original version.
- Group B (Variant): Receives the modified version.
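A random, equal split into control and variant groups can be sketched as follows (an illustrative shuffle-and-halve approach, not the product's implementation):

```python
import random

def split_control_variant(audience, seed=None):
    """Shuffle the audience, then split it into two equal halves."""
    members = list(audience)
    random.Random(seed).shuffle(members)
    mid = len(members) // 2
    return members[:mid], members[mid:]  # (Group A: control, Group B: variant)

control, variant = split_control_variant(range(1000), seed=42)
print(len(control), len(variant))  # 500 500
```

Shuffling before splitting guarantees the groups are the same size and randomly composed, avoiding the skew that per-member coin flips can produce on small audiences.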
Optimizing the Customer Insight Journey
Implement the Winning Version
- Apply the winning version to all customers to improve overall performance.
Iterate and Test Again
- Use insights from the test to formulate new hypotheses and run additional tests.
- Example:
If changing the CTA color improved conversions, test the CTA text next.
Document Learnings
- Record the results and insights from each test to build a knowledge base for future optimization.
Best Practices for A/B Testing in the Customer Insight Journey
1. Test One Variable at a Time:
Isolate variables to accurately determine what caused the change in performance.
2. Segment Your Audience:
Test on specific customer segments (e.g., new vs. returning customers) for more
targeted insights.
3. Run Tests for an Adequate Duration:
Ensure the test runs long enough to account for variations in customer behavior (e.g.,
weekdays vs. weekends).
4. Leverage Customer Feedback:
Combine quantitative A/B test results with qualitative customer feedback for deeper
insights.
5. Monitor External Factors:
Be aware of external factors (e.g., holidays, promotions) that may influence test results.
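Practice 3 raises the question of how long, and on how many customers, a test must run. A standard two-proportion sample-size approximation gives a rough answer; the z-values for 5% significance and 80% power used below are conventional assumptions, not product settings:

```python
from math import ceil

def sample_size_per_variant(p_base, lift, alpha_z=1.96, power_z=0.84):
    """Approximate sample size per variant to detect a relative lift,
    using z-values for 5% significance and 80% power (assumptions)."""
    p_new = p_base * (1 + lift)
    variance = p_base * (1 - p_base) + p_new * (1 - p_new)
    n = ((alpha_z + power_z) ** 2 * variance) / (p_base - p_new) ** 2
    return ceil(n)

# To detect a 10% relative lift on a 5% baseline click-through rate:
print(sample_size_per_variant(0.05, 0.10))
```

The result runs into the tens of thousands per variant, which underlines why small audiences rarely yield reliable winners and why tests must run long enough to accumulate data.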
Examples of A/B Testing in the Customer Insight Journey
Below is a step-by-step guide to implementing an A/B test in the Customer Insights journey.
Process
Start by creating two emails, Version A and Version B, for the A/B test.
1. Go to the Customer Insights Journey module.

2. Click the Email tab on the right side (shown in the screenshot below), then click New.

3. Create email Version A as shown below.

4. Create email Version B as shown below.

5. Both email versions are now visible in the list.

6. Click the Journey tab, then click New journey.

7. Set up the journey's audience, exclusions, time zone, and start time.

8. After setting up the journey, click the + icon to add an action.

9. Click A/B test.

10. Select the channels. Since we are testing emails, select an email version for each of the left and right branches.

When you select the A/B test tile, the side pane opens.
The pane contains the following parameters:

1. Name Your Test
Give your test a unique name to easily identify it in the A/B test panel and journey analytics.
2. Select Email Versions
Choose the two email versions you want to test. You can do this by selecting them from the journey builder or the side panel.
3. Define Your Initial Audience (For Segment-Based Journeys Only)
- If you’re using a segment-based journey, you can hold back a portion of the audience before finalizing results.
- For trigger-based journeys, this option isn’t available since customers enter the journey based on their actions.
4. Set Experiment Distribution
- Decide how you want to split your audience between the two versions.
- The default is 50-50, but you can adjust it between 10% and 90% per version.
- Typically, Version A is the control, and Version B is the test variant.
5. Choose a Winning Metric
- Select the criteria to determine the winner:
- Most opens (best for increasing engagement).
- Most clicks (best for measuring interaction).
- Most goal events completed (best for conversions).
6. Set the Test Duration
- You can either:
- Let the system automatically pick a winner when statistical significance is reached.
- Set a specific end date and time.
- If no winner is determined after 30 days, the system will send the default version.
7. Choose a Default Version
- If no clear winner is found by the deadline, the default version will be sent.
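Dynamics 365 does not document the exact statistical model it uses to pick a winner. A common approach for comparing two conversion rates is a two-proportion z-test, sketched here purely for illustration:

```python
from math import sqrt, erf

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Return the two-sided p-value for a difference in conversion rates."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Example: 120/1000 opens for Version A vs. 160/1000 for Version B.
p = two_proportion_z_test(120, 1000, 160, 1000)
print(p < 0.05)  # significant at the 5% level
```

When the p-value drops below the chosen threshold (commonly 0.05), the difference is declared statistically significant and a winner can be picked; until then, the default version is the fallback.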
A/B Test Types in Segment-Based Journeys
If using segment-based journeys, you can choose:
A/B Test Without a Control Group: Works like a trigger-based journey, where customers enter as they come, and a winner is determined along the way.

A/B Test With a Control Group:
You define how many customers are tested first.
- Example: If you have 100 customers, you may test on 20% (10 per version).
- After testing, the remaining 80% get the winning version.
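The control-group arithmetic in the example above can be expressed as a small helper (illustrative only; the names are hypothetical):

```python
def control_group_sizes(total, test_percent):
    """Number tested per version, plus the holdback that receives the winner."""
    tested = int(total * test_percent / 100)
    per_version = tested // 2
    holdback = total - tested
    return per_version, holdback

per_version, holdback = control_group_sizes(100, 20)
print(per_version, holdback)  # 10 tested per version; 80 held back for the winner
```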


Final Steps
1. Select email Version A for the left branch.


2. Select email Version B for the right branch.

3. Save and publish the journey.

Conclusion
A/B testing is a powerful tool for gaining customer insights and optimizing the customer journey. By
systematically testing and analyzing different elements, businesses can make data-driven decisions that
enhance customer engagement, satisfaction, and loyalty. Regularly incorporating A/B testing into your
strategy ensures continuous improvement and a deeper understanding of your customers’ needs and
preferences.
FAQs
What is A/B testing in Dynamics 365?
A/B testing in Dynamics 365 compares two versions of emails, pages, or other touchpoints to determine which one performs better, helping improve customer engagement.
How do you set up an A/B test in the Customer Insights journey?
In the Customer Insights Journey module, create two versions, define the audience distribution, select a winning metric, and set a test duration.
Which metrics can determine the winner?
Common metrics include open rates, click-through rates, and goal completions, depending on your business objectives.