How to run marketing experiments?

Sep 1, 2023

Experimentation

Introduction

Marketing leaders at fast-growing businesses are constantly searching for new avenues of growth that contribute to the top line and increase market penetration. Most marketers use touch-based attribution to steer this growth work.

In our previous posts, we've discussed why this approach can lead to bad decision making and how Marketing Mix Modeling is a useful method to estimate incrementality. However, to find causality or to validate new ideas, MMM isn't enough. You need experimentation.

In this blog post, we'll explore the importance of conducting marketing experiments and provide an overview of the various methods available. We'll explain why we advocate for Geo Testing, as well as the common pitfalls of experimentation and how to avoid them.

Why run marketing experiments?

Marketers have three broad options for calculating return on marketing investment: touch-based attribution, Marketing Mix Modeling, and experimentation.

While touch-based attribution has become very popular, it has foundational problems. It can't estimate incrementality, it only works for use cases where digital events can be tracked, and it can't support every channel.

Marketing Mix Modeling can estimate incrementality and is a robust alternative to touch-based attribution for measuring return on investment. However, it has one gap: it's based on correlation, not causation.

Controlled experiments are the most scientific way to establish causality. In combination with Marketing Mix Modeling, experiments provide a strong foundation to understand and communicate the incremental sales generated through marketing investments.

A controlled experiment is a scientific test done under controlled conditions, meaning that only certain elements are changed, while all others are kept constant. This is done to reduce the potential for bias or error in the results. It also helps to ensure that any changes in the results are due only to the elements that were changed.

In marketing, we use controlled experiments to test the effectiveness of a channel or a technique, but they're also commonly used in other fields, such as testing a new drug or validating a scientific hypothesis.

Experiments are often a useful tool, but there are two scenarios where they're your only practical option:

  • When you have no historical data on a specific channel or strategy, and

  • When you need to generate very precise and certain answers.

While experimentation has become a standard practice in product and growth teams, the same can't be said about marketing experiments. We think this is a missed opportunity. There are clear and strong reasons for every marketing team to invest in experimentation.

In the next few sections, we'll show you how.

How do you identify experiment hypotheses?

The starting point for a successful marketing experiment is to have a clearly defined hypothesis. A hypothesis is an assertion based on research and data and should be specific, measurable, and testable to yield meaningful results.

Developing hypotheses starts with understanding the problem you're trying to solve or the opportunity you're trying to capture. You can use data, user research, and customer feedback to identify these. Insights from Marketing Mix Modeling are a great source of hypotheses that can be validated further with experimentation. From there, prioritize the ideas that show the most promise or have the most room to help you grow.

Generally, we see five common types of hypotheses in marketing experimentation:

  • Increasing or decreasing spend on a specific channel

  • Investing in a completely new channel

  • Testing a new campaign (creative, audience)

  • Testing a new bidding strategy

  • Testing a new price/offer

It's also important to consider the cost, time, and effort associated with running an experiment before devising your hypothesis. Experiments require careful planning and often involve collecting data beforehand to measure any changes accurately after implementation.

Finally, hypotheses must be easily measurable so you can draw conclusions from them when analyzing data from your experiment. This means identifying key performance indicators (KPIs) such as impressions, leads, pipeline, or sales that'll help you track progress over time and determine if your experiment was successful or not.
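To make "specific, measurable, and testable" concrete, here's a minimal sketch of one way to write a hypothesis down as structured data before spending anything. The field names and numbers are illustrative assumptions, not a prescribed template.

```python
# Illustrative only: one way to force a hypothesis to be specific, measurable,
# and testable before committing budget. Field names and values are made up.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    statement: str           # what you believe will happen, and why
    change: str              # the single element you'll vary
    kpi: str                 # the leading metric you'll judge it on
    expected_lift: float     # smallest relative effect worth acting on
    max_duration_weeks: int  # how long you're willing to run the test

h = Hypothesis(
    statement="Podcast ads will drive incremental sign-ups in treated regions",
    change="Launch podcast ads in treatment geos only",
    kpi="weekly sign-ups",
    expected_lift=0.05,      # 5% lift or more
    max_duration_weeks=8,
)
print(h)
```

Writing a hypothesis down this way forces the team to agree on the KPI and the smallest effect worth detecting before the test starts, which also feeds directly into the planning math discussed later.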

How do you execute marketing experiments?

There are three primary methods of experimentation available, each with its own set of advantages and drawbacks.

Platform A/B Testing

On-platform testing comes in two flavors: Conversion Lift from Google and Meta, and A/B testing on each marketing platform (e.g., LinkedIn, emails, etc.). In Conversion Lift, the ad platform automatically creates a control and a treatment group and serves your ad to the treatment group only. You then upload your conversion data and rely on the ad platform to compare impact for you. Platform A/B tests provide an easy way to test small variations in creative but come at the cost of handing over full control to the platform.

The primary advantage of both capabilities is their ease of implementation. The disadvantage is the lack of transparency into how the experiment is set up. And because each platform uses its own methodology, you have to learn a distinct approach for every platform you test on, which raises the risk of errors.
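Because you can't see inside each platform's methodology, it can be worth sanity-checking reported results with your own math when you have the raw counts. Here's a minimal two-proportion z-test sketch; the counts are made up, and this is not how any particular platform computes lift.

```python
# Minimal sanity check on an A/B result using a two-proportion z-test.
# The counts are made up; this is not any platform's own methodology.
from math import sqrt
from scipy.stats import norm

control_conv, control_n = 480, 52_000  # conversions, users in control
treat_conv, treat_n = 560, 51_500      # conversions, users in treatment

p_c, p_t = control_conv / control_n, treat_conv / treat_n
p_pool = (control_conv + treat_conv) / (control_n + treat_n)
se = sqrt(p_pool * (1 - p_pool) * (1 / control_n + 1 / treat_n))
z = (p_t - p_c) / se
p_value = 2 * (1 - norm.cdf(abs(z)))

print(f"relative lift: {(p_t - p_c) / p_c:+.1%}, z = {z:.2f}, p = {p_value:.3f}")
```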

Geo Testing

Geo-based testing helps marketers test their hypotheses in real-world environments by exposing a particular population to an experiment while keeping other populations as controls. We recommend using a synthetic control approach, which uses machine learning to identify and combine multiple regions to form your control and treatment groups. This approach gives you the most accurate results with the least uncertainty.

The biggest advantage of geo-based testing is that it generally works for any media channel. Another benefit is that it's ad-platform agnostic. The biggest disadvantage is that it requires you to build the necessary statistics skillset on your team and to collect geo-level data to set up the test.
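To show what that statistics work looks like in practice, here's a minimal synthetic-control sketch on made-up weekly geo data. It illustrates the general idea only; a real geo test needs far more care around geo selection, seasonality, and uncertainty estimates.

```python
# Minimal synthetic-control sketch on made-up weekly geo data (illustrative
# only, not a production geo-testing implementation).
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(7)
pre_weeks, post_weeks = 20, 8
n_weeks = pre_weeks + post_weeks

# Weekly conversions for 5 control geos and 1 treated geo (synthetic data).
controls = 100 + rng.normal(0, 5, size=(n_weeks, 5)).cumsum(axis=0)
treated = 0.4 * controls[:, 0] + 0.6 * controls[:, 3] + rng.normal(0, 2, n_weeks)
treated[pre_weeks:] += 15  # pretend the new channel adds ~15 conversions/week

# Fit non-negative weights so a blend of control geos tracks the treated geo
# during the pre-test period, then project that blend into the test window.
weights, _ = nnls(controls[:pre_weeks], treated[:pre_weeks])
synthetic = controls @ weights

# Incremental lift = actual minus the synthetic counterfactual in the test window.
lift = treated[pre_weeks:] - synthetic[pre_weeks:]
print(f"control weights: {np.round(weights, 2)}")
print(f"estimated incremental conversions per week: {lift.mean():.1f}")
```

The weighted blend of control geos acts as the counterfactual for the treated geo; the gap between actual and counterfactual during the test window is the estimated incrementality.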

Observational Studies

There are other observational approaches, such as cohort analysis or the simple act of turning media off and on again. While it may be tempting to use them, they're not an effective replacement for experiments because they lack a control group. They can tell you about correlation, not causation.

Just like product and growth teams at fast-growing businesses have invested in building or buying product experimentation platforms, we recommend that marketing teams make the same level of investment in marketing experimentation.

What are the common pitfalls of experimentation?

Experimentation is a powerful tool for marketers, but it's important to understand the potential pitfalls that come with it.

The most common mistakes made when running marketing experiments include:

  • focusing on short-term gains rather than long-term value,

  • failing to set up proper experimental controls,

  • using incorrect metrics, and

  • not properly analyzing the data to draw actionable conclusions.


To get the most out of your marketing experiments, it's essential to look at both long-term and short-term gains. You may see results in the first few days that don't last, or don't translate into meaningful gains. Instead of aiming for quick wins, experimenters should take a holistic approach that considers larger trends and patterns. This means running experiments over extended periods and collecting enough data to make reasonable predictions about future behavior or outcomes.
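As a rough illustration of what "enough data" means, here's a standard two-proportion sample-size calculation. The baseline rate, target lift, and traffic figures are assumptions you'd replace with your own numbers.

```python
# Rough sample-size sketch for a conversion-rate experiment (illustrative).
# Baseline rate, target lift, alpha, power, and traffic are assumptions.
from math import ceil, sqrt
from scipy.stats import norm

baseline = 0.02            # assumed baseline conversion rate
lift = 0.10                # smallest relative lift worth detecting (10%)
alpha, power = 0.05, 0.80

p1, p2 = baseline, baseline * (1 + lift)
z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
n_per_group = ((z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))) / (p1 - p2) ** 2

weekly_traffic_per_group = 25_000  # assumed traffic per group per week
print(f"~{ceil(n_per_group):,} users per group, "
      f"about {ceil(n_per_group / weekly_traffic_per_group)} weeks at this traffic")
```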

Furthermore, experiments must have robust and trustworthy controls to ensure proper isolation and to avoid measuring effects caused by something other than your treatment. This is especially hard in marketing experiments because you're not always in control of the test and control splits.

Another consideration is picking the right metrics. Marketers need metrics that are measurable, can easily be traced back to the control and test groups, and are sensitive and timely. Metrics that can only be computed months or years after the treatment (e.g., renewal rate) aren't ideal candidates for an experiment setup. Instead, we recommend picking leading metrics that have causal relationships with those eventual business outcomes (e.g., usage).

Finally, validating and verifying the results of an experiment is key to ensuring accuracy and avoiding costly mistakes such as false positives or wrong assumptions. Options include re-running experiments at some frequency, or conducting A/A tests, where both groups receive the same treatment. A poorly conducted experiment is likely worse than no experiment at all because it provides confidence and certainty where none exists.
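One cheap validation is to simulate (or actually run) A/A comparisons and confirm that your analysis flags roughly the expected share of false positives. Here's a minimal simulated sketch; the metric, sample sizes, and distributions are assumptions.

```python
# Minimal A/A simulation: both groups get the same treatment, so roughly
# alpha (5%) of runs should come out "significant". Numbers are assumptions.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
alpha, runs = 0.05, 2_000
false_positives = 0

for _ in range(runs):
    a = rng.normal(100, 15, size=500)  # e.g., weekly conversions per unit
    b = rng.normal(100, 15, size=500)  # identical distribution: no real effect
    if ttest_ind(a, b).pvalue < alpha:
        false_positives += 1

print(f"false positive rate: {false_positives / runs:.1%} (expected ~{alpha:.0%})")
```

If the observed false positive rate is far from alpha, something in the analysis or the split is off and should be fixed before you trust any real test results.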

How do you get started?

Now that you know the importance of experiments, the sources and types of hypotheses, the various methods of conducting them, and the mistakes to avoid, it's time to start experimenting.

Just as we outlined in our post on Marketing Mix Modeling, it's wise to first align on the desired outcome metric. Then formulate your hypotheses and get familiar with the available methods.

We encourage you to start small and build your confidence gradually.

Paramark works with teams with multi-million-dollar budgets on experimentation and is well placed to guide you through this complexity. If you need help, please reach out.