Measuring Experience

For decades, those of us in the experiential space have embraced the idea that our path to full legitimacy as a marketing discipline goes directly through a universal, standardized metric that can evaluate an experiential campaign’s effectiveness, regardless of the tactics employed.

And although the industry would warmly embrace such a metric, no one has been able to deliver the goods. This raises the question: why has a universal metric continued to elude us?

Perhaps the answer lies in the fact that we’ve simply been thinking about experiential marketing all wrong.

Perhaps, like those poor stargazers who so many centuries ago insisted that the earth was at the center of the solar system but couldn't quite make the math work, our thinking has been off the mark from the word go: we've been framing the question itself incorrectly.

We’ve all seen this kind of diagram, with each different marketing discipline getting a slice of the whole:



One consequence of this thinking and visualization, however, is that it implies an equivalence between the disciplines that just isn’t there. This, in turn, has led many to expect each of the disciplines to behave and be measurable in equivalent ways.

But with experiential marketing, it's not quite that easy. The tactics and channels it can employ are too many to count. And the discipline is regularly engaged at different stages of the sales cycle. Alex Smith suggests much the same point in his article “The Fatal Flaw in Attempts to Measure Brand Experience” when he says that “[experiential marketing can’t have] a single standard formula like those enjoyed by other channels [because] brand experience isn’t a channel, it’s a technique.”

Unlike the other disciplines, experiential marketing exercises its creative concepts through 1) any of a number of tactics and channels, in 2) a one-off(ish) infrastructure that’s built entirely or partly from scratch, 3) to entice participants to take specific action, and then 4) amplifies that action through a variety of more permanent channels, such as digital, social, and PR. With this in mind, experiential’s place on the wheel might be more accurately visualized like this:


Measurement as a practice requires a level of constancy, both in its units of measure and in the characteristics to be measured. (For example, feet are great for measuring the comparative heights of two things and terrible for measuring their comparative temperatures.)

And yet, constancy is precisely what’s missing from experiential. Points 1-3 above differ from campaign to campaign: different tactics are leveraged in different infrastructures, encouraging different audiences to take actions unique to that particular campaign. Only point 4 remains constant from campaign to campaign. Here, we again turn to Mr. Smith to succinctly express the point: “If our aim is to prove the effectiveness of experiential, a good approach would be to explore how it can have a multiplying effect on other channels.”

When we focus our measurement attention on the characteristics that campaigns consistently have in common and design our conversion metrics accordingly, we can confidently say that without campaign X, behavior Y would not have happened. And that’s exactly what universal, standardized metrics are all about.

Networking Naturally

The Psychology of Sharing Experiences