Continuous tracking – to be or not to be?

Monday, October 25, 2004 @ 17:10 UAE local time (GMT+4)

Since this debate continues, I thought it relevant to look at why continuous tracking is so often recommended, and where dipstick research may still have a role to play.

Continuous research can be used in many areas, for example:

- Advertising Tracking
- Brand Health Tracking
- Customer Satisfaction
- Media Monitoring

It is easy to see why dipstick research can be misleading in any of the above areas. Take a request to do a pre/post ad effectiveness study. The idea is that if we measure ad and brand awareness before an ad is launched, and again after the ad is off air/press, we will get a measure of the impact of the advertising.

If a new ad campaign launched in, say, October, we might have done a pre study in September and a post study in November. Let us see the kind of interpretation that can be made:

If ad awareness was 20% in September (pre launch) and 30% in November (post), the assumption is that ad awareness has gone up, and that the new ad has therefore done well.

Now let us assume that we had access to continuous tracking from the month of August through to November. Any one of the scenarios below may actually have taken place:

Possible scenarios

This shows the value of continuous tracking in reaching a correct interpretation, and it also raises issues around the timing of the pre/post study. Doing fieldwork a week, or a few weeks, after the ad campaign has ended may give different results, since there is a wear-out effect to any advertising. Continuous tracking, on the other hand, can actually capture that ad “wear out”.
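To make the argument concrete, here is a minimal sketch in Python using purely hypothetical weekly awareness figures (invented for illustration, not taken from the scenarios above). Two quite different underlying curves both produce the same 20% pre / 30% post dipstick readings; only the weekly series reveals the spike and wear-out in the second one.

```python
# Illustrative only: hypothetical weekly ad-awareness figures (%), roughly Aug-Nov.
# Scenario A: awareness was already climbing slowly before the campaign launched.
scenario_a = [18, 20, 22, 24, 25, 26, 27, 27, 28, 28, 29, 29, 29, 30, 30, 30]
# Scenario B: a big spike during the campaign, followed by wear-out.
scenario_b = [20, 20, 20, 20, 21, 35, 42, 45, 44, 40, 37, 34, 32, 31, 30, 30]

def dipstick(series, pre_week, post_week):
    """Return only the two point-in-time readings a pre/post study would see."""
    return series[pre_week], series[post_week]

for name, series in (("A", scenario_a), ("B", scenario_b)):
    pre, post = dipstick(series, pre_week=1, post_week=15)
    print(f"Scenario {name}: dipstick sees {pre}% -> {post}%, "
          f"but the weekly peak was {max(series)}%")
```

Both scenarios print the same pre/post pair, yet the continuous series tells two very different stories about what the campaign actually did.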

Continuous tracking also takes into account that competitive activity (media, promotions, re-launches) may happen at different times of the year. For some categories the weather, for example, affects consumption – and hence brand usage in particular months will increase or decrease because category consumption itself goes up or down.

Tracking media consumption/channels/programs is another area where periodic or dipstick measurement does not work. We know that media consumption changes depending on events and seasons (vacation months and Ramadan have different patterns), and on programming content, which may vary month to month.

The same principle holds true for other types of research – customer satisfaction for example – where satisfaction scores can change month to month, and doing a dipstick or point-in-time study may give misleading results.

In fact, continuous tracking should be long term to fully realize the benefits: watching client and competitor activity to understand what worked and what did not, looking at the relationships between marketing and ad spend on one side and brand usage and preference on the other, and so on. In other words, continuous tracking offers one major advantage: learning, and drawing conclusions about cause and effect.

So what are the advantages of dipstick surveys? Mainly cost, since you can work with lower sample sizes: say 300 pre and 300 post, a total of 600 interviews. But this is not such a big advantage compared with the value you get from even short-term continuous research: 50 interviews each week for 16 weeks, a total of 800 interviews.
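A quick back-of-the-envelope comparison of the two sample sizes quoted above (a sketch using only the numbers in this article):

```python
# Sample-size comparison using the figures quoted in the text above.
dipstick_total = 300 + 300    # 300 interviews pre + 300 interviews post
continuous_total = 50 * 16    # 50 interviews per week for 16 weeks

print(f"Dipstick pre/post:     {dipstick_total} interviews")   # 600
print(f"Continuous (16 weeks): {continuous_total} interviews") # 800
```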

That is not such a big price to pay to avoid the pitfalls of misinterpretation described earlier. In some instances, though, it will make sense to conduct dipstick research: say a comprehensive customer retention study, or a typical usage & attitude study, where the purpose of the study is to take a snapshot of the market environment at one particular point in time.
