Steven Searle, Data Operations Manager at Mustard, on how to maintain data integrity in the age of the Panel.
Without data, market research would be an impossible task. Or at least quant would be.
Difference-making insight plays a critical role in inspiring and driving growth – the number one aim for many organisations. Whether the goal is improving customer satisfaction and experience, exploring brand perceptions, understanding the effects of macro-factors on consumer behaviour, or testing a new concept, many of the decisions taken will be based on quantitative data. Data collected through market research is therefore essential: it gives clients a better understanding of their business relative to the wider market, and of the strategies being implemented to drive future success.
Without data, we wouldn’t know whether customers are being “satisfied”, or whether they would recommend. We wouldn’t know what people think of our brands or our concepts. We wouldn’t know whether strategies or tactics are working as intended.
Mustard’s clients regularly make some pretty big and important decisions based on the recommendations that flow from our insights. So why would anyone consider working with data that is “sub-optimal”?
The role of panels in capturing quantitative data
Online research panels are companies that recruit, manage and maintain large groups of individuals who have voluntarily agreed to participate in survey studies or other research activities (usually for incentives). The individuals who participate in online surveys are typically pre-recruited, and are targeted / selected depending on the client request (e.g. a client might request we engage with female millennials that buy into a particular category with a given frequency / regularity).
When talking about “data quality” in respect of panels, we are effectively referring to the need – and the ability – to measure whether those filling out a survey are, in fact, who they say they are. Are people answering fully and truthfully? Or are they “catfishing” to take advantage of the incentives and rewards on offer? Ensuring high-quality panel data is, of course, critical: the better the quality of the data, the more accurate the survey results are, and the better the outcome for the client’s research.
The good, the bad & the ugly: pros and cons of panel data
Online panels have revolutionised quantitative research by delivering against three of the most pressing requirements: speed, volume and (low) cost. That is definitely good. Interviewer-led research is labour-intensive – the cost and timeframes to deliver equivalent sample sizes can often be more than double.
Panel data is effectively data at your fingertips – requiring minimal(ish) effort with maximum results. Yet, despite panel data having its benefits in terms of ease, speed and cost effectiveness – this doesn’t mean it exists without its perils.
There are historical threats we have needed to manage for some time, such as:
- Speeding (participants rushing to get through the survey as quickly as possible)
- Flat-lining (similar to above – participants clicking through grid questions)
- Nonsensical responses (participants unwilling to put effort into open-text responses)
- Catfishing (participants pretending to be someone they aren’t, i.e. second-guessing the qualification criteria)
All of these threats are clearly detrimental to data quality, as participants aren’t taking the time to answer the surveys honestly and with consideration.
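One of these threats – flat-lining through grid questions – lends itself to a simple automated check. The sketch below is a hypothetical illustration in pandas, not Mustard’s actual tooling; the column names, question labels and the “every answer identical” rule are all illustrative assumptions.

```python
# Hypothetical sketch: flagging "flat-liners" who give the same answer
# to every item in a grid question. Column names are illustrative.
import pandas as pd

def flag_flatliners(responses: pd.DataFrame, grid_cols: list) -> pd.Series:
    """Return True for respondents whose answers across all grid items
    are identical (a common straight-lining signal)."""
    grid = responses[grid_cols]
    return grid.nunique(axis=1) == 1

# Toy data: respondent 102 picks "3" for every item in the grid.
df = pd.DataFrame({
    "respondent_id": [101, 102, 103],
    "q5_a": [4, 3, 2],
    "q5_b": [2, 3, 5],
    "q5_c": [5, 3, 1],
})
flat = flag_flatliners(df, ["q5_a", "q5_b", "q5_c"])
print(df.loc[flat, "respondent_id"].tolist())  # [102]
```

In practice a softer rule (e.g. flagging respondents who flat-line on several grids, not just one) avoids penalising people who genuinely hold uniform views on a single question.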
It is not just individual participants that pose a threat. We sometimes find duplicate responses generated by “bot farms” – automated tools that fill in surveys at scale.
What are the consequences?
Left unchecked, the consequences of poor data quality can damage the integrity of the whole research programme – and could feasibly cause damage to clients’ businesses if they are taking decisions based on compromised data-sets.
If spotted – but too late in the day – significant (and costly) remedial action is likely to be required, which can impact deadlines, client relationships and reputation.
Having the last byte: how to minimise risk and improve data quality
Thankfully, there is already a range of actions and processes that both the Mustard research team and our trusted partner network put in place to prevent poor data quality.
We work from an approved supplier list with which we have service level agreements, and these suppliers are chosen because we know how they combat survey fraudsters at source. And while technology is to some extent our enemy here, tech solutions can also be extremely useful: we make use of digital fingerprinting software, multi-factor authentication, reCAPTCHA, facial recognition, gibberish detection and more.
We also build our surveys with data quality in mind, using a variety of measures to prevent poor-quality responses. For example, building honeypot questions into surveys as standard has proven beneficial, alongside red herring questions (essentially a trap for those not paying attention). At the point of analysis, we check average survey durations and remove or replace the fastest 5-10% as standard. We conduct duplicate-response checking and automated geo-location removals, include attention-check questions, and perform general sense checks within the team.
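Two of the analysis-stage steps above – trimming the fastest completes and removing duplicate responses – can be sketched in a few lines. This is a hypothetical pandas illustration only: the field names (`duration_seconds`, `device_fingerprint`), the 20% cut-off in the toy example and the fingerprint-based dedup rule are assumptions for demonstration, not a description of Mustard’s production pipeline.

```python
# Hypothetical sketch of two routine cleaning steps: dropping the
# fastest X% of completes, then keeping one response per device.
import pandas as pd

def remove_speeders(df: pd.DataFrame, fastest_fraction: float = 0.05) -> pd.DataFrame:
    """Drop respondents whose completion time falls in the fastest X%."""
    cutoff = df["duration_seconds"].quantile(fastest_fraction)
    return df[df["duration_seconds"] > cutoff]

def remove_duplicates(df: pd.DataFrame) -> pd.DataFrame:
    """Keep only the first response per device fingerprint."""
    return df.drop_duplicates(subset="device_fingerprint", keep="first")

completes = pd.DataFrame({
    "respondent_id": [1, 2, 3, 4, 5],
    "duration_seconds": [30, 410, 395, 520, 480],   # respondent 1 is suspiciously fast
    "device_fingerprint": ["a", "b", "b", "c", "d"],  # 2 and 3 share a device
})
clean = remove_duplicates(remove_speeders(completes, fastest_fraction=0.2))
print(clean["respondent_id"].tolist())  # [2, 4, 5]
```

The order matters: trimming speeders first means a duplicate pair where one copy was also a speeder is resolved in one pass rather than two.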
How is Mustard making the difference to improving data quality?
Ultimately, we can make the difference by acknowledging there are issues, and being on the front-foot when it comes to addressing them.
It’s vital that we carry out our checks and balances as we recognise that there is a vulnerability in online panel data. Instead of ignoring it, we can deal with it appropriately and provide the best quality data (and best service) for clients.
At Mustard, we see it as our job to stay ahead of the market in ensuring that we have the highest quality of data possible. We know that the risks are increasing because of the speed at which artificial intelligence is moving, and the readiness with which fraudsters use digital spaces for a fast buck. So we keep doubling down to ensure the results we get from online surveys are robust. We’re experienced enough in the market research industry to know that some companies aren’t being proactive and addressing this issue with enough seriousness. Online research is the present and the future, and we want to make sure that the future of our industry is not compromised by poor-quality panel data.