3 Key Takeaways from SampleCon 2024

Missed SampleCon? Here are 3 things you need to know from the conference.

You could be forgiven for thinking that a conference held near a lake in Georgia might focus on BBQ or enjoying the balmier weather. And while attendees at SampleCon last week did enjoy beautiful surroundings, pickleball tournaments, and delicious coffee (thank you, Cint!), the hottest topic of conversation was data quality.

It was Shifra and Ben’s first time attending – here are their 3 key takeaways from the conference:


1. A thoughtful approach to AI & synthetic data

While many industries have jumped on the AI bandwagon, SampleCon attendees were united in taking a more reasoned approach. The consensus was that synthetic data may soon be relied upon to complete survey quotas, but with an important caveat: data quality is already a known issue, and with AI the principle of garbage in, garbage out applies. The prospect of training LLMs and AI models on poor-quality data is a very real cause for concern.

With no significant increase in budgets on the horizon, the thinking was that surveys will ultimately rely on fewer ‘real’ participants, with increased verification and a higher CPI (cost per interview). Those participants generate higher-quality data, and that data will ‘feed’ the models. Even with a higher CPI, the cost efficiencies of synthetic data mean the overall cost is likely to net out at roughly the same level.
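The ‘nets out at roughly the same cost’ logic is easier to see with numbers. Below is a purely illustrative sketch – the CPI figures and the real-to-synthetic split are hypothetical, not numbers shared at the conference – showing how fewer, better-paid real completes blended with cheap synthetic completes can land close to today’s blended cost per complete.

```python
# Purely illustrative blended-cost sketch: every figure below is hypothetical,
# not data shared at SampleCon.

def blended_cpi(real_completes: int, real_cpi: float,
                synthetic_completes: int, synthetic_cpi: float) -> float:
    """Average cost per complete across real and synthetic sample."""
    total_cost = real_completes * real_cpi + synthetic_completes * synthetic_cpi
    total_completes = real_completes + synthetic_completes
    return total_cost / total_completes

# Today: 1,000 real completes at a hypothetical $5 CPI.
today = blended_cpi(1000, 5.00, 0, 0.0)

# Tomorrow: fewer, better-verified real completes at a higher CPI,
# with cheap synthetic completes filling the rest of the quota.
tomorrow = blended_cpi(400, 10.00, 600, 0.50)

print(f"Blended CPI today:    ${today:.2f}")     # $5.00
print(f"Blended CPI tomorrow: ${tomorrow:.2f}")  # $4.30
```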


2. Acknowledgement that the participant experience must improve

It was heartening to hear that there’s a general consensus that the participant experience must improve – at Ayda, it’s something we’re passionate about. Particularly interesting discussions were held around:

The value of a central clearing house for participant data
In some cases, participants are sent from pillar to post, and it’s not uncommon for them to be asked to provide the same demographic data multiple times, all for a single survey. It’s a terrible experience, and one that would be significantly improved by a centrally verified repository.

Designing surveys to engage participants
With attention spans notoriously limited, as an industry we need to move away from designing 20-30 minute surveys and embrace more innovative approaches.

Shifra’s co-panellist on The Good, The Bad, and the Ugly of Data Insights, Kayte Hamilton, shared some particularly innovative approaches. It was fascinating to hear about her experience of using Instagram – its inbuilt functionality designed to drive engagement, and the influencers already on the platform – to create and engage audiences and generate insights.

Improving participant remuneration
In the race to the bottom to secure contracts, participant remuneration has suffered. Completing many surveys earns participants less than the minimum wage – a poor reward for the effort required.

For those running businesses, the question is: how do we raise our prices again so that we can deliver the quality we want?

Treating participants like community members
There are a number of companies with sizeable panels that engage with participants as if they were members of a community, taking a very human and often 1:1 approach to recruitment and retention.

This approach was echoed by Shifra’s co-panellist Dimitri Poliderakis of Potloc, who specialises in sampling hard-to-reach audiences. The audiences Potloc builds are highly verified and high quality, and treating them like a community is part of what engages and retains them.

If we treat our panels like communities, could we reach the data quality we want? And could qual at scale actually be the solution to our survey data quality problems?

The community approach is in stark contrast to the river sample or programmatic approach that dominates surveys, where participants are largely treated as disposable. While this is a huge source of volume for the sector, it’s also a huge headache. In the face of such dominance, is it actually within our control to effect change?

Questioning the industry’s role in improving the participant experience
As an industry, we’re incentivised to solve these problems, and in a Global Data Quality roundtable that Shifra ran with Bob Birdsell and Sandy Casey, we discussed what role we can play.

While a top-down, regulatory approach might work, it doesn’t sit well with the industry: we’re less about dictating and regulating, and more about supporting each other to achieve great things. Instead, there’s more of an appetite to explore opportunities to train researchers on emerging best practices in the participant experience.


3. An appetite for collaboration on data quality technologies

While most of the attendees represented research companies, a number of research technology vendors like Ayda were also in attendance.

There are multiple ways that vendors can contribute to solving data quality problems through the application of technology. There is no silver bullet – so rather than approaching it from a competitive angle, we’re asking how we can come together to solve these problems collaboratively for the industry.


What next?

For Ayda, the first thing is to say a massive thank you to Rachel Alltmont, who organised and curated such a superb conference and invited Shifra to participate as a panellist and roundtable facilitator.

We’ll be bringing the insights we gathered, and the relationships we’ve begun, back into our day-to-day at Ayda. We’re excited to continue to pursue data quality by championing the participant experience, and through collaboration with other technology companies. Watch this space!
