World Consumer Rights Day: the race is on to protect consumer rights in the age of AI
March 15th is World Consumer Rights Day (WCRD), marking the day in 1962 when US President John F Kennedy first set out consumer rights – to safety, to be informed, to choose and to be heard – in a formal, legislative setting.
62 years on, consumer rights have been adopted by governments across the world and expanded to cover education, sustainability, redress, access and digital issues.
WCRD 2024: Fair and Responsible AI
This year the theme for WCRD is ‘Fair and Responsible AI’, coming in the same week that the EU passed the first horizontal AI legislation. The AI Act outright bans some systems that pose an unacceptable risk (with some exceptions for law enforcement and national security) and requires that high-risk AI systems undergo risk assessments, have their deployment documented and keep humans in an oversight role. Consumers also have the right to an explanation of how an AI system makes decisions about them.
But as with any consumer rights or protections on paper, these rights are worth little without enforcement to back them up. Enforcement of consumer protection law has long been lacking, and the challenge has only grown with digital markets and the mass uptake of connected consumer technology.
Consumers already face excessive data collection, fake reviews, dark patterns, harmful content and sales of dangerous goods. The mainstreaming of AI systems in consumer finance, health, retail, search and public services could see these multiply in more complex and unpredictable ways. The race is on to find ways to enforce consumer protection not just for existing harms in digital markets but for those driven by AI.
Is enforcement of consumer protection up to the AI task?
Consumer protection authorities can make up ground in the race by looking to the potential of technology to support enforcement activities. The EnfTech project is studying the promise and potential pitfalls of these innovations. AI is often referenced not just in terms of the harms and risks it poses but its potential to monitor, detect and make decisions on how to deal with those harms.
In the hands of consumer protection authorities, AI has much to offer. In our report ‘The Transformative Potential of EnfTech in Consumer Law’, published in January 2024, we found authorities that are already making use of technology to safeguard consumers and keep markets in check. Several front-runners were testing out AI-enabled solutions for consumer protection, including:
An AI-enabled detector of fake countdown timers, used to spot misleading pressure-selling practices (ACM, Netherlands)
An AI-powered assistant that detects abusive clauses in consumer contracts (UOKiK, Poland)
A consumer complaints data pipeline in which free-text complaints are cleaned and summarised using machine-learning techniques (CMA, UK)
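The report does not publish the internals of these tools, but the last item – cleaning free-text complaints and surfacing recurring themes – can be illustrated with a deliberately minimal sketch. Everything below (the stop-word list, the sample complaints, the function names) is hypothetical and for illustration only, not drawn from any authority's actual pipeline:

```python
import re
from collections import Counter

# Hypothetical stop-word list -- illustrative only.
STOP_WORDS = {"the", "a", "i", "my", "and", "to", "was", "it", "of"}

def clean(complaint: str) -> list[str]:
    """Lowercase, strip punctuation, and drop stop words."""
    tokens = re.findall(r"[a-z']+", complaint.lower())
    return [t for t in tokens if t not in STOP_WORDS]

def summarise(complaints: list[str], top_n: int = 3) -> list[tuple[str, int]]:
    """Count the most frequent substantive terms across all complaints."""
    counts = Counter()
    for c in complaints:
        counts.update(clean(c))
    return counts.most_common(top_n)

complaints = [
    "The refund never arrived and support ignored my emails.",
    "I was charged twice and the refund was refused.",
    "Support promised a refund but nothing arrived.",
]
print(summarise(complaints))  # 'refund' surfaces as the dominant theme
```

A real pipeline would of course use far richer techniques (language models for summarisation, clustering, deduplication), but even this toy version shows the shape of the task: turning thousands of unstructured complaints into a ranked view of what consumers are actually reporting.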
AI for EnfTech: some problems to think through
AI adoption for enforcement tasks is growing among authorities that are further along in their use of technology, but it still lags behind that of other supervisors and regulators – most notably in the financial sector.
This is in part explained by the data available to other agencies compared with consumer agencies. Financial institutions, for example, are mandated to provide a flow of structured data in common formats to their regulator, giving its data science teams a rich dataset to work with.
Using AI well is difficult, and at this early stage there will inevitably be a lot of experimentation. The report also looked at other problems that rolling out AI in enforcement throws up, including:
The Low-hanging fruit problem: linked to the data-availability challenge, there is a risk that enforcers choose the issues they monitor and enforce against based on the availability of data and the ease of analysis, rather than on broader criteria such as the harm caused or the consumer segment affected.
The Opacity problem: the essence of machine learning is that a system teaches itself to spot patterns and make decisions, and these are not always easy to understand or explain, which makes identifying where a harm or breach of law has occurred difficult. Conversely, an authority using an AI-enabled enforcement system will be open to challenge from companies, and it needs to be able to explain how its system came to particular conclusions.
The Hype problem: AI is big news, and authorities are under pressure to modernise their enforcement – but are they reaching for this tool for the right reasons, when a different approach might be more appropriate?
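One response to the opacity problem is to prefer deliberately transparent models where possible, so that every conclusion carries a trace of the signals that produced it. As a toy illustration (every feature name and weight below is hypothetical, not any authority's actual system), a rule-weighted scorer can explain itself feature by feature:

```python
# Illustrative only: a transparent scorer for pressure-selling signals.
# Features and weights are hypothetical, invented for this sketch.
WEIGHTS = {
    "countdown_timer_resets": 0.5,   # timer restarts on page reload
    "stock_claim_unverified": 0.3,   # "only 2 left!" with no stock data
    "price_struck_through": 0.2,     # reference price never actually charged
}

def score_listing(signals: dict[str, bool]) -> tuple[float, list[str]]:
    """Return a risk score plus a human-readable contribution trace."""
    total, trace = 0.0, []
    for feature, weight in WEIGHTS.items():
        if signals.get(feature):
            total += weight
            trace.append(f"{feature} contributed +{weight}")
    return total, trace

score, explanation = score_listing(
    {"countdown_timer_resets": True, "stock_claim_unverified": True}
)
print(score, explanation)
```

An opaque machine-learning model may detect more, but a scorer like this can answer a company's challenge line by line – the trade-off between accuracy and explainability is exactly the tension the opacity problem describes.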
Rolling out any EnfTech solution, and especially those built on AI, requires working through this series of problems, which are covered in more detail in our report and in an upcoming chapter, ‘The use of AI in the EnfTech toolbox: is AI a friend or a foe?’, in AI and Consumers (Cambridge University Press, forthcoming).
Technology keeps on moving, and so must enforcement
The theme of WCRD this year is a useful reminder that technology keeps on moving, bringing new risks but also opportunities. For consumers to be well served by AI, safeguards need to be in place, and enforcement authorities need to speed up how quickly they learn about the technology and embrace it to provide better protection.
If you want to know more about using AI in consumer protection enforcement or have any examples from your work you wish to share, please get in touch at info@enftech.org