AI and Hidden Personalised Pricing: A Fairness Problem

AI may unobtrusively swap a single public price for multiple private deals on the very same product, new research suggests.

That conclusion reframes digital pricing as a fairness issue, because customers might never realise they have been targeted to pay more than someone else.

Hidden checkout maths

In modern online checkouts, it is already possible for identical items to show different prices to different people at exactly the same time.

Dr Miroslava Marinova at the University of East London (UEL) says platforms can take pricing beyond broad market indicators and steer it towards the maximum each individual is prepared to accept.

When those differences are concealed from the people who face them, one product effectively fractures into multiple unseen versions of the same offer.

This is the line the article draws before moving from covert prices to the question of how the law ought to respond.

AI estimates willingness to pay

At the heart of this hidden variation is algorithmic personalised pricing: prices set by software for a specific person, rather than a public price that shifts in response to the market as a whole.

Rather than simply tracking demand, the system tries to infer willingness to pay - the highest amount a shopper will tolerate before walking away.

Data such as clicks, location, purchase history, and even moments of hesitation can tighten that estimate around an individual instead of a segment.
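To make the mechanism concrete, here is a deliberately simplified sketch of how behavioural signals could nudge a price towards an individual's estimated willingness to pay. Every signal name, weight, and bound below is a hypothetical assumption for illustration, not a description of any real platform's system.

```python
# Toy personalised-pricing rule. All signals and weights are invented
# for illustration; real systems are far more complex and opaque.

BASE_PRICE = 50.0  # the notional public "shelf" price

def estimate_wtp(profile: dict) -> float:
    """Adjust a willingness-to-pay estimate up or down from the base price
    using behavioural signals (hypothetical weights)."""
    wtp = BASE_PRICE
    wtp += 5.0 * profile.get("repeat_purchases", 0)       # loyalty suggests tolerance
    wtp += 8.0 if profile.get("premium_device") else 0.0  # device as a wealth proxy
    wtp -= 6.0 * profile.get("hesitation_events", 0)      # hesitation signals price sensitivity
    return max(wtp, 0.0)

def personalised_price(profile: dict) -> float:
    """Price just below the estimated WTP, bounded to a plausible band
    around the base price so the variation stays unobtrusive."""
    candidate = 0.95 * estimate_wtp(profile)
    return round(min(max(candidate, 0.7 * BASE_PRICE), 1.5 * BASE_PRICE), 2)

# Two shoppers, the identical item, two different private prices:
eager = {"repeat_purchases": 3, "premium_device": True}
wary = {"hesitation_events": 2}
print(personalised_price(eager))  # above the base price
print(personalised_price(wary))   # below the base price
```

The point of the sketch is the asymmetry: neither shopper sees the other's price or the base price, so neither can tell whether they were targeted.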

That shift from market-based pricing to person-based pricing is what turns a familiar retail practice into a more complex legal challenge.

Personal pricing feels unfair

Studies using consumer experiments report that people judge individualised prices as less fair than segment-based prices, even when both are driven by data.

A key driver is social comparison: shoppers assess whether a price is acceptable by comparing it with what they believe someone else paid.

“When pricing becomes invisible and personalized, fairness becomes a central issue,” said Dr Marinova.

If customers begin to suspect a private surcharge, confidence can evaporate fast, and a system that may be efficient on paper can still appear stacked against them.

When dominance changes everything

Under Article 102 TFEU - the European Union’s prohibition on abuse of a dominant position - dominant companies are not allowed to impose unfair selling prices.

This is significant because the paper frames hidden personal pricing as an exploitative abuse: behaviour that uses market power to extract more from buyers.

Unlike a shop-wide promotion, the worry here is differential treatment without an obvious justification that consumers can verify.

The argument is strongest where competitive pressure is limited, which is why dominance is treated as central.

Old law, new code

The researchers’ case relies on established competition law, rather than waiting for a new statute tailored specifically to AI.

Because software can adjust prices instantly and without notice, regulators may find it difficult to detect patterns unless they can access system records.

“The next step is for regulators to move from theory to action,” Marinova said.

As these systems become less transparent, the focus moves away from generalised anxiety about AI and towards audits, meaningful explanations, and evidence of objective justifications.

Why Britain is watching

In Britain, the Competition Act already bans abuse of a dominant position, including the imposition of unfair selling prices.

That language can accommodate the same concerns Marinova raises under EU law, even though Brexit changed the institutional setting.

A 2026 government consultation also proposes expanding the ability of the Competition and Markets Authority (CMA) to investigate algorithms across both competition and consumer protection.

In practice, the pressing issue for Britain’s regulators may be stronger powers rather than new legal theory.

A lack of transparency

Price transparency erodes when each shopper is shown a slightly different deal and there is no longer a single public “shelf” price.

Without a shared benchmark, customers cannot know whether they secured a bargain or were selected to pay extra.

Search tools and comparison sites only work when sellers reveal prices in a comparable way - precisely what hidden personalisation is designed to bypass.

Under those conditions, competitive discipline weakens, particularly when one platform controls search, data, payment, and the final checkout.

Legitimate versus hidden gaps

Not all personalised pricing is inherently abusive, because firms often vary charges for genuine cost-based reasons or loyalty-related incentives.

Student discounts, clearance reductions, and location-linked delivery costs are typically grounded in reasons shoppers can recognise.

Where pricing is concealed and tuned to individuals, consumers have little leverage to challenge it.

At that stage, the practice can stop resembling smart retailing and start looking like private extraction from buyers.

What regulators can do now

Robust oversight begins with documentation showing what data influenced a price, when the software was modified, and the rationale for those changes.

Auditors must be able to examine inputs, override rules, and outcomes across groups - not only the system’s declared purpose.

Regulators may also need the authority to test live systems, not merely request documents. If they can recreate the path a price took, businesses will find it harder to conceal discrimination inside automated processes.
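The kind of record that would let an auditor recreate a price's path might look like the following minimal sketch. The field names and values are assumptions invented for illustration, not a standard or any regulator's actual schema.

```python
# Illustrative sketch of a per-decision pricing audit record, capturing
# what the article says oversight needs: the data that influenced a price,
# the software version that set it, and the stated justification.
# All field names and example values are hypothetical.

import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class PricingAuditRecord:
    product_id: str
    shopper_segment: str   # group-level label, enabling outcome comparisons across groups
    inputs_used: dict      # every signal that influenced this price
    model_version: str     # ties the price to a specific software revision
    base_price: float
    final_price: float
    justification: str     # the objective justification, recorded up front
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = PricingAuditRecord(
    product_id="SKU-1042",
    shopper_segment="returning-customer",
    inputs_used={"purchase_history_len": 12, "region": "UK"},
    model_version="pricing-model-v7",
    base_price=50.0,
    final_price=54.0,
    justification="delivery cost uplift for remote region",
)

# A serialised record an auditor could replay or compare across groups:
print(json.dumps(asdict(record), indent=2))
```

Records like this would support the audits the article describes: comparing `final_price` against `base_price` across segments, and checking whether the logged `justification` actually explains the gap.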

AI pricing is most concerning when it converts private data into private prices and then keeps the rationale hidden from the customer.

More explicit disclosure, stronger investigatory powers, and improved audit trails would not prohibit personalisation, but they would make unfair targeting simpler to demonstrate.
