Allison Nance Product & UX Designer
Design vs. Us

Why Your Grocery App Total Might Be Higher Than Someone Else’s

January 7, 2026 Allison

Most of us understand “discount logic.” You clip a coupon. You join a loyalty program. You buy in bulk. You shop a weekly sale. There’s a clear reason you paid less (or more) than someone else. It generally feels like a fair trade: you did something different, so the price changed.

Online grocery shopping can break that expectation. The “reason” isn’t always visible. You didn’t apply a coupon or change anything obvious. You just placed an order like you always do, yet the price you see may not be the same price someone else sees for the same item at the same store.

Consumer Reports investigated Instacart’s use of AI-driven pricing in online grocery orders. Below, I’ll break down what algorithmic pricing is and what the investigation found. I’ll also talk about how it differs from traditional A/B testing, and why it matters.

What “algorithmic pricing” means

Algorithmic pricing is when software tests and adjusts prices to learn how far a price can rise before people stop buying. This often results in different shoppers seeing different prices for the same item. 
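To make that concrete, here is a minimal sketch of the idea. This is a hypothetical model, not Instacart's actual system: the sensitivity score and the 25% markup cap are my illustrative assumptions.

```python
def personalized_price(base_price: float, sensitivity: float) -> float:
    """Return a price tuned to an estimated 'price sensitivity' score.

    sensitivity: 0.0 = very price-sensitive, 1.0 = barely price-sensitive.
    The 25% markup cap is an illustrative assumption.
    """
    max_markup = 0.25
    return round(base_price * (1 + max_markup * sensitivity), 2)

# Two shoppers, same item, same store, same moment:
print(personalized_price(4.00, 0.1))  # price-sensitive shopper -> 4.1
print(personalized_price(4.00, 0.9))  # brand-loyal shopper -> 4.9
```

The point of the sketch is the asymmetry: nothing about the item changed, only the system's guess about each shopper.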

In the Instacart investigation, researchers from Groundwork Collaborative, Consumer Reports, and More Perfect Union ran live tests with 437 shoppers, asking them to build identical carts at the same time and capture screenshots of the prices they were shown.

The example below, adapted from Consumer Reports, illustrates how algorithmic pricing works.

1. Say five consumers are shopping for bags of Spuddies brand potato chips, each with a different “price sensitivity” level.

2. Shopper 1, who thinks Spuddies are by far the best chips and is intensely brand loyal, would pay up to $7.

3. Shopper 5 thinks they’re fine but no better than another brand that costs $3. The others fall somewhere in between.

4. If a retailer sets the price of Spuddies at, say, $4 across the board, Shopper 5 walks away and the other four buy, so it sells four bags for a total of $16.

5. That leaves a lot of potential revenue on the table—what economists call “consumer surplus.”

6. But if the retailer gets good at using data and algorithmic pricing to estimate how much each shopper is willing to pay, it might sell five bags for about $23.
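The arithmetic above can be checked directly. The willingness-to-pay values for Shoppers 2 through 4 are my assumptions (the source only says they fall "somewhere in between"):

```python
# Assumed willingness to pay for Shoppers 1-5 (only $7 and $3 are from
# the Consumer Reports example; the middle values are illustrative).
willingness_to_pay = [7.00, 6.00, 5.00, 4.00, 3.00]

# Uniform pricing at $4: only shoppers willing to pay at least $4 buy.
uniform_price = 4.00
uniform_revenue = sum(uniform_price for wtp in willingness_to_pay
                      if wtp >= uniform_price)
print(uniform_revenue)  # 16.0 -- Shopper 5 walks away

# Perfectly personalized pricing: charge each shopper their limit.
personalized_revenue = sum(willingness_to_pay)
print(personalized_revenue)  # 25.0
```

With these assumed numbers, perfect personalization would capture $25; the report's "about $23" presumably reflects estimates that land a bit below each shopper's true limit.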

What the investigation found

Across the tests, the report found:

  • 74% of items were offered at multiple price points, sometimes with as many as five different prices for the exact same item at the same store at the same time. 
  • The average total for identical carts varied by about 7%, and individual items were priced as much as 23% higher for some shoppers than for others. 
  • Based on the observed average basket variation and Instacart’s own estimate of grocery spend, the report says that could translate to a swing of about $1,200 per year for a household of four.
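As a back-of-envelope check on those figures (my arithmetic, not the report's), a ~7% swing producing $1,200 per year implies an annual grocery spend in the ballpark of $17,000 for that household:

```python
# Back-of-envelope check of the report's numbers (my arithmetic,
# not taken from the report itself).
basket_variation = 0.07  # ~7% average variation on identical carts
annual_swing = 1200      # dollars per year, household of four

implied_annual_spend = annual_swing / basket_variation
print(round(implied_annual_spend))       # ~17143 dollars per year
print(round(implied_annual_spend / 52))  # ~330 dollars per week
```

Roughly $330 a week for a family of four is a plausible grocery budget, which is what makes the $1,200 figure feel credible rather than alarmist.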

Why this is happening 

Instacart acquired the AI company Eversight in 2022 and began offering pricing software to retailers in 2023.

In a 2023 shareholder letter, Instacart described its “smart rounding” offering as a “machine learning-driven tool that helps retailers improve price perception and drive incremental sales,” noting that “thoughtfully pricing” items by sensitivity has led to millions of dollars in annual incremental sales for some partners.
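Instacart's actual rounding rules aren't public, but the textbook version of "price perception" rounding looks something like this hypothetical sketch, where prices are snapped to endings shoppers are believed to read favorably:

```python
import math

def smart_round(price: float) -> float:
    """Snap a price to the nearest 'psychologically attractive' ending.

    Hypothetical sketch: the candidate endings are the classic examples
    from pricing literature, not Instacart's actual rules.
    """
    dollars = math.floor(price)
    cents = price - dollars
    endings = [0.49, 0.79, 0.99]
    best = min(endings, key=lambda e: abs(e - cents))
    return round(dollars + best, 2)

print(smart_round(4.37))  # 4.49
print(smart_round(4.92))  # 4.99
```

Note that "rounding" here can move a price up as easily as down; the goal is perception, not accuracy.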

A/B testing vs. algorithmic pricing 

In product work, we A/B test things like layouts, copy, offers, flows. Even pricing can be A/B tested. Not all of that is inherently unethical.
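For contrast, the classic A/B setup assigns shoppers to a small number of fixed cohorts, and everyone in a cohort sees the same price. A common sketch (the function and bucket count here are illustrative, not any particular platform's implementation):

```python
import hashlib

def ab_bucket(user_id: str, experiment: str, n_buckets: int = 2) -> int:
    """Deterministically assign a user to one of n_buckets cohorts."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % n_buckets

# Two price variants shared across cohorts -- not one price per person.
prices = {0: 3.99, 1: 4.29}
print(prices[ab_bucket("shopper-123", "chip-price-test")])
```

The key property is that there are only a handful of variants, so any shopper's experience is shared by a large group. Individualized pricing collapses that group down to one.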

But there’s a moral difference between testing prices on discretionary products and running ultra-targeted, individualized price experiments on groceries and household essentials. 

A/B price testing on non-essentials tends to look like:

  • Optional purchase: you can walk away without real harm (e.g., a subscription add-on, apparel, entertainment, gadgets).
  • More consumer power: you can delay the purchase, compare across retailers, or substitute easily.
  • Clearer market signals: sales, couponing, and seasonal pricing are often more visible and widely understood.
  • Lower harm if “unfair”: if you feel manipulated, you might be annoyed, but you’re not put at risk of missing dinner.

Ultra-targeted algorithmic testing on groceries is different:

  • Necessity: people must buy food and household basics, even when prices creep up.
  • Opaque and individualized: you may not know you’re in a test, and you can’t reliably tell if someone else is being shown a different price.
  • Asymmetric information: the company knows your behavior and constraints; you don’t know the rules of the pricing game.
  • High risk of exploitation: when pricing is tuned to “price sensitivity,” it effectively means “charge the maximum this specific person will still pay,” which hits hardest for families budgeting tightly.
  • Personalized pricing can quietly penalize vulnerable situations: if an algorithm learns that you buy essentials urgently (like dinner ingredients on a weeknight) or that you don’t switch brands easily, it may decide it can push your price higher. The people with the least time and fewest alternatives end up charged the most.

What this could mean for the future

This investigation focused on online ordering, but it raises a broader question: as more pricing becomes algorithmically controlled, where else could this show up?

Retailers are starting to roll out digital shelf labels (electronic price tags). I’m not saying individualized, data-driven pricing has been confirmed in physical aisles. But the technology makes it more possible for prices to change quickly, be tested more easily, and become harder for shoppers to verify.

That’s why it’s important to act now. 

If you want accountability

Consumer Reports is urging the FTC and state attorneys general to investigate Instacart’s pricing practices. If you agree this should be examined, you can sign their petition here:

Sign Petition
