ICE Scoring

Use it when you need a fast, data-light way to rank your ideas by potential value.

Category

Prioritization & Decision-Making

Originator

Sean Ellis

Time to implement

1 day

Difficulty

Beginner

Popular in

Growth

Engineering

What is it?

ICE Scoring is a straightforward prioritization framework that ranks product ideas, growth hacks, or feature requests based on three key dimensions: Impact (the potential benefit), Confidence (how sure you are about that impact), and Ease (the effort or resources required).

Born out of Sean Ellis's growth experiments, ICE helps you move past opinion-driven debates and zero in on projects that promise the highest return on your time and budget. By assigning a numerical score (typically 1–10) to each axis and multiplying them, you generate a single, comparable metric for every initiative in your backlog.

Whether you're a solo indie hacker juggling dozens of growth experiments or a PM coordinating cross-functional sprints, ICE Scoring cuts through the noise so you can invest in what actually moves the needle.

Why it matters

When you're strapped for time and resources, ICE Scoring ensures you're not building the wrong thing. It shifts decisions from gut feel to a repeatable, transparent process, so you consistently pick experiments that drive higher conversion, retention, and revenue without wasting cycles on long shots.

How it works

1

List Your Initiatives

Start with a clean list of all potential experiments, features, or marketing campaigns you're considering. Break big ideas into discrete, testable chunks so each can be scored fairly.

2

Score Impact

For each item, rate the expected benefit on a scale from 1 (minimal lift) to 10 (game-changing). Use historical data, user feedback, or intuition when data is sparse.

3

Score Confidence

Assess how confident you are in your impact estimate, again on a 1–10 scale. If you're flying blind, score lower; if A/B test data or research backs you up, score higher.

4

Score Ease

Estimate the effort, cost, and time required, then invert that onto a scale from 1 (hard) to 10 (easy). Include dev time, design, approvals, and launch complexity.

5

Calculate ICE Score

Multiply the three scores (Impact × Confidence × Ease) to get a single number. It highlights high-value, low-effort bets.

6

Rank and Act

Order your list by ICE score descending, tackle the top items first, and revisit scores as new data comes in.
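
The six steps above can be sketched in a few lines of Python. The initiative names and scores below are illustrative, not prescriptive:

```python
# Each initiative gets 1-10 scores for Impact, Confidence, and Ease.
# These entries are made-up examples for demonstration.
initiatives = {
    "Onboarding email drip": (7, 6, 8),
    "Pricing page redesign": (9, 4, 3),
    "Referral widget":       (6, 7, 9),
}

# ICE score = Impact x Confidence x Ease
scored = {name: i * c * e for name, (i, c, e) in initiatives.items()}

# Rank descending and tackle the top items first.
for name, score in sorted(scored.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score}")
```

Note how the multiplication rewards balance: the high-impact but hard, uncertain redesign (9 × 4 × 3 = 108) drops below the modest but easy referral widget (6 × 7 × 9 = 378).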

Frequently asked questions

What's the difference between ICE and RICE?

Both frameworks quantify priority, but RICE adds a Reach dimension, making it better when you know how many users each idea will touch. Choose ICE for speed, RICE for precision.
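
The structural difference is easy to see side by side. In RICE, Reach is typically users affected per period, Confidence a fraction, and Effort (person-months) divides rather than multiplies; the numbers below are illustrative:

```python
def ice(impact: int, confidence: int, ease: int) -> int:
    # ICE: all three dimensions on a 1-10 scale, multiplied together.
    return impact * confidence * ease

def rice(reach: float, impact: float, confidence: float, effort: float) -> float:
    # RICE: reach in users per period, confidence as a fraction (0-1),
    # effort in person-months; effort divides instead of multiplying.
    return reach * impact * confidence / effort

print(ice(7, 6, 8))           # 336
print(rice(2000, 7, 0.6, 3))  # 2800.0
```

Because RICE needs a reach estimate and an effort estimate in real units, it takes longer to score, which is why the answer above recommends ICE for speed and RICE for precision.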

How do I choose my scoring scale?

Stick to a simple 1–10 range to balance granularity and speed. If you need more rigor later, adjust to 1–5 or add custom weightings, but only after you've validated ICE end-to-end.
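
One way to add custom weightings later, sketched below, is to raise each dimension to an exponent. This weighted variant is an assumption for illustration, not part of Sean Ellis's original framework:

```python
def weighted_ice(impact, confidence, ease, w=(1.0, 1.0, 1.0)):
    # Hypothetical weighted variant: exponents skew the product toward
    # the dimension you trust most. w=(1, 1, 1) reduces to plain ICE.
    wi, wc, we = w
    return (impact ** wi) * (confidence ** wc) * (ease ** we)

plain = weighted_ice(7, 6, 8)                       # 336.0, same as plain ICE
impact_heavy = weighted_ice(7, 6, 8, w=(1.5, 1.0, 0.5))  # emphasizes Impact
```

Whatever scheme you pick, apply it uniformly across the backlog; scores are only comparable if every initiative is rated the same way.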

Can I use ICE for non-product work?

Absolutely. ICE works for marketing campaigns, sales initiatives, and operational improvements: anywhere you can estimate impact, confidence, and ease.

How often should I update my ICE scores?

Re-score quarterly or after major learnings. As you collect real data from tests, update Confidence and Impact to keep your roadmap current.

What are common ICE pitfalls?

Watch out for bias in your Confidence scores and undervaluing cross-team dependencies in Ease. Mitigate this by using group scoring and factoring in hidden costs.

You've ranked your roadmap with ICE. Now run your top bets through the CrackGrowth diagnostic to uncover hidden UX friction and supercharge your launch.