suspiciously precise floats, or,
how I got Claude's real limits
If you're reading this, you're probably aware that Claude's subscription plans are a much better deal than the API. But how much better exactly, and what are the actual limits? I extracted the exact values from two unrounded floats and found some notable things. I'll explain how I did it later, but first, the results.
findings
The 20× plan is not as good a deal as you might expect. On Anthropic's site, all mentions of "20× more usage*" have that pesky asterisk, and it's doing a lot of work. The five-hour session limits really are 20× Pro's, but the real question is how much work you can actually get out of the plan. The answer: only twice as much per week as the 5× plan.
On the other hand, the 5× plan gives you great value for money. It overdelivers on what it promises pretty significantly and is the sweet spot of the pricing table. You get a session limit six times Pro's (not five), and a weekly limit more than eight times higher (well above the eponymous five).
| Tier | Credits/5h | Credits/week |
|---|---|---|
| Pro | 550,000 (1×) | 5,000,000 (1×) |
| Max 5× | 3,300,000 (6×) | 41,666,700 (8.33×) |
| Max 20× | 11,000,000 (20×) | 83,333,300 (16.67×) |
Compared to API pricing, all plans come out looking fantastic. The value estimates in the table are lower bounds, since caching makes the effective API-equivalent even more favorable (as I'll explain in a moment). In any case, if you can use plan pricing instead of the API, go for it.
| Tier | Price | Credits/month | Opus-rate tokens | Equivalent API cost |
|---|---|---|---|---|
| Pro | $20 | 21.7M | 32.5M in or 6.5M out | $163 (8.1×) |
| Max 5× | $100 | 180.6M | 270.9M in or 54.2M out | $1,354 (13.5×) |
| Max 20× | $200 | 361.1M | 541.7M in or 108.3M out | $2,708 (13.5×) |
There's one thing that's not in this table that's very important.
Cache reads. They're entirely free.
This makes the math even more stacked in favor of the plans. In an agentic loop (e.g. Claude Code), the model makes dozens of tool calls per turn, and after every tool call the model is invoked again, re-reading the entire context from cache. The API charges 10% of the input price for every one of those reads; subscriptions charge nothing. This adds up fast, as we'll see in a second.
Cache writes are also discounted: they cost 1.25×/2×1 the input price in the API, while on the plan they're charged at the regular input price. Every chat turn gets written to cache before it can be read, so this matters as well.
credits
So what are these credits I keep talking about?
They're the unit used internally to keep track of your plan usage. "Credits" is my arbitrary name for them; these values don't appear directly in any API field, so there's no obvious word for them. I think "credits" sounds fine.
How do we get from credits to token counts? Here's the formula:
credits_used = ceil(input_tokens × input_rate + output_tokens × output_rate)
...and the values you plug into it:
| Model | Input credits/token | Output credits/token |
|---|---|---|
| Haiku | 2/15 = 0.133... | 10/15 = 2/3 = 0.666... |
| Sonnet | 6/15 = 2/5 = 0.4 | 30/15 = 2 |
| Opus | 10/15 = 2/3 = 0.666... | 50/15 = 10/3 = 3.333... |
The specific values look fairly arbitrary, but the ratios between them mirror API pricing: output costs 5× input, Opus costs 5× Haiku, and so on.
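As a sanity check, here's the formula in code, using exact fractions so no float error sneaks into the ceil (the `RATES` table and the `credits_used` name are mine, not anything Anthropic exposes):

```python
from fractions import Fraction
from math import ceil

# (input, output) credit rates per token, from the table above.
RATES = {
    "haiku":  (Fraction(2, 15), Fraction(10, 15)),
    "sonnet": (Fraction(6, 15), Fraction(30, 15)),
    "opus":   (Fraction(10, 15), Fraction(50, 15)),
}

def credits_used(model: str, input_tokens: int, output_tokens: int) -> int:
    in_rate, out_rate = RATES[model]
    return ceil(input_tokens * in_rate + output_tokens * out_rate)

print(credits_used("opus", 100_000, 1_000))  # 70000, matching the cold-cache example below
```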
Let's try it out on some practical numbers.
We'll start with the realistic worst case: caching is enabled, but the cache is cold. (The true worst case is caching disabled entirely, but that's rare.)
Cold cache (100K cache write + 1K output)
Subscription credits
ceil(100K × 2/3 + 1K × 10/3) = 70,000
API cost
Cache write: 100K × $5/M × 1.25 = $0.625
Output: 1K × $25/M = $0.025
Total: $0.650
Max 5× weekly
floor(41,666,700 / 70,000) = 595 req/wk
595 × $0.650 = $386.75/week
$386.75 × 52/12 = $1,676/mo
You're paying $100/mo -> 16.8× value
This is already great value, and it's the real-world baseline (even with caching off, it was still ~13×). Once you're in a loop and the cache is warm, it gets a lot better:
Warm cache (100K cache read + 1K cache write + 1K output)
Notice that on the subscription side we only pay for the 1K newly written tokens (plus output); the 100K cache read is free. On the API side, that read still costs 10% of the input price.
Subscription credits
ceil(1K × 2/3 + 1K × 10/3) = 4,000
API cost
Cache read: 100K × $5/M × 0.1 = $0.05
Cache write: 1K × $5/M × 1.25 = $0.00625
Output: 1K × $25/M = $0.025
Total: $0.08125
Max 5× weekly
floor(41,666,700 / 4,000) = 10,416 req/wk
10,416 × $0.08125 = $846.30/week
$846.30 × 52/12 = $3,667/mo
You're paying $100/mo -> 36.7× value
Over thirty-six times more value than the API.
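If you want to re-run this comparison with your own numbers, here's a rough sketch of the arithmetic above (the $5/M input and $25/M output Opus API prices are the ones used in the examples; all the names are mine):

```python
from fractions import Fraction
from math import ceil, floor

WEEKLY_CREDITS = 41_666_700                          # Max 5× weekly limit
IN_RATE, OUT_RATE = Fraction(2, 3), Fraction(10, 3)  # Opus credits per token
API_IN, API_OUT = 5 / 1e6, 25 / 1e6                  # Opus API prices, $ per token

def plan_value(label, credits_per_req, api_cost_per_req, plan_price=100):
    reqs_per_week = floor(WEEKLY_CREDITS / credits_per_req)
    api_monthly = reqs_per_week * api_cost_per_req * 52 / 12
    print(f"{label}: {reqs_per_week} req/wk, ${api_monthly:,.0f}/mo via the API, "
          f"{api_monthly / plan_price:.1f}x the plan price")

# Cold cache: 100K-token cache write + 1K output
plan_value("cold",
           ceil(100_000 * IN_RATE + 1_000 * OUT_RATE),                        # 70,000 credits
           100_000 * API_IN * 1.25 + 1_000 * API_OUT)                         # $0.650

# Warm cache: 100K cache read (free on the plan) + 1K cache write + 1K output
plan_value("warm",
           ceil(1_000 * IN_RATE + 1_000 * OUT_RATE),                          # 4,000 credits
           100_000 * API_IN * 0.1 + 1_000 * API_IN * 1.25 + 1_000 * API_OUT)  # $0.08125
```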
Okay, enough of the takeaways. I promised suspiciously precise floats.
forensics
How did I get all these numbers?
Last fall, a new tab appeared on the Claude.ai settings page: the usage tab, showing your remaining limits as two progress bars.2 Very soon after, I found myself flipping back and forth between my Claude chats and that page. Especially if your chats are long (and uncached, but that's a different story), those limits can run out quickly.
I decided to make an extension.3 First, I looked at how the usage page itself was implemented. Pretty straightforward: a /usage endpoint returns a tiny JSON snippet with the numbers rounded to the nearest percentage point. That was enough for me, since what I really wanted was an easier way to view those numbers.
But I kept digging, and soon found something interesting. On a Max 5× account, the SSE responses from the generation endpoint had usage values as unrounded doubles: 0.16327272727272726.
Suspiciously specific. Almost looks like some kind of fraction converted to decimal. Can we recover the underlying fraction and get the real limits? Turns out we can.
step 1: bucket
When a real number becomes a float, it rounds to the nearest representable value. That float represents ALL rationals in a tiny interval [L, U) that would round to it. The width of that interval is ~10⁻¹⁷ for values in the 0–1 range we're working with.4
The original fraction (before it became a float) must lie in that [L, U). We want to get the simplest fraction from that interval.
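Here's roughly what that bucket computation looks like in Python: `Fraction(x)` gives the exact rational value of a double, and `math.nextafter` gives its neighboring doubles (the `rounding_bucket` name is mine; exact tie handling at the endpoints doesn't matter at this scale):

```python
from fractions import Fraction
import math

def rounding_bucket(x: float) -> tuple[Fraction, Fraction]:
    """All reals in [L, U) round to the double x (ignoring ties at the edges)."""
    below = Fraction(math.nextafter(x, -math.inf))
    above = Fraction(math.nextafter(x, math.inf))
    exact = Fraction(x)                  # the double's exact rational value
    return (below + exact) / 2, (exact + above) / 2

L, U = rounding_bucket(0.16327272727272726)
print(float(U - L))  # ~2.8e-17
```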
Why the simplest? Any decimal fraction can be converted to a common one, but that doesn't give us any more info (in this example: 16327272727272726/10¹⁷).
But wait: suppose the source fraction is 2/10; we would recover 1/5! Won't getting the simplest fractions give us false positives? With one sample, yes. This is why we get multiple samples and later compute the lowest common denominator. If the true denominator were 10, we'd sometimes recover 5 or 2 (because everything gets simplified), but we'd never recover a denominator that doesn't divide 10. So the LCM can only grow; it can't overshoot the real limit. After a few samples, the chance that the real denominator is higher becomes vanishingly small.
step 2: the fancy math thing
So back to the bucket, how do we find the simplest fraction in it?
The Stern-Brocot tree is a binary search over ALL positive fractions, ordered by value, but constructed so that simpler fractions are found first. Starting from 0/1 and 1/0 (infinity), each step takes the mediant of the two endpoints (add the numerators, add the denominators) and narrows toward the target interval. Here's an example for finding the fraction for 0.4:
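A minimal sketch of that search in Python (the `simplest_in_interval` helper and its names are mine, not code from the extension). Pointed at the rounding bucket of the double 0.4, it walks 1/1 -> 1/2 -> 1/3 -> 2/5 and stops:

```python
from fractions import Fraction
import math

def simplest_in_interval(lo: Fraction, hi: Fraction) -> Fraction:
    """Walk the Stern-Brocot tree until a mediant lands inside [lo, hi)."""
    ln, ld = 0, 1            # left endpoint  0/1
    rn, rd = 1, 0            # right endpoint 1/0 (infinity)
    while True:
        mn, md = ln + rn, ld + rd        # the mediant of the endpoints
        m = Fraction(mn, md)
        if m >= hi:
            rn, rd = mn, md              # mediant too big: move left
        elif m < lo:
            ln, ld = mn, md              # mediant too small: move right
        else:
            return m                     # inside the bucket: done

# The rounding bucket of the double 0.4, built the same way as in step 1.
x = 0.4
lo = (Fraction(math.nextafter(x, -math.inf)) + Fraction(x)) / 2
hi = (Fraction(x) + Fraction(math.nextafter(x, math.inf))) / 2
print(simplest_in_interval(lo, hi))  # 2/5
```

Fed the bucket for our real sample instead, the same walk should land on 449/2750, exactly as the trace below shows.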
In our use-case, we're not aiming for the exact number, but instead a very, very small interval (~10⁻¹⁷). The process looks essentially identical either way.
Back to our original 0.16327272727272726. The first in-bucket hit is a very small-denominator fraction (often the minimum-denominator one):
step 00: left=0/1 right=∞ mediant=1 -> mediant ≥ U, move left
step 01: left=0/1 right=1/1 mediant=1/2 -> mediant ≥ U, move left
step 02: left=0/1 right=1/2 mediant=1/3 -> mediant ≥ U, move left
step 03: left=0/1 right=1/3 mediant=1/4 -> mediant ≥ U, move left
step 04: left=0/1 right=1/4 mediant=1/5 -> mediant ≥ U, move left
step 05: left=0/1 right=1/5 mediant=1/6 -> mediant ≥ U, move left
step 06: left=0/1 right=1/6 mediant=1/7 -> mediant < L, move right
step 07: left=1/7 right=1/6 mediant=2/13 -> mediant < L, move right
step 08: left=2/13 right=1/6 mediant=3/19 -> mediant < L, move right
step 09: left=3/19 right=1/6 mediant=4/25 -> mediant < L, move right
step 10: left=4/25 right=1/6 mediant=5/31 -> mediant < L, move right
step 11: left=5/31 right=1/6 mediant=6/37 -> mediant < L, move right
step 12: left=6/37 right=1/6 mediant=7/43 -> mediant < L, move right
step 13: left=7/43 right=1/6 mediant=8/49 -> mediant < L, move right
step 14: left=8/49 right=1/6 mediant=9/55 -> mediant ≥ U, move left
step 15: left=8/49 right=9/55 mediant=17/104 -> mediant ≥ U, move left
step 16: left=8/49 right=17/104 mediant=25/153 -> mediant ≥ U, move left
step 17: left=8/49 right=25/153 mediant=33/202 -> mediant ≥ U, move left
step 18: left=8/49 right=33/202 mediant=41/251 -> mediant ≥ U, move left
step 19: left=8/49 right=41/251 mediant=49/300 -> mediant ≥ U, move left
step 20: left=8/49 right=49/300 mediant=57/349 -> mediant ≥ U, move left
step 21: left=8/49 right=57/349 mediant=65/398 -> mediant ≥ U, move left
step 22: left=8/49 right=65/398 mediant=73/447 -> mediant ≥ U, move left
step 23: left=8/49 right=73/447 mediant=81/496 -> mediant ≥ U, move left
step 24: left=8/49 right=81/496 mediant=89/545 -> mediant ≥ U, move left
step 25: left=8/49 right=89/545 mediant=97/594 -> mediant ≥ U, move left
step 26: left=8/49 right=97/594 mediant=105/643 -> mediant ≥ U, move left
step 27: left=8/49 right=105/643 mediant=113/692 -> mediant ≥ U, move left
step 28: left=8/49 right=113/692 mediant=121/741 -> mediant ≥ U, move left
step 29: left=8/49 right=121/741 mediant=129/790 -> mediant ≥ U, move left
step 30: left=8/49 right=129/790 mediant=137/839 -> mediant ≥ U, move left
step 31: left=8/49 right=137/839 mediant=145/888 -> mediant ≥ U, move left
step 32: left=8/49 right=145/888 mediant=153/937 -> mediant ≥ U, move left
step 33: left=8/49 right=153/937 mediant=161/986 -> mediant ≥ U, move left
step 34: left=8/49 right=161/986 mediant=169/1035 -> mediant ≥ U, move left
step 35: left=8/49 right=169/1035 mediant=177/1084 -> mediant ≥ U, move left
step 36: left=8/49 right=177/1084 mediant=185/1133 -> mediant ≥ U, move left
step 37: left=8/49 right=185/1133 mediant=193/1182 -> mediant ≥ U, move left
step 38: left=8/49 right=193/1182 mediant=201/1231 -> mediant ≥ U, move left
step 39: left=8/49 right=201/1231 mediant=209/1280 -> mediant ≥ U, move left
step 40: left=8/49 right=209/1280 mediant=217/1329 -> mediant ≥ U, move left
step 41: left=8/49 right=217/1329 mediant=225/1378 -> mediant ≥ U, move left
step 42: left=8/49 right=225/1378 mediant=233/1427 -> mediant ≥ U, move left
step 43: left=8/49 right=233/1427 mediant=241/1476 -> mediant ≥ U, move left
step 44: left=8/49 right=241/1476 mediant=249/1525 -> mediant ≥ U, move left
step 45: left=8/49 right=249/1525 mediant=257/1574 -> mediant ≥ U, move left
step 46: left=8/49 right=257/1574 mediant=265/1623 -> mediant ≥ U, move left
step 47: left=8/49 right=265/1623 mediant=273/1672 -> mediant ≥ U, move left
step 48: left=8/49 right=273/1672 mediant=281/1721 -> mediant ≥ U, move left
step 49: left=8/49 right=281/1721 mediant=289/1770 -> mediant ≥ U, move left
step 50: left=8/49 right=289/1770 mediant=297/1819 -> mediant ≥ U, move left
step 51: left=8/49 right=297/1819 mediant=305/1868 -> mediant ≥ U, move left
step 52: left=8/49 right=305/1868 mediant=313/1917 -> mediant ≥ U, move left
step 53: left=8/49 right=313/1917 mediant=321/1966 -> mediant ≥ U, move left
step 54: left=8/49 right=321/1966 mediant=329/2015 -> mediant ≥ U, move left
step 55: left=8/49 right=329/2015 mediant=337/2064 -> mediant ≥ U, move left
step 56: left=8/49 right=337/2064 mediant=345/2113 -> mediant ≥ U, move left
step 57: left=8/49 right=345/2113 mediant=353/2162 -> mediant ≥ U, move left
step 58: left=8/49 right=353/2162 mediant=361/2211 -> mediant ≥ U, move left
step 59: left=8/49 right=361/2211 mediant=369/2260 -> mediant ≥ U, move left
step 60: left=8/49 right=369/2260 mediant=377/2309 -> mediant ≥ U, move left
step 61: left=8/49 right=377/2309 mediant=385/2358 -> mediant ≥ U, move left
step 62: left=8/49 right=385/2358 mediant=393/2407 -> mediant ≥ U, move left
step 63: left=8/49 right=393/2407 mediant=401/2456 -> mediant ≥ U, move left
step 64: left=8/49 right=401/2456 mediant=409/2505 -> mediant ≥ U, move left
step 65: left=8/49 right=409/2505 mediant=417/2554 -> mediant ≥ U, move left
step 66: left=8/49 right=417/2554 mediant=425/2603 -> mediant ≥ U, move left
step 67: left=8/49 right=425/2603 mediant=433/2652 -> mediant ≥ U, move left
step 68: left=8/49 right=433/2652 mediant=441/2701 -> mediant ≥ U, move left
step 69: left=8/49 right=441/2701 mediant=449/2750 -> HIT
Round-trip check: 449/2750 prints as 0.16327272727272726 ✓
step 3: lowest common denominator
Each sample gives a denominator. The true limit D must be divisible by all of them. Example: 449/2750 and 11401/75000 both scale to a denominator of 3,300,000.
Why does the limit have to be divisible by all of them? Because the utilization number is literally "used / limit". When we recover a fraction from the float, we get it simplified. So the denominator we recover can only be a divisor of the real limit.
So you take a few samples, collect the denominators, and take their LCM. At first it will jump around. Then it stops. Once it stops across a bunch of different usage amounts, that LCM is your limit. You can still get unlucky, but after a few samples the odds get really small.
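In code, that last step is a one-liner over the recovered denominators (here just the two samples mentioned above):

```python
from math import lcm

# Denominators recovered from individual samples.
denominators = [2750, 75000]
print(lcm(*denominators))  # 825000
```

With only these two samples the LCM is still just a divisor of the real limit; a few more samples push it up to 3,300,000, and then it stops moving.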
step 4: Feynman method5
Lastly, how did I get the credit-token formulas and model multipliers? A whole bunch of manual data collection, then automated collection once I modded my extension to save that data as I chatted. I put it all in a table and stared at it a lot. Asked Claude. Asked GPT. Came up with hypotheses, tested them, and ultimately ended up with the tables and formula above. I wish I had more to say about this step, but it was a chaotic process and I didn't keep notes; the important part is that I've validated the final numbers and they check out exactly.
conclusion
Side channels are everywhere. I don't think anyone at Anthropic expected to leak their exact pricing table just by forgetting to round two numbers.
You should get a plan if you can. Claude Code on API pricing just doesn't make financial sense compared to the plans for most people. The main exception is if you're forced onto the API for organizational reasons (enterprise/team setups, procurement, etc.), in which case the comparison is less relevant.
If you care about Claude's usage limits (and you're at the bottom of an article that extensively explains them, so I assume you do), try my Claude Counter extension. It shows you the cache timer, the usage bars right in the composer box (with full precision) and more.
As of writing, the floats remain unrounded and suspiciously precise. I expect that if this post gets any attention, that might not last very long. I'll be a little sad about it, because it'll make my extension slightly worse. (I'll have to fall back to the /usage endpoint, the same one the official usage page uses, which is rounded.)