hank2000 11 hours ago [-]
Y'all want to see instant? Check out chatjimmy.ai, it'll blow your mind. I'm not affiliated.
But the things it unlocks in a product I’m building are mind blowing. Millisecond inference even on much older models will change the whole game. Enough to run inference on every. Single. API call. Without notable disruption. This sh*t is wild.
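"Inference on every single API call" only works if you enforce a hard latency budget and fail open when the model is slow. A minimal sketch of that pattern (all names here are invented for illustration; `classify_request` is a stub standing in for the real millisecond-latency model call):

```python
import time
from functools import wraps

def classify_request(payload: str) -> str:
    """Stub standing in for a millisecond-latency model call.
    A real deployment would hit an inference endpoint here."""
    return "abuse" if "drop table" in payload.lower() else "ok"

def guarded(handler):
    """Run a classification on every call, but only act on the verdict
    if it arrives within a strict latency budget (fail open otherwise)."""
    @wraps(handler)
    def wrapper(payload: str):
        start = time.perf_counter()
        label = classify_request(payload)
        elapsed_ms = (time.perf_counter() - start) * 1000
        if label != "ok" and elapsed_ms < 50:  # 50 ms budget, illustrative
            return {"error": "request rejected by classifier"}
        return handler(payload)
    return wrapper

@guarded
def echo(payload: str):
    """Example API handler wrapped by the per-call classifier."""
    return {"result": payload}
```

The fail-open check (`elapsed_ms < 50`) is the design choice that makes this viable: if the classifier ever stalls, requests still go through and latency stays bounded.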
sunnybeetroot 6 hours ago [-]
Do you have more information on this? I thought Groq was fast, but this is insane.
GPT-5.3 Instant wasn't even close to instant. Even with the lowest effort setting, it's like 3-4x the best-case TTFT of GPT-4.1.
I know, I know... but they are the ones labeling them "instant". There is a real need for a refresh of the datacenter workhorse that is GPT-4.1.
Also, how TF are you going to have an "instant" model release and not mention the latency characteristics at all?
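For anyone who wants to check the TTFT claims themselves: time to first token is just the delay until the first non-empty chunk of a streamed response. A small helper that works with any client yielding text deltas (the `fake_stream` below is a simulated stand-in, not a real model call):

```python
import time
from typing import Iterable, Optional

def time_to_first_token(chunks: Iterable[str]) -> Optional[float]:
    """Return seconds until the first non-empty chunk arrives,
    or None if the stream produces nothing."""
    start = time.perf_counter()
    for chunk in chunks:
        if chunk:
            return time.perf_counter() - start
    return None

def fake_stream():
    """Simulated token stream: ~30 ms to first token."""
    time.sleep(0.03)
    yield "Hello"
    yield ", world"

ttft = time_to_first_token(fake_stream())
```

In practice you'd pass it the text deltas from a streaming API response instead of `fake_stream()`.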
simianwords 17 hours ago [-]
I wonder what the difference is between this and GPT-5.5 Thinking with zero thinking effort. Interesting product decision to have different models.
pants2 17 hours ago [-]
Good question. I find that GPT-5.5 thinking is very good at not thinking for simple questions, so much so that I've never had the need to use the instant model even for quick Q&A.
I'm assuming the instant model, then, is an entirely different smaller model mainly serving the free tier of ChatGPT.
simianwords 17 hours ago [-]
It is an entirely free model, but it is also the model that most users (even paid) interact with until the router pushes them to the thinking model.
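The routing behavior described above is presumably a learned classifier, but the idea can be sketched with a toy heuristic (the cue list and thresholds below are invented for illustration):

```python
def route(prompt: str) -> str:
    """Toy router: send prompts that look like they need multi-step
    reasoning to the 'thinking' model, everything else to 'instant'.
    A real router would be a learned model, not keyword matching."""
    reasoning_cues = ("prove", "step by step", "debug", "derive", "plan")
    p = prompt.lower()
    if len(prompt) > 500 or any(cue in p for cue in reasoning_cues):
        return "thinking"
    return "instant"
```

Usage: `route("What's the capital of France?")` stays on the instant model, while a long or reasoning-heavy prompt gets escalated.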
pants2 17 hours ago [-]
Good point. I feel like this does a disservice to ChatGPT -- IIRC even the free tier of Claude points you to Sonnet 4.6 by default, which is far better than the 5.3 Instant that has been the default in ChatGPT.
Hence most users will immediately think Claude is smarter, even if their best models are on par.
simianwords 17 hours ago [-]
Then again, I think the free Sonnet 4.6 only allows ~5 requests a day, while GPT allows more than 50.
while_true_ 16 hours ago [-]
Correct. I have the $20/month plan and I just checked: the default is 5.3 Instant. I can manually switch it to Thinking, which is 5.5. I also have it set to auto-switch.
jflskajfsd 18 hours ago [-]
> Big increase in intelligence at the cheapest price
I don't see price listed anywhere, do you? This isn't even on their models page yet.
BoxedEmpathy 15 hours ago [-]
Is this available in the API? I didn't see an instant model, only chat.
timpera 15 hours ago [-]
> GPT‑5.5 Instant is rolling out starting today to all ChatGPT users, replacing GPT‑5.3 Instant as the default model, and in the API as chat-latest.
tngranados 18 hours ago [-]
Looks like it gives more readable answers; I hope it does, because the regular free ChatGPT model right now is insufferable.
OutOfHere 15 hours ago [-]
Why can't they be more consistent about releasing the Instant and Thinking models at the same time for each version number? Why all this duplicative drama?
phainopepla2 12 hours ago [-]
It's probably a modified version of the thinking model. If that's the case, releasing them at the same time would mean delaying the thinking model's release.
dude250711 18 hours ago [-]
Nice, something actually usable and at an affordable price.
EDIT: it’s this company https://taalas.com/products/