Previous rumors I heard claimed Mythos would be the first 10T model, and that distilling it down to 1T would blow GPT 5.4-high out of the water.
I cannot help but agree that OpenAI killing Sora sounds desperate. The latest GPT models waste an absurd amount of time and tokens on thinking. It looks like OpenAI is stagnating and their last resort is investing more tokens into thinking.
I actually like GPT's thinking. Today, while reviewing, I found that Opus Max had hallucinated spec elements on me last week despite being fed the spec in question (as a .txt - love the IETF for that). GPT didn't spot that error in its own review despite all that thinking, fwiw, BUT this has now happened to me twice with Opus in 4 weeks.
I haven't had GPT hallucinate in a while now. So there may be something to their (extremely costly) approach.
I'm not saying their approach of increasing thinking is bad. It's quite good. My speculation is that it means OpenAI was stagnating at a deeper level under the hood. I may or may not be proven wrong in a few weeks.
Bottom line, the outcome matters more than the speed. Since it's unlikely that any of us plebs is licensed to run GPT or Claude on our own hardware, we're renting GPU ticks anyway: just run multiple CC / Codex instances in parallel.
In fact, my poor overburdened soul would be happier to not get queue-stressed by the review load, lol. Slow down, bots, slow down.
Really, I got the vibe that CC has the market share, but that a lot of serious developers with "hard" problems prefer Codex 5.4.
Seems to be a two-horse race right now... both are threatening the release of a game-changing new model...
I use both now. And GLM-5 as a third. But I'm still not happy.
Maybe in 6 months they will meet your standards... things seem to be really accelerating. I wish they would slow down :/
The bots should slow down.
The hype should slow down.
But can they please hurry the fuck up delivering on those promises that they will take all the jobs? I'm stressed. Lol.
https://twiiit.com/birdabo/status/2038257206815301877