The piece isn't an independent analysis: the author has an obvious interest in Zitron being wrong, and in fact the piece closes with a nice marketing self-plug. But that aside, the author doesn't actually refute Zitron's points. One of the main arguments is "the comparison with Netflix is wrong", which proves nothing by itself; the author then tries to show that inference is profitable. But just as in their baker analogy, you must factor in all the other costs, including training new models. Worthless marketing plug.
tudorizer 10 hours ago [-]
Well... what is an independent analysis? The author is not a reporter, but someone who understands business principles.
I think you got the causality the other way around here.
kingkongjaffa 2 days ago [-]
Interesting read!
This stood out to me:
> ChatGPT 5 and ChatGPT OSS are here with the purpose of profitability
This is economically good, but it's also a signal that their capacity to moonshot is stalling either through lack of funding or lack of innovation. They're now pivoting to a more sustainable model.
Models have seen diminishing returns over the last two generations: GPT-3.5 to 4o to 5.
Doubling parameter size does not double model ability/quality.
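The diminishing-returns point can be made concrete with a back-of-the-envelope sketch using a Chinchilla-style power-law scaling fit, L(N) = E + A/N^α. The coefficients below are rough illustrative values from published scaling-law fits, not OpenAI's actuals; the shape of the curve, not the exact numbers, is the point.

```python
# Illustrative Chinchilla-style scaling law: loss = E + A / N**alpha.
# Coefficients are rough round numbers for illustration only.
E, A, ALPHA = 1.69, 406.4, 0.34

def loss(n_params: float) -> float:
    """Approximate pretraining loss for a model with n_params parameters."""
    return E + A / n_params**ALPHA

# Each doubling of parameter count shaves off a smaller slice of loss:
prev = None
for n in [1e9, 2e9, 4e9, 8e9]:
    l = loss(n)
    gain = "" if prev is None else f" (improvement: {prev - l:.4f})"
    print(f"{n:.0e} params -> loss {l:.3f}{gain}")
    prev = l
```

Because the loss follows a power law, each doubling buys a strictly smaller improvement than the previous one, which is the "doubling parameters does not double quality" observation in miniature.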
In the long term, models will become commodities, interchangeable with competitors' and open-source models. There's no moat; it's unlikely anyone will sustainably have a hugely better model than the next company.
Claude Code is already showing that you can win in a niche with specialization.
I expect 3 things:
1. We won't see massive jumps on model performance again for a while without new techniques.
2. Model makers will specialize in specific use cases, like Claude Code.
3. Moonshot projects like Stargate will not have outsized returns; the step change from o3/o4 models to whatever comes next will not be groundbreaking, partly because of diminishing returns and partly because the average person is bad at explaining what they want an LLM to do.
brenoca 2 days ago [-]
I agree 100%, very valid and fair points.
Regarding moonshot projects, I expect those to exist in light of a potential breakthrough in technique. Kind of connecting points 3 and 1 that you made.
I predict that we will see a breakthrough like we saw with transformers within 5 years from now, due to the new interest and capital (financial and human) being dedicated to this cause.
I think the best OpenAI can do is to make their product a cash cow, by reducing cost and focusing on moonshot breakthroughs to stay ahead of the curve.
Keep in mind that it was almost three decades between CNNs and transformers: CNNs came out in 1989, transformers in 2017. I expect this kind of window to dramatically shorten with the renewed interest in the field.
tudorizer 2 days ago [-]
> because the average person is bad at explaining what they want an LLM to do
Agreed. It's the saving grace for most platforms that integrate LLMs, even right now. E.g. v0 narrows the scope of general-purpose LLMs and offers educated guidance.
It's a good analysis, but I am not sure why you are spending time on this. People who care about your company (investors, users, partners, etc.) are probably sufficiently familiar with AI to disregard shallow analyses like Ed Zitron's. You know the saying: a fool can throw a stone into a pond and 100 wise men can't take it out. It's not worth spending time debunking these pieces.
dazamarquez 18 hours ago [-]
I would say this article is very shallow. Zitron criticizes what he calls the AI bubble from multiple angles; it's not just "they will never be profitable" (and I agree that would be a wild claim). Even in the worst-case scenario where AI is a giant con, as Zitron paints it, they might still become profitable if they can con enough people. I also don't expect people with a stake in any of this to read Zitron's posts and immediately stop what they're doing; that would be silly. I don't think Zitron writes for them, or that what he writes needs "debunking". The way I see it, Zitron mainly advocates for more critical journalism. Regardless of whether he's right or wrong, he does attempt to critically report on AI.
tudorizer 10 hours ago [-]
Right, but critiquing from the right perspective is important. Claims about making a loss must account for the entire economic picture, otherwise at some point they simply aren't true.
A business can't be scrutinized unless its unit economics are understood.
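To illustrate why the full picture matters, here is a hypothetical unit-economics sketch. Every number below is made up for illustration (prices, token volumes, and training costs are not real figures from OpenAI or anyone else); the point is only that inference can look profitable in isolation while amortized training costs flip the sign.

```python
# Hypothetical unit economics: all figures are invented for illustration.
inference_cost_per_1k_tokens = 0.002   # GPU time + serving infra, per 1k tokens
price_per_1k_tokens = 0.01             # what the customer pays, per 1k tokens
tokens_served_per_month = 500e9        # 500B tokens/month

# Inference alone looks profitable...
gross_margin = (
    (price_per_1k_tokens - inference_cost_per_1k_tokens)
    * tokens_served_per_month / 1000
)
print(f"inference gross margin: ${gross_margin:,.0f}/month")

# ...but amortizing a single frontier training run can flip the sign
# (before even counting R&D, salaries, failed runs, etc.).
training_run_cost = 500e6              # one hypothetical frontier training run
amortization_months = 24
net = gross_margin - training_run_cost / amortization_months
print(f"after amortized training: ${net:,.0f}/month")
```

With these invented numbers, inference earns a positive gross margin but the business runs at a loss once training is amortized in, which is exactly the "factor in all the costs" critique made upthread.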
tudorizer 2 days ago [-]
Most likely scratching my own itch. Also to validate/invalidate that I haven't lost my sanity.
Plus, Ed's articles have been circulated in some investment groups and nobody expressed a clear counter-point.
PS. I wasn't familiar with that saying.