Confirming Ed Zitron's careful analysis of the situation with SoftBank, Microsoft, and OpenAI: https://www.wheresyoured.at/optimistic-cowardice/.
"Outside of NVIDIA, nobody is making any profit off of generative AI, and once that narrative fully takes hold, I fear a cascade of events that gores a hole in the side of the stock market and leads to tens of thousands of people losing their jobs."
You'll note that the prospect story mentions Zitron multiple times.
There is a move toward software again; in my experience, a lot of the "AI"-oriented investment so far was around hardware [0].
In my opinion, it was a side effect of how hardware became sexy again as an investment story, though in practice it was the fabless players like Nvidia and the diversified players like Broadcom that could take full advantage of that "AI" and "hardware" story, because of their better margins.
I've also been very open about my opinion that companies that sell a foundational model cannot build a strong long-term moat, as overtrained or specialized models will outcompete foundational models in PoCs and domain-specific applications, and the actual hardware needed isn't that exorbitant, as DeepSeek has shown (that said, I don't think they spent $6m - it was probably in the $20-50m range, but even then fairly reasonable for most organizations).
[0] - https://www.reuters.com/technology/artificial-intelligence/u...
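For a rough sense of where that "$6m" headline figure comes from, and how it stretches toward the $20-50m range, here is a back-of-the-envelope sketch; the GPU-hour total and the $2/hour rate are the assumptions in DeepSeek's own V3 report, while the higher hourly rates are hypothetical:

```python
# Back-of-the-envelope training-cost estimate.
# 2.788M H800 GPU-hours and $2/GPU-hour are the figures assumed in the
# DeepSeek-V3 technical report; the higher rates are hypothetical, showing
# how the estimate moves if the compute is priced closer to Western cloud rates.
gpu_hours = 2_788_000

for usd_per_gpu_hour in (2.0, 8.0, 15.0):
    cost_millions = gpu_hours * usd_per_gpu_hour / 1e6
    print(f"${usd_per_gpu_hour:.0f}/GPU-hour -> ~${cost_millions:.1f}M")

# $2/GPU-hour  -> ~$5.6M  (the widely quoted "~$6m" figure)
# $8/GPU-hour  -> ~$22.3M
# $15/GPU-hour -> ~$41.8M
```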
I do worry about the viability of OpenAI in particular. So much of its talent went to other firms, which then built up amazing capabilities, like Anthropic with Claude. And then they also have the threat of open-source models like DeepSeek v3.1 and soon DeepSeek R2, while at the same time OpenAI is raising its prices to absurd levels. I guess they are trying to be the Apple of the AI world... maybe...
That said, I expect protectionist policies will be enacted by the US government to protect them and also X.AI/Grok from foreign competition, in particular Chinese.
At the extreme end, some members of the administration have already floated ideas like prosecuting users who download Chinese models.
> worry about the viability of OpenAI in particular [...] they also have the threat of open-source models
It's a real shame OpenAI didn't succeed with their core and most fundamental mission of being open and improving humanity as a whole, where new FOSS models would have been seen as a success, not as a competitor or threat. We're all worse off because of it.
You frame it like they sincerely had this mission at all. Which I doubt seriously. Why would anyone who funded them have such aim?
Well, I mean, if we take what they themselves said (at the time) as truth, then that was their sincere mission:
> OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.
https://openai.com/index/introducing-openai/
Of course, they could have been lying at that point, in order to gain interest I suppose, but they were pretty outspoken about that idea at least in the beginning.
>Of course, they could have been lying at that point, in order to gain interest I suppose, but they were pretty outspoken about that idea at least in the beginning.
I don't see how those notions contradict each other when it's all to get funding.
Or, to put it another way, you presume that investors want to hear how they will make money, as opposed to wanting to hear that OpenAI can 'talk the talk'
In general, you should never take anything a corporation or an extremely wealthy person says as sincere.
Like an LLM, what they say corresponds occasionally with "beliefs" but don't confuse that with holding sincere beliefs. It's mostly transactional.
The few exceptions to this rule - who are rarely the most wealthy/powerful - are generally considered traitors to their class.
> Of course, they could have been lying at that point, in order to gain interest I suppose, but they were pretty outspoken about that idea at least in the beginning.
We're talking about Sam Altman here. Of course he was lying from the beginning, just as he's lying to his investors now about the future of his company. I think most of us are just confused why the world seems to have perpetual amnesia when it comes to Sam Altman—he can do and say whatever he wants to get ahead and a year later people are still taking him at his word.
I don't mean to engage in conspiratorial thinking but I believe in this industry we have to assume dishonesty. It's just so many orders of magnitude easier to run a con than actually do the thing being promised.
Some of the parties involved are known for long-cons, shameless backstabbing, and generally ruthless self-interest.
So I wouldn't be surprised if the "open" and "non-profit" was just a thin PR veneer from the start.
It would also explain how values consistent with their supposed mission seem to have been zero consideration in hiring. (Given that, during what looked like a coup, almost all of those hires lined up to effectively say they would discard the values in exchange for more money.)
In some ways it resembles the drug industry: heavy investment in what looks like a promising line of development, only to have it flop, like GPT-4.5 with its marginal improvements.
Although I dislike the pricing, OpenAI do sit on some of the best models/processes. o1 Pro mode is a literal beast even compared to the newer R1 and Sonnet 3.7. I'm not sure I'd call it 10x better (than R1 specifically), but it certainly is better (and slower).
Anthropic didn’t build shit, it’s got the same closed output rules as ClosedAI and xAI and Perplexity and the Gemini API. Send them your precious questions and code and what do you get back? Output with a prohibition-era policy telling you not to use it to compete with their thing that does everything. That’s such a dumb deal, I immediately think people are dumb when I hear them mention using closed output AI services. Government protections for explicitly anticompetitive services? What a joke!
> Anthropic didn’t build shit, it’s got the same closed output rules as ClosedAI and xAI and Perplexity and the Gemini API
Anthropic was founded by ex-OpenAI employees and they built an effective competitor to OpenAI as evidenced by their valuation, e.g. +$60B USD. If building a company with a +$60B valuation is considered shit, well, I guess I want to know what you built that is better?
See anything related to NFTs as an example, or WeWork immediately sprang to mind as another one.
Did Adam Neumann build anything noteworthy, or was he charismatic and connected enough to dupe many high net worth individuals and investment firms?
It also seems OpenAI is struggling to scale. I'm a premium subscriber and the site is totally unusable for multiple hours every week. Now that Claude has web search I may switch away from OpenAI permanently.
It is not only the downtime but they have much more restrictive chat usage limits. That coupled with historically not having enough GPUs to service load has made it sometimes difficult to use. On that last point, it has gotten better in recent weeks.
This is also happening to Anthropic. The adoption of AI is accelerating right now, or at least token usage is accelerating, with agentic workflows coming online.
iOS and macOS are Apple's moat; OpenAI has Altman's big mouth and all that nonsense from that big mouth.
When did they raise prices? I don't recall them ever raising prices.
> That said, I expect protectionist policies will be enacted by the US government to protect them and also X.AI/Grok from foreign competition, in particular Chinese
People love to say this, but it's hard to imagine which American or European businesses would be actively using models that are being run and hosted within mainland China. The risks are too great. Protectionist policies can be entirely ignored.
> I am referring to their new models having absurdly high prices. GPT-4.5 is $75/150 and o1 is $15/60, whereas GPT-4o is $2.5/10.
That is a new product not a price increase.
> The models are open source so one can run them locally.
Again, not sure how any policies will stop opensource code from being run. At any rate those models still don't compare to O1 Pro and the full tool suite.
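To put the prices quoted above in perspective (they are per million input/output tokens), here is a rough cost sketch; the 100k-input / 20k-output daily workload is just a hypothetical example:

```python
# Per-million-token (input, output) prices in USD, as quoted above.
prices = {
    "gpt-4.5": (75.0, 150.0),
    "o1": (15.0, 60.0),
    "gpt-4o": (2.5, 10.0),
}

# Hypothetical daily workload: 100k input tokens, 20k output tokens.
input_tokens, output_tokens = 100_000, 20_000

for model, (in_price, out_price) in prices.items():
    cost = input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price
    print(f"{model}: ${cost:.2f}/day")

# gpt-4.5: $10.50/day, o1: $2.70/day, gpt-4o: $0.45/day - roughly a 20-30x
# spread between the newest premium model and the workhorse model at the same usage.
```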
> Again, not sure how any policies will stop opensource code from being run.
There is already a bill in circulation that would ban the "import" of Chinese artificial intelligence technology and you can be sure that this means you won't be able to download the DeepSeek app or download its model:
https://www.hawley.senate.gov/hawley-introduces-legislation-...
https://ca.finance.yahoo.com/news/deepseek-users-could-face-...
The stated penalties are up to "20 years in prison and a $100 million fine."
This bill likely is an early one but this is the direction that many want to head in.
> and you can be sure that this means you won't be able to download the DeepSeek app or download its model
The first part is clear, but my thesis is that no Western company would be downloading a DeepSeek app in the first place.
The second part is much more ambiguous, and unfortunately, the bill lacks specificity. I’d be surprised if it made it very far, and even more surprised if any of the hyperscalers supported it. It doesn’t do much to protect their moat because, again, no one in the Western world is going to run a Chinese-hosted model anyway. And without clearer language, it could end up blocking the use of foreign research that could actually strengthen U.S. companies.
> The second part is much more ambiguous, and unfortunately, the bill lacks specificity.
OpenAI wants to ban the use of DeepSeek models: "As with Huawei, there is significant risk in building on top of DeepSeek models in critical infrastructure and other high-risk use cases given the potential that DeepSeek could be compelled by the CCP to manipulate its models to cause harm." from: https://cdn.openai.com/global-affairs/ostp-rfi/ec680b75-d539...
> I’d be surprised if it made it very far, and even more surprised if any of the hyperscalers supported it
OpenAI is calling for legislation:
https://techcrunch.com/2025/03/13/openai-calls-deepseek-stat...
> no one in the Western world is going to run a Chinese-hosted model anyway
AWS offers DeepSeek already:
https://aws.amazon.com/blogs/machine-learning/deepseek-r1-mo...
> OpenAI is calling for legislation:
OpenAI is not a hyperscaler.
I am not disagreeing but I think (but could be wrong) that most of this is a moot point over the medium-term.
Worried? Their mission is to make sure that AI benefits all of humanity. Surely they must be thrilled that there is a ton of competition undercutting their prices and eating their market share. I bet Sam Altman is popping champagne as we speak.
> We are building safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.
> Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity.
Source: https://openai.com/about/
We appreciate China for its strong push toward open-source AI. Without models like DeepSeek and Qwen, the U.S. was set to dominate AI with closed-source systems, charging tens of billions in rent every month while deciding who gets access based on politics.
"Hey, Eritrea, you're authoritarian—you can't use our democratic AI until you democratize."
"Hey, Saudi Arabia and Qatar, you're not authoritarian—you can have our AI."
Once again, thank you, Chairman Xi, for saving us from this nonsense.
Just an anecdote, my father when he was younger used to be a great driver. He would drive all around Europe on our vacations, in capital cities (e.g. Paris) with nothing but a tourist map (mother holding the map) and his eyes and ears.
Places I wouldn't really like to drive myself today because of the difficulty (Italy etc.).
For many years he's been using GPS to assist him, first TomTom then iPhone and now his new car has CarPlay.
He's a much worse driver today, constantly distracted, constantly looking at the screen instead of his surroundings and actually learning, using his brain.
I myself like to ride MTB in the woods, I know all the trails and small features in the back of my head. I'm aware of my surroundings and it just sticks without any effort over the years.
I have friends I ride with who use a trail map on their phone or bike computer, many years later they still don't know the trails because they're always looking at their devices.
Sometimes we stop to "have a look" where the fork in the trail goes, drives me mad when it interrupts the flow of the ride.
Do you have more details on which aspects of Epictetus' philosophy are related to hating to outsource mental work? I'm not familiar enough with stoicism. Thanks
(I could ask chatgpt, I ask you instead =p)
PS: Claude 3.5 Haiku take on it: "For Epictetus, the process of thinking is more important than the result. By outsourcing mental work, people surrender their most valuable tool for self-improvement - their own mind."
I'm usually skeptical of doomer articles about new technology like this one, but reluctantly find myself agreeing with a lot of it. While AI is a great tool with many possibilities, I don't see the alignment of what many of these new AI startups are selling.
It makes my work more productive, yes. But it often slows me down too. Knowing when to push back on the response you get is often difficult to get right.
This quote in particular:
>Surveys confirm that for many workers, AI tools like ChatGPT reduce their productivity by increasing the volume of content and steps needed to complete a given task, and by frequently introducing errors that have to be checked and corrected.
This sort of mirrors my work as a SWE. It does increase productivity and can reduce lead times for task completion. But requires a lot of checking and pushback.
There's a large gap between increased productivity in the right field in the right hands vs copying and pasting a solution everywhere so companies don't need workers.
And that's really what most of these AI firms are selling. A solution to automate most workers out of existence.
I'm sure I've read somewhere - and it annoys me immensely that I can't recall the source - that SWEs perceive they are more productive with AI, but the measurements say they aren't.
This seems right, intuitively, but I'd love to see a source.
I've noticed that when I detect chatbot-style code in a coworker's PR, I often find subtle problems. But it's harder for me to spot the problems in code I got out of a chatbot because I am primed by the very activity of writing the prompt to see what I desire in the output.
This is my observation. Not a scientific measurement by any means but from what I can see it isn't speeding anyone up
If I had to guess, people feel more productive because they are doing less of the work they are used to and more review / testing, but to reach the same level of confidence the review takes much longer
And the people who are not doing thorough review are producing absolute garbage, and are basically clueless
Tangentially related, I feel that SWEs who claim that they are more productive with AI haven't actually demonstrated with real examples of how they are actually more productive.
Nobody I follow (including some prominent bloggers and YouTubers) claiming a productivity increase is recording or detailing any workflow, or showing real-world, non-hobby (scalable, maintainable, readable, secure, etc.) workflows of how to do it. It's like everyone who "knows what they are doing" is hiding the secret sauce for a competitive edge, or they are all just mediocre SWEs hyping AI up and lying because it makes them more money.
Even real SWEs in large companies I know can't really seem to tell me how their productivity is increasing, and when you dig deeper it always seems to be well-scoped and well-understood problems (which is great, but doesn't match the level of hype and productivity increase that everyone else is claiming) -- and they still have to be very careful with reviewing (for now).
It's almost like AI makes SWE brains go mush and forget about logic and data.
I'm a "data scientist", but I have absolutely improved my productivity in the last year or so by conversing with LLM chatbots to work through tough problems, get ideas, figure out project plans, etc. I can see the effect in my list of completed projects, the overall speed isn't that much higher, but the quality has definitely gone up, because I'm able to work through things more quickly and get to good solutions faster, so I can spend the more time iterating on good ideas and less time trying figure out which ideas are even good.
For programming, meh, it helps when I'm really tired and don't want to read documentation. Can't imagine using it in a serious capacity for writing code except in a huge codebase, where I might want it to explain to me how things fit together in order to make some change or fix a bug.
In fairness, it's also still a NEW technology in the scale of tech. For comparison, it takes years after a gaming console is released for teams to optimize and squeeze every last ounce of performance out of the hardware.
We're just getting started with AI, and we're still "stuck" in the chat interfaces because of the storming success of ChatGPT a few years ago. Cursor, GitHub Copilot etc. are cool but they're still "launch titles" to continue my analogy from above.
New models are still coming out (but slowing down) with increased capabilities, context windows, etc. and I'm sure the killer app is still waiting to be unearthed. In the meantime, I'm having a lot of fun building my hobby code. Collectively, we're going to morph that into something more scalable and enterprisey, it's just a matter of time.
Interestingly, this is the conclusion reached by the major militaries, Axis and Allied alike, at the end of extensive experiments with amphetamines in WWII. They certainly made pilots and soldiers feel more confident, engaged and attentive, but the quality of the output was at best unchanged and at worst markedly inferior.
Depending on how they're used. That's a pretty big caveat. You can't replace sleep with them and expect the same performance. They're still quite useful though.
I can believe that, with personal experience from a non-AI tool! A few years back, I wrote a puzzle solving tool (semaphore decoder) that felt faster than using a lookup table manually, but was actually very similar in time.
Those notes: https://bobbiechen.com/blog/2020/5/28/the-making-of-semaphor...
Regardless of the speed, it certainly felt easier because I didn't have to think as hard, and maybe that extra freshness would improve productivity for later tasks. I wonder if there's any effect like that for AI coding tools - it makes you happier to be less tired.
I perceive quite the opposite. Rarely do I see it producing workable solutions, and it often just creates noise. What’s worse is the mistakes it makes are sometimes nuanced, and not the kind of mistakes a human coder would make, causing me to waste a lot of time finding the mistake. I think it’s more useful to get ideas from, or to treat it like a trainer when learning a new language, but code generation seems really poor to me still. The only ones I see arguing that it’s not the case are junior coders making slop apps that do nothing all that interesting.
basically you "pay cognitively" up front (building an understanding from/while doing) or later (when you have to troubleshoot something in a largely LLM-generated tangle.
Basically it moves from--"oh yeah I wrote something with this schema earlier" to "I saw some DB code fly out around an hour ago; maybe it's there. Where was it? `grep models ./src` wait, was it in `db` and other silly stuff like that.
No free lunches or whatever.
I'm not an extensive LLM user for programming and remain mostly agnostic on its overall uses for development (sure, for a brand-new React thing you're sailing, but a huge old crusty codebase, even in a language well-represented in the training set, is a LOT less promising IME).
However, there are use cases I'd previously have put off until the last possible minute that I now use LLMs for: CloudFormation, various utility bash scripts, simple AWS Lambda functions, and other things I consider annoying chores. For me, these cases alone have been an unambiguous victory.
> Surveys confirm that for many workers, AI tools like ChatGPT reduce their productivity
I'm pretty suspicious of that survey (the one that always gets cited as proof that Copilot makes developers less productive, which then inevitably gets used to argue that all generative AI makes developers less productive): https://resources.uplevelteam.com/gen-ai-for-coding
If I was running a company like https://uplevelteam.com/ that sells developer productivity metrics software, one of the smartest marketing moves I could make would be to put out a report that makes a bold, contrarian claim about a hot topic like AI coding productivity. Guaranteed to get a ton of press coverage from that.
Is the survey itself any good? I filled in the "request a copy" form and it's a two page infographic! Precious few confirmable details on how they actually ran it: https://static.simonwillison.net/static/2025/uplevel-genai-p...
Here's what they say about their methodology:
> Metrics were evaluated prior to implementation of Copilot from January 9 through April 9, 2023 versus after implementation from January 8 through April 7, 2024. This time period was selected to remove the effects of seasonality.
> Data on Copilot access was provided to Uplevel Data Labs across several enterprise engineering customers for a total of 351 developers in the TEST group (with Copilot access) and 434 in the CONTROL group (without Copilot access). The developers in the CONTROL group were similar to those in the TEST group in terms of role, working days, and PR volume in each period.
I said it in a confusing way, but I do believe it increases productivity, at least for me.
But it's hazy and hard to measure. I am very rarely stuck on hard problems like I was 3+ years ago. But I lose time in other ways. I've never measured it so it's really just a feeling.
I am also skeptical of a company selling something to sponsor or put out a report directly related to what they're selling.
If I add in all the time I spend reading about AI now, or wading through AI slop while researching something, any productivity gains I may see from actually using AI effectively are more than cancelled out.
Interestingly, AI tooling existing in its current form may be making people collectively less productive, even if individually it might make one somewhat more productive at very specific tasks.
I am stuck on easy problems. Either bugs that are obvious afterwards or cooperation problems. Almost never hard problems.
Deskilling labour in order to do an end-run around conflict in the workplace is as old as capitalism. It's not just about automating to reduce expenses, but also about reducing both bargaining power and the "specialness" of the worker.
Uppity Google employees raising a stink and not doing their job because of CEOs sexually harassing employees? Or passing around petitions about making weapons or unethical AI? Sharing their compensation package #s to improve bargaining position or deal with perceived injustices around salary differentials?
They only get away with that because they have bargaining power that management would dearly like them not to have.
The aggressive pivot to "AI" LLM tools is the desperate move of a managerial class sick of our uppity shit.
This has been my complaint about AI from the beginning, and it hasn't gotten better. In the time I spend figuring out how to explain to the AI what I need it to do, I can just sit down and figure it out myself. AI, for me, has never been a useful assistant for writing code.
Where AI has really improved my productivity is in acting like a colleague I can talk to. "I'm going to start working on X: how would you approach it? what should I know that isn't obvious from the beginning?" or "I am thinking about using Y approach for X problem but I don't see a lot of literature about it. Is there a reason it's not more common?".
These "chats" only take 10-30 minutes and have already led me to learn a bunch of new things, and helps keep me moving on projects where in the past I'd have spent 2-3x as long working through ideas, searching for literature, and figuring things out.
The combination of "every textbook and journal article in existence" with "more or less understands every topic in its training data" is incredibly powerful, especially for people who either didn't do a lot of formal school training or haven't had time to keep up with new developments.
Beginners can benefit from this kind of interaction too, they'll just be talking about simpler topics (which the bot would do even better with) instead of esoterica.
Had to review submissions to a conference. You had to pry open a thick rind of words, to get seeds of meaning spread all over the place, and then reconstruct the points being made. Wordy, complex and tiring to analyze.
Dumping it into ChatGPT to get answers was an act of frustration, and the output made you more frustrated. It gave you more words, but didn’t help with actual meaning, unless you just gave in and assumed it was accurate.
It’s making the job of verification harder, and the job of content creation easier. This is not to society’s larger benefit, since the more challenging job is verification.
I shudder to think what is happening with teachers and college at this point.
Personally I think there will be things that gen-AI is useful for, such as rapid education for beginners and learning or performance feedback mechanisms. Those use cases are promising, and still in development. Hopefully they'll be cost-effective as well.
Trying to reduce complex decisions to "iq" is by itself a marker for a lack of knowledge and a desperate attempt to make yourself feel superior to others.
IQ != intelligence
Anyway, aside from that - if it's not a bubble, there should be a sustainable business model, meaning, one which doesn't lose you ten times the money you make. So, where is it? Which company, besides nvidia, has actually created a sustainable business model based on AI? Where is its moat?
DeepSeek showed us that right now, OpenAI doesn't have any moat to speak of and can be essentially copied. Not exactly great for future profits.
>Trying to reduce complex decisions to "iq" is by itself a marker for a lack of knowledge and a desperate attempt to make yourself feel superior to others.
>IQ != intelligence
Pot, meet kettle.
IQ might not be actually measuring intelligence, whatever that might mean, but it's highly correlated with various things that are generally agreed to be indicators of intelligence, e.g. educational attainment or performance on standardized tests. For something as woolly as "intelligence", IQ is pretty much as close as we can get, without trying to claim "it can't be measured exactly so we're not going to even try to quantify it". Moreover, it's pretty obvious that the parent commenter is using "iq" as a shorthand for intelligence, not referring to the results of a test that has to be administered by a trained professional and that almost nobody knows their actual value of.
The commenter you replied to made a bad take, but you're basically doing the very thing you're trying to decry, by trying to viciously attack him with accusations of "a lack of knowledge and a desperate attempt to make yourself feel superior to others".
Intelligence is wooly. Making a quotient for it doesn't make it less so.
Using IQ in this fashion, as with many initialisms, can be a means of obfuscating biases, ambiguating, dog whistling, covering one's ass, preening, and discounting nuance. Yes that all is quite pedantic. But I still judge when I see people use them as crutches of (in)articulate communication.
If you have something to say, say it clearly and precisely and embrace the nuance.
My wife and I both work on and with "AI" tools and have for some time. It sure looks like a hype bubble. The stuff's... OK, in limited ways, but total shit if you start trying to use it to revolutionize anything, even in small ways. This has not meaningfully changed in the last couple years, it looks like the whole approach is just slowly converging on a local maximum that's nowhere near what the hype-meisters (Altman) were "warning" (lying, to hype their product) us about.
It continues to be true that the spaces for which it's best-suited to having a great effect on productivity are harmful ones, like spam and scams.
You could instead provide an argument of why it is not a bubble. For example, we are perhaps one breakthrough away from something approximating AGI, etc.
Though the irony is that I myself don't follow it, so a man is full of contradictions: he wants to be seen as one thing when in reality he is something else entirely.
still loads of money to be made in being the company hosting models on your fleet of GPUs. open source models and training paradigms definitely have undercut the proprietary model moat, but you need a good chunk of compute to run these models and not everyone has or wants that compute themselves
This journalist, Ed Zitron, is very skeptical of AI and his arguments border on polemic. But I find his perspective interesting - essentially, that very few players in the AI space are able to figure out a profitable business model:
What does Zitron define as "AI" companies though? (Actually curious).
A lot of "AI" investments in public markets were basically a bundle of fabless players (Nvidia), data center adjacent players (Broadcom), and some applications, whereas private/VC investment was bifurcated between foundational model players (OpenAI, Anthropic), application or AIOps oriented players (ScaleAI, Harness, etc), and rebranded hardware vendors (Cerebras, CoreWeave, Groq, Lambda).
If by AI he means foundational model companies, then I absolutely agree that there is a need to be bearish about their valuations, because they can constantly be outcompeted by domain specific or specialized models.
Furthermore, a lot of the recent "AI" IPO hype is around functionally fabless hardware companies like CoreWeave or Cerebras, which were previously riding high because of the earlier boom in semiconductor and chip investments.
I've been fairly open about this opinion at my day job.
I personally think there is more risk on the commodity compute side because the projected multiples for a Cerebras or CoreWeave are imo unrealistic in such a margins driven market, and there was a DC funding boom that has started lagging over the past couple months, as there appears to be a bit of an overcapacity issue in compute - imo very similar to that in the mid-2000s with servers and networking players like Dell, EMC, Cisco, etc.
That said, I have a similar opinion about foundational model companies as well.
Ed Zitron is trying to be a wittier Kara Swisher, which means there is near zero signal in the anti-tech noise. Which is a shame because he has the talent to be something else.
Would it not be the opposite? I always had the impression Swisher was quite boosterist, very enthusiastic about the industry. I don't know her work too well though
I've only ever seen work of hers that was the left-populist, anti-Big-Tech fare that the whole legacy media had picked up in the 2010s, but it might not be representative.
A big problem is to what extent do future business models disrupt the public perception that AI is "objective" or unbiased, a perception that is often itself the product of these companies' own marketing. When the very results of a prompt to an agent/LLM, such as ChatGPT, are suspected of profit/corporate interests over accuracy, why would anyone use it over the dozen other players in the field? This is precisely the thing that has soured google search results.
The end result of this wave looks increasingly like it will get us an open-web blogspam apocalypse, better search / information retrieval, and better autocomplete for coders. All useful (well, useful to bloggers/spammers at least), though not trillions of dollars in value generated.
Until a new architecture / approach takes root at least.
The software industry as a whole generates trillions of dollars every year. The current state of AI makes coders significantly more efficient and it’s only getting better.
It’s easily worth trillions with just normal speculation.
There is a lot of data to support this, most of it showing 20-50% efficiency gains. For a multi-trillion dollar industry that equates to potentially over a trillion dollars in efficiency in a single year.
This isn't slowing down either; if anything it's accelerating with reasoning models. Which means it's likely under-valued, actually. Let's not forget those numbers are just for software!
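Taking the numbers above at face value, the arithmetic looks roughly like this; the industry size and gain percentages are the figures asserted in this thread, not independent estimates:

```python
# Back-of-the-envelope version of the "over a trillion dollars of efficiency"
# claim, using only the numbers asserted above.
industry_usd = 3e12  # hypothetical "multi-trillion dollar" software industry

for gain in (0.20, 0.50):  # the quoted 20-50% efficiency range
    print(f"{gain:.0%} gain -> ~${industry_usd * gain / 1e12:.1f}T per year")

# 20% -> ~$0.6T, 50% -> ~$1.5T: the claim clears $1T/year only toward the
# upper end of that range, or with a larger assumed industry size.
```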
It's looking a lot like it'll mostly be tiny bits of value scattered around all over, amounting to quite a bit all together, but little in the way of acute disruption of... anything. Aside from ruining some things, as you note.
The supposed Jobs quip about Dropbox being a "feature, not a company" comes to mind.
The question to me really is how much value there will be for the big players creating these models, and how much value for smaller players using some open-source model to sell some project that the customers, and hopefully even the end users, get something out of...
Maybe the extraction will be with AWS, Azure and GCP... After all, hosting is the most realistic place to generate costs that must be paid.
"better search / information retrieval" - this is only temporary until people figure out either the SEO equivalent to get into models or AI agents start charging for product placement etc as they will absolutely need to add to their revenue model.
Its the equivalent of all new tech that comes out of the valley -- amazing for early adopters etc then as it starts having to move past the user growth phase and into the monetizing phase the product loses its lustre and create space for new competitors/tech following the same paradigm. Rinse and repeat.
It's really hard to accurately assess the possibilities granted by LLMs, because they just feel like they have so much potential.
But ultimately I think Satya Nadella is right. We can speculate about the potential of these technologies all we want, but they are now here. If they are of value, then they'll start to significantly move the needle on GDP, beyond just the companies focused on creating them.
> It's really hard to accurately assess the possibilities granted by LLMs, because they just feel like they have so much potential.
I agree, but this feeling isn't anything new! This was the verbatim argument people used when insisting that blockchain was going to be a transformative technology. Bitcoin increased in value so dramatically and so quickly that it just felt like there had to be something valuable in the underlying technology, and corporations collectively threw billions at it because they wanted to be the ones to exploit it first.
It's very easy to look back and say "well of course it's different this time," but the exuberance for blockchain at the time really was very close to current AI hype levels. It's quite possible that AI becomes a useful tool in some relatively niche scenarios like software development while never delivering on the promise of replacing large swathes of skilled labor.
>> It's really hard to accurately assess the possibilities granted by LLMs, because they just feel like they have so much potential.
> I agree, but this feeling isn't anything new! This was the verbatim argument people used when insisting that blockchain was going to be a transformative technology.
Not saying you're doing this, just a random thought: it's funny seeing how much effort is spent trying to predict the trajectory of LLMs by comparing them to the trajectories of other technologies. Someone will say that AI is like VR or blockchain, and then someone else will chime in that we're actually in the early days of something like the internet, etc, etc..
It's like, imagine I wanted to know if my idea was any good. Thinking about the idea itself is hard, instead I'll explain it to you and look at your reaction. Oh no, that's the face of someone who just had NFTs explained to them, I should halt immediately.
Of course it's totally different this time. We are solving a lot of problems right now with LLMs, and pushing forward a lot of stagnant areas like computer vision.
So, yeah, this time is different. The chat bots are just the tip of the iceberg
I worry about the cultural shift in Tech to "what have you done for me lately" over patient innovation. Due to no more ZIRP, due to a shift to very top-down management, narcissistic CEO bros, and the new focus to please investors over all else... There's little appetite for actual innovation, which would require IMO a different culture and much more trust between management and employees. So instead, there's top-down AI death marches to "innovate" because that's the current trend.
But who is DEFINING the trend? Who is actually trying to stand out and do something different?
There are glimmers of hope in tiny bootstrapped startups now. That seems to be the sweet spot: not needing to obsess about investor sentiment, and instead focusing on being lean and having a small team with the trust to actually try new things. Though this time with a focus on early profitability, where they can dictate terms to investors, not the other way around.
If I look at Nvidia stock from mid-June of last year, or the IYW index (Apple, Microsoft, Facebook, Google) - NVDA is down 10%, IYW is down maybe 2-3%. It doesn't feel like I'm in the middle of a huge bubble like, say, the beginning of 2000.
> But the reaction to Stargate was muted as Silicon Valley had turned its attention west. A new generative AI model called DeepSeek R1, released by the Chinese hedge fund High-Flyer, sent a threatening tremor through the balance sheets and investment portfolios
You gotta love how this paragraph reads like an unfolding battle scene from a Tolkien novel.
I can't say for all possible implementations, but IMO (from industry experience) the content and consumer-focused benefits of AI/LLMs have been very much over-hyped. No one really wants to watch an AI-generated video of their favorite YouTuber, or pay for an AI-written newsletter. There is a ceiling to the direct usefulness of AI in the media industry, and the successful content creators will continue to be personality-driven and not anonymously generic.
Whether that also applies to B2B is a different question.
That might be true, but GenAI for banner optimization is getting huge investments in BigTech. The company that's not doing it will likely bite the dust, seriously. You may say it's a sad state of affairs, but this will pull hard on the tech you're talking about (albeit with much less romantic outcomes).
What do you mean by banner optimization specifically? Ad images and video thumbnails, etc?
In that case, I don't disagree – genAI is still useful for this "side" work like thumbnails, idea generation, and so on. But I don't think it will be used much for content directly, as has been suggested by a ton of big tech companies.
I think AI is a bubble in the same sense that the airline industry is.
The airline industry is notoriously hard to run profitably (I heard this in The Intelligent Investor).
So just because something is useful doesn't necessarily mean that it's profitable, yet the VCs are funding it expecting such profitability, without any signs of true profitability so far.
I mean, yes, AI is profitable, but most of the profitability doesn't come from real use cases; rather,
the majority of the profitability comes from the "just slap an AI sticker on it to increase your company valuation" effect, and that's satisfying the VCs right now. But they want returns as well.
And if, by definition, their return is that a bigger fool / bigger VC is going to fund the AI company at a higher valuation with very little or no profitability, then THAT IS A BUBBLE.
But being a bubble doesn't mean it doesn't have its use cases. AI is going to be useful, it's just not going to be THAAT profitable, and the current profits are a bubble / show the characteristics of one.
That's a great article and a lot of the comments seem to resonate with the article. But somehow this is disappearing from the front page faster than anything else, it's hard not to think that "this is bad for business, so it must go"...
In the last few years we have seen unprecedented progress in AI. Relatively recently, LLMs like ChatGPT were regarded as pure science fiction. Current text-to-image models? Unthinkable. And then people still try to argue that it is just a bubble. People have the concerning tendency not to learn from evidence they previously judged as being extremely unlikely. The evidence is now clearly indicating that humanity is on the cusp of developing superhuman general intelligence. The remaining time is probably measured in years rather than decades or centuries.
"It's evident that while AI presents transformative potential, the surrounding financial speculation warrants caution. The challenge lies in distinguishing between genuine technological advancements and market hype. As the industry evolves, a balanced approach that values innovation while remaining vigilant about speculative investments will be crucial to navigate the AI landscape effectively."
{comment by ChatGPT after reading the article and all the comments here}
I think the biggest evidence of the bubble can be seen in job postings. When there is little diversity in the skills being asked for, that is a very dangerous position for a sector. Imagine a town where every job posting was related to the horseshoe industry, with Henry Ford approaching on the horizon. Not only would there be little new work available after a disruption, there is little available for you at the moment to get experience in some different framework.
To say nothing at all of what the White House is doing, which makes everything more precarious as companies get flighty due to economic instability.
I tend to agree with the article, but I do wonder if the operating costs of AI companies will decrease if they incorporate the more efficient methods of R1 and stop building so many fucking data centers.
I also expect AI to incorporate ads at some point, once they exit the dreamy phase that early tech products always go through. I know Sam says he doesn't want to, but they only have so much runway. Eventually they will rationalize their ads as fundamentally different - a consumer assistant, if you will.
I generally agree. I think we're in a hype cycle and there will be a correction. I'm hopeful that people will be more realistic about it being a tool and not a panacea. For OpenAI, though, they have to talk up AGI and Universal Basic Compute because of the capital they've raised and their unrealistic valuation.
Curious to see how open weight international models eat into OpenAI's first-mover moat. Until hardware is cheaper and more commodity—which requires crossing the CUDA moat—open weight will be a fun toy, but not a serious production commodity.
The dot com bubble from around 1995 to 2000 was huge. I started working in 1997 and we thought it would just keep going but then the bubble popped. Thousands of people lost jobs. Stock market dropped way down. But that doesn't mean that the Internet was useless.
I see AI like that. In my view there is absolutely an AI bubble and it does have real world uses but it is way over hyped right now. I say that as someone working on AI chips.
It requires only a little bit of imagination to figure which industries are most likely to be disrupted by AI. If you calculate the size of those industries, the upside is huge. Let's pick transportation and healthcare: we already have Waymo and probably in the future Tesla offering competitive autonomous drivers, and our current LLMs already surpass doctors in diagnosis accuracy. So why, I ask, why would you possibly think those industries won't be disrupted? Do you really think people will stick with mediocre doctors if they can use AI? Or that they will pay a premium for a human driver? Come on.
cratermoon | 9 months ago
You'll note that the prospect story mentions Zitron multiple times.
bfrog | 9 months ago
alephnerd | 9 months ago
In my opinion, it was a side effect of how hardware became sexy again as an investment story, though in action, the fabless players like Nvidia and the diversified players like Broadcom that could take full advantage of that "AI" and "Hardware" story because of better margins.
I've also been very open about my opinions that companies that sell a foundational model cannot build a strong long term moat, as an overtrained or specialized models will outcompete foundational models in PoCs and domain specific applications, and the actual hardware needed isn't that exhorbitant as DeepSeek has shown (that said I don't think they spent $6m - it was probably in the $20-50m range, but even then fairly reasonable for most organizations).
[0] - https://www.reuters.com/technology/artificial-intelligence/u...
bhouston | 9 months ago
That said, I expect protectionist policies will be enabled by the US government to protect them and also X.AI/Grok from foreign competition, in particular Chinese.
JesseTG | 9 months ago
danans | 9 months ago
At the extreme end, some members of the administration has already floated ideas like prosecuting users who download Chinese models.
JesseTG | 9 months ago
diggan | 9 months ago
It's a real shame OpenAI didn't succeed with their core and most fundamental mission of being open and improving humanity as a whole, where new FOSS models would have been seen as a success, not as a competitor or threat. We're all worse off because of it.
aiono | 9 months ago
You frame it like they sincerely had this mission at all. Which I doubt seriously. Why would anyone who funded them have such aim?
diggan | 9 months ago
> OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.
https://openai.com/index/introducing-openai/
Of course, they could have been lying at that point, in order to gain interest I suppose , but they were pretty outspoken about that idea at least in the beginning.
freejazz | 9 months ago
I don't see how those notions contradict each other when it's all to get funding.
bloppe | 9 months ago
freejazz | 9 months ago
Or, to put it another way, you presume that investors want to hear how they will make money, as opposed to wanting to hear that OpenAI can 'talk the talk'
kjkjadksj | 9 months ago
danans | 9 months ago
In general, you should never take anything a corporation or an extremely wealthy person says as sincere.
Like an LLM, what they say corresponds occasionally with "beliefs" but don't confuse that with holding sincere beliefs. It's mostly transactional.
The few exceptions to this rule - who are rarely the most wealthy/powerful - are generally considered traitors to their class.
lolinder | 9 months ago
We're talking about Sam Altman here. Of course he was lying from the beginning, just as he's lying to his investors now about the future of his company. I think most of us are just confused why the world seems to have perpetual amnesia when it comes to Sam Altman—he can do and say whatever he wants to get ahead and a year later people are still taking him at his word.
jbreckmckye | 9 months ago
simonw | 9 months ago
Those mission statements (ostensibly) have teeth!
dboreham | 9 months ago
neilv | 9 months ago
So I would wouldn't be surprised if the "open" and "non-profit" was just a thin PR veneer from the start.
It would also explain how values consistent with their supposed mission seem to have been zero consideration in hiring. (Given that, during what looked like a coup, almost all of those hires lined up to effectively say they would discard the values in exchange for more money.)
y1n0 | 9 months ago
xnx | 9 months ago
When customers don't know how to differentiate on quality, they use price as a signal.
diggan | 9 months ago
i_love_retros | 9 months ago
radicalbyte | 9 months ago
They're looking more like Alta Vista.
bionhoward | 9 months ago
timcobb | 9 months ago
bhouston | 9 months ago
Anthropic was founded by ex-OpenAI employees and they built an effective competitor to OpenAI as evidenced by their valuation, e.g. +$60B USD. If building a company with a +$60B valuation is considered shit, well, I guess I want to know what you built that is better?
bpt3 | 9 months ago
See anything related to NFTs as an example, or WeWork immediately sprang to mind as another one.
Did Adam Neumann build anything noteworthy, or was he charismatic and connected enough to dupe many high net worth individuals and investment firms?
root_axis | 9 months ago
infecto | 9 months ago
root_axis | 9 months ago
infecto | 9 months ago
bhouston | 9 months ago
tw1984 | 9 months ago
iOS and MacOS are apple's moat, OpenAI has Altman's big mouth and all those nonsense from that big mouth.
jjulius | 9 months ago
infecto | 9 months ago
When did they raise prices? I don't recall them ever raising prices.
> That said, I expect protectionist policies will be enabled by the US government to protect them and also X.AI/Grok from foreign competition, in particular Chinese
People love to say this but its hard to imagine which American or European businesses would be actively using models that are being run and hosted within mainland China. The risks are too great. Protectionist policies can be entirely ignored.
bhouston | 9 months ago
I am referring to their new models having absurdly high prices. GPT-4.5 is $75/150 and o1 is $15/60, whereas GPT-4o is $2.5/10.
> models that are being run and hosted within mainland China.
The models are open source so one can run them locally.
infecto | 9 months ago
That is a new product not a price increase.
> The models are open source so one can run them locally.
Again, not sure how any policies will stop opensource code from being run. At any rate those models still don't compare to O1 Pro and the full tool suite.
bhouston | 9 months ago
There is already a bill in circulation that would ban the "import" of Chinese artificial intelligence technology and you can be sure that this means you won't be able to download the DeepSeek app or download its model:
https://www.hawley.senate.gov/hawley-introduces-legislation-...
https://ca.finance.yahoo.com/news/deepseek-users-could-face-...
The stated penalties are up to "20 years in prison and a $100 million fine."
This bill likely is an early one but this is the direction that many want to head in.
infecto | 9 months ago
The first part is clear, but my thesis is that no Western company would be downloading a DeepSeek app in the first place.
The second part is much more ambiguous, and unfortunately, the bill lacks specificity. I’d be surprised if it made it very far, and even more surprised if any of the hyperscalers supported it. It doesn’t do much to protect their moa because again, no one in the Western world is going to run a Chinese-hosted model anyway. And without clearer language, it could end up blocking the use of foreign research that could actually strengthen U.S. companies.
bhouston | 9 months ago
OpenAI wants to ban the use of DeepSeek models: "As with Huawei, there is significant risk in building on top of DeepSeek models in critical infrastructure and other high-risk use cases given the potential that DeepSeek could be compelled by the CCP to manipulate its models to cause harm." from: https://cdn.openai.com/global-affairs/ostp-rfi/ec680b75-d539...
> I’d be surprised if it made it very far, and even more surprised if any of the hyperscalers supported it
OpenAI is calling for legislation:
https://techcrunch.com/2025/03/13/openai-calls-deepseek-stat...
> no one in the Western world is going to run a Chinese-hosted model anyway
AWS offers DeepSeek already:
https://aws.amazon.com/blogs/machine-learning/deepseek-r1-mo...
infecto | 9 months ago
> OpenAI is calling for legislation:
OpenAI is not a hyperscaler.
I am not disagreeing but I think (but could be wrong) that most of this is a moot point over the medium-term.
olalonde | 9 months ago
> We are building safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.
> Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity.
Source: https://openai.com/about/
elorant | 9 months ago
MaxPock | 9 months ago
"Hey, Eritrea, you're authoritarian—you can't use our democratic AI until you democratize." "Hey, Saudi Arabia and Qatar, you're not authoritarian—you can have our AI."
Once again, thank you, Chairman Xi, for saving us from this nonsense.
bionhoward | 9 months ago
apples_oranges | 9 months ago
nntwozz | 9 months ago
Places I wouldn't really like to drive myself today because of the difficulty (Italy etc.).
For many years he's been using GPS to assist him, first TomTom then iPhone and now his new car has CarPlay.
He's a much worse driver today, constantly distracted, constantly looking at the screen instead of his surroundings and actually learning, using his brain.
I myself like to ride MTB in the woods, I know all the trails and small features in the back of my head. I'm aware of my surroundings and it just sticks without any effort over the years.
I have friends I ride with who use a trail map on their phone or bike computer, many years later they still don't know the trails because they're always looking at their devices.
Sometimes we stop to "have a look" where the fork in the trail goes, drives me mad when it interrupts the flow of the ride.
This is what AI feels to me in many ways.
It's a double-edged sword.
aoeusnth1 | 9 months ago
dest | 9 months ago
(I could ask chatgpt, I ask you instead =p)
PS: Claude 3.5 Haiku take on it: "For Epictetus, the process of thinking is more important than the result. By outsourcing mental work, people surrender their most valuable tool for self-improvement - their own mind."
strict9 | 9 months ago
It makes my work more productive, yes. But it often slows me down too. Knowing when to push back on the response you get is often difficult to get right.
This quote in particular:
>Surveys confirm that for many workers, AI tools like ChatGPT reduce their productivity by increasing the volume of content and steps needed to complete a given task, and by frequently introducing errors that have to be checked and corrected.
This sort of mirrors my work as a SWE. It does increase productivity and can reduce lead times for task completion. But requires a lot of checking and pushback.
There's a large gap between increased productivity in the right field in the right hands vs copying and pasting a solution everywhere so companies don't need workers.
And that's really what most of these AI firms are selling. A solution to automate most workers out of existence.
jbreckmckye | 9 months ago
tartoran | 9 months ago
bwestergard | 9 months ago
I've noticed that when I detect chatbot-style code in a coworker's PR, I often find subtle problems. But it's harder for me to spot the problems in code I got out of a chatbot because I am primed by the very activity of writing the prompt to see what I desire in the output.
bluefirebrand | 9 months ago
If I had to guess, people feel more productive because they are doing less of the work they are used to and more review / testing, but to reach the same level of confidence the review takes much longer
And the people who are not doing thorough review are producing absolute garbage, and are basically clueless
h4ny | 9 months ago
Nobody I follow (including some prominent bloggers and YouTubers) claiming productivity increase is recording or detailing any workflow or showing real world, non-hobby (scalable, maintainable, readable, secure, etc.) workflows of how to do it. It's like everyone who "knows what they are doing" is hiding what the secret sauce for a competitive edge or that they are all just mediocre SWEs hyping AI up and lying because it makes them more money.
Even real SWEs in large companies I know can't really seem to tell me how their productivity is increasing, and when you dig deeper it always seem to be well-scoped and well-understood problems (which is great, but doesn't match the level of hype and productivity increase that everyone else is claiming) -- and they still have to be very careful with reviewing (for now).
It's almost like AI makes SWE brains go mush and forget about logic and data.
simonw | 9 months ago
nerdponx | 9 months ago
For programming, meh, it helps when I'm really tired and don't want to read documentation. Can't imagine using it in a serious capacity for writing code except in a huge codebase, where I might want it to explain to me how things fit together in order to make some change or fix a bug.
MattSayar | 9 months ago
We're just getting started with AI, and we're still "stuck" in the chat interfaces because of the storming success of ChatGPT a few years ago. Cursor, GitHub Copilot etc. are cool but they're still "launch titles" to continue my analogy from above.
New models are still coming out (but slowing down) with increased capabilities, context windows, etc. and I'm sure the killer app is still waiting to be unearthed. In the meantime, I'm having a lot of fun building my hobby code. Collectively, we're going to morph that into something more scalable and enterprisey, it's just a matter of time.
ohgr | 9 months ago
If the developer writes 6,000 lines of utter dog shit with AI that causes your customers to leave, well.
abalashov | 9 months ago
fc417fc802 | 9 months ago
bobbiechen | 9 months ago
Those notes: https://bobbiechen.com/blog/2020/5/28/the-making-of-semaphor...
Regardless of the speed, it certainly felt easier because I didn't have to think as hard, and maybe that extra freshness would improve productivity for later tasks. I wonder if there's any effect like that for AI coding tools - it makes you happier to be less tired.
spacemadness | 9 months ago
nyarlathotep_ | 9 months ago
basically you "pay cognitively" up front (building an understanding from/while doing) or later (when you have to troubleshoot something in a largely LLM-generated tangle.
Basically it moves from--"oh yeah I wrote something with this schema earlier" to "I saw some DB code fly out around an hour ago; maybe it's there. Where was it? `grep models ./src` wait, was it in `db` and other silly stuff like that.
No free lunches or whatever.
I'm not an extensive LLM user for programming and remain mostly agnostic on overall uses for development (sure, with a brand-new React thing you're sailing, but a huge old crusty codebase, even in a language well-represented in the training set, is a LOT less promising IME).
However, there are use cases I'd previously have put off until the last possible minute that I now hand to LLMs: CloudFormation, various utility bash scripts, simple AWS Lambda functions, and other things I consider annoying chores. For me, these cases alone have been an unambiguous victory.
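To give a sense of scale, the chores I mean are roughly this size. A minimal, hypothetical example of an "annoying chore" Lambda (the names and event shape are just my illustration, not anything specific I've shipped) -- the sort of boilerplate an LLM drafts in one pass and that takes a minute to review:

    # Hypothetical API Gateway proxy handler: echoes back a JSON greeting.
    import json

    def lambda_handler(event, context):
        # API Gateway proxy integrations pass the request body as a string
        body = json.loads(event.get("body") or "{}")
        name = body.get("name", "world")
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"message": f"hello, {name}"}),
        }

    if __name__ == "__main__":
        # quick local smoke test with a fake event
        print(lambda_handler({"body": json.dumps({"name": "dev"})}, None))

Nothing clever, which is exactly why handing it off and skimming the result feels like a clear win.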
simonw | 9 months ago
I'm pretty suspicious of that survey (the one that always gets cited as proof that Copilot makes developers less productive, which then inevitably gets used to argue that all generative AI makes developers less productive): https://resources.uplevelteam.com/gen-ai-for-coding
If I was running a company like https://uplevelteam.com/ that sells developer productivity metrics software, one of the smartest marketing moves I could make would be to put out a report that makes a bold, contrarian claim about a hot topic like AI coding productivity. Guaranteed to get a ton of press coverage from that.
Is the survey itself any good? I filled in the "request a copy" form and it's a two page infographic! Precious few confirmable details on how they actually ran it: https://static.simonwillison.net/static/2025/uplevel-genai-p...
Here's what they say about their methodology:
> Metrics were evaluated prior to implementation of Copilot from January 9 through April 9, 2023 versus after implementation from January 8 through April 7, 2024. This time period was selected to remove the effects of seasonality.
> Data on Copilot access was provided to Uplevel Data Labs across several enterprise engineering customers for a total of 351 developers in the TEST group (with Copilot access) and 434 in the CONTROL group (without Copilot access). The developers in the CONTROL group were similar to those in the TEST group in terms of role, working days, and PR volume in each period.
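For what it's worth, that description reads like a straightforward before/after comparison across a test group and a control group, i.e. a difference-in-differences. A minimal sketch of that calculation, with invented numbers since the infographic doesn't publish any:

    # Hypothetical figures, purely to illustrate the study design described
    # above (the infographic does not publish the underlying numbers).
    pr_throughput = {
        # group: (mean merged PRs per dev, Jan-Apr 2023, Jan-Apr 2024)
        "test": (11.2, 11.5),     # 351 devs with Copilot access
        "control": (10.9, 11.4),  # 434 devs without Copilot access
    }

    test_delta = pr_throughput["test"][1] - pr_throughput["test"][0]
    control_delta = pr_throughput["control"][1] - pr_throughput["control"][0]
    diff_in_diff = test_delta - control_delta

    # Positive means the Copilot group improved more than the control group.
    print(f"test {test_delta:+.1f}, control {control_delta:+.1f}, "
          f"diff-in-diff {diff_in_diff:+.1f}")

Without the underlying figures (or any error bars) there's no way to tell whether a result like that is signal or noise, which is exactly the problem with a two-page infographic.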
strict9 | 9 months ago
But it's hazy and hard to measure. I am very rarely stuck on hard problems like I was 3+ years ago. But I lose time in other ways. I've never measured it so it's really just a feeling.
I am also skeptical when a company sponsors or puts out a report directly related to something they're selling.
pcthrowaway | 9 months ago
Interestingly, AI tooling in its current form may be making people collectively less productive, even if individually it might make one somewhat more productive at very specific tasks.
rightbyte | 9 months ago
I am stuck on easy problems. Either bugs that are obvious afterwards or cooperation problems. Almost never hard problems.
cmrdporcupine | 9 months ago
Uppity Google employees raising a stink and not doing their job because of CEOs sexually harassing employees? Or passing around petitions about making weapons or unethical AI? Sharing their compensation package #s to improve bargaining position or deal with perceived injustices around salary differentials?
They only get away with that because they have bargaining power that management would dearly like them not to have.
The aggressive pivot to "AI" LLM tools is the desperate move of a managerial class sick of our uppity shit.
nerdponx | 9 months ago
Where AI has really improved my productivity is in acting like a colleague I can talk to. "I'm going to start working on X: how would you approach it? what should I know that isn't obvious from the beginning?" or "I am thinking about using Y approach for X problem but I don't see a lot of literature about it. Is there a reason it's not more common?".
These "chats" only take 10-30 minutes and have already led me to learn a bunch of new things, and helps keep me moving on projects where in the past I'd have spent 2-3x as long working through ideas, searching for literature, and figuring things out.
The combination of "every textbook and journal article in existence" with "more or less understands every topic in its training data" is incredibly powerful, especially for people who either didn't do a lot of formal school training or haven't had time to keep up with new developments.
Beginners can benefit from this kind of interaction too, they'll just be talking about simpler topics (which the bot would do even better with) instead of esoterica.
jbreckmckye | 9 months ago
I don't need anything esoteric here, I just need a source for the essentials that isn't SEO spam and doesn't assume I'm an absolute moron.
intended | 9 months ago
Had to review submissions to a conference. You had to pry open a thick rind of words to get at seeds of meaning spread all over the place, and then reconstruct the points being made. Wordy, complex, and tiring to analyze.
Dumping it into ChatGPT to get answers was an act of frustration, and the output made you more frustrated. It gave you more words, but didn’t help with actual meaning, unless you just gave in and assumed it was accurate.
It’s making the job of verification harder, and the job of content creation easier. This is not to society’s larger benefit, since the more challenging job is verification.
I shudder to think what is happening with teachers and college at this point.
formerphotoj | 9 months ago
Personally I think there will be things that gen-AI is useful for, such as rapid education for beginners and learning or performance feedback mechanisms. Those use cases are promising, and still in development. Hopefully they'll be cost-effective as well.
greatpostman | 9 months ago
lompad | 9 months ago
IQ != intelligence
Anyway, aside from that - if it's not a bubble, there should be a sustainable business model, meaning, one which doesn't lose you ten times the money you make. So, where is it? Which company, besides nvidia, has actually created a sustainable business model based on AI? Where is its moat?
DeepSeek showed us that right now, OpenAI doesn't have any moat to speak of and can be essentially copied. Not exactly great for future profits.
bluefirebrand | 9 months ago
NVidia's business model is not based on AI, is it? They are selling hardware. Their business success is "during a gold rush, sell shovels", isn't it?
gruez | 9 months ago
>IQ != intelligence
Pot, meet kettle.
IQ might not actually measure intelligence, whatever that might mean, but it's highly correlated with various things that are generally agreed to be indicators of intelligence, e.g. educational attainment or performance on standardized tests. For something as woolly as "intelligence", IQ is pretty much as close as we can get, short of claiming "it can't be measured exactly, so we're not even going to try to quantify it". Moreover, it's pretty obvious that the parent commenter is using "IQ" as shorthand for intelligence, not referring to the results of a test that has to be administered by a trained professional and whose actual value almost nobody knows.
The commenter you replied to made a bad take, but you're basically doing the very thing you're trying to decry, by trying to viciously attack him with accusations of "a lack of knowledge and a desperate attempt to make yourself feel superior to others".
gausswho | 9 months ago
Using IQ in this fashion, as with many initialisms, can be a means of obfuscating biases, ambiguating, dog whistling, covering one's ass, preening, and discounting nuance. Yes, that is all quite pedantic. But I still judge when I see people use them as crutches of (in)articulate communication.
If you have something to say, say it clearly and precisely and embrace the nuance.
alabastervlog | 9 months ago
It continues to be true that the spaces for which it's best-suited to having a great effect on productivity are harmful ones, like spam and scams.
rsynnott | 9 months ago
greesil | 9 months ago
kshri24 | 9 months ago
I have some Tulips to sell you
Imustaskforhelp | 9 months ago
I just feel like you are a troll, not gonna lie. And I don't even want to feed your brittle ego.
It just feels like you have some hate that you carry.
Care to elaborate where this hate comes from? Did you have a bad day today or what?
Anyway, hoping that you give up the path of hate and accept the path of giving constructive criticism.
Embrace peace. Reduce Hate.
Imustaskforhelp | 9 months ago
ausbah | 9 months ago
jbreckmckye | 9 months ago
https://www.wheresyoured.at/core-incompetency/
alephnerd | 9 months ago
A lot of "AI" investments in public markets were basically a bundle of fabless players (Nvidia), data center adjacent players (Broadcom), and some applications, whereas private/VC investment was bifurcated between foundational model players (OpenAI, Anthropic), application or AIOps oriented players (ScaleAI, Harness, etc), and rebranded hardware vendors (Cerebras, CoreWeave, Groq, Lambda).
If by AI he means foundational model companies, then I absolutely agree that there is a need to be bearish about their valuations, because they can constantly be outcompeted by domain specific or specialized models.
Furthermore, a lot of the recent "AI" IPO hype is around what are functionally fabless hardware companies like CoreWeave or Cerebras, which were previously riding high because of the earlier boom in semiconductor and chip investments.
I've been fairly open about this opinion at my day job.
jbreckmckye | 9 months ago
alephnerd | 9 months ago
I personally think there is more risk on the commodity compute side, because the projected multiples for a Cerebras or CoreWeave are imo unrealistic in such a margins-driven market, and the DC funding boom has started lagging over the past couple of months, as there appears to be a bit of an overcapacity issue in compute - imo very similar to the mid-2000s with servers and networking players like Dell, EMC, Cisco, etc.
That said, I have a similar opinion about foundational model companies as well.
MrBuddyCasino | 9 months ago
jbreckmckye | 9 months ago
MrBuddyCasino | 9 months ago
beezlebroxxxxxx | 9 months ago
formerphotoj | 9 months ago
Sort of surprised this hasn't been taken down, actually.
fullshark | 9 months ago
Until a new architecture / approach takes root at least.
mountainriver | 9 months ago
It’s easily worth trillions with just normal speculation.
jjulius | 9 months ago
Do you have any data that supports this assertion and the associated upward trend?
mountainriver | 9 months ago
https://github.blog/news-insights/research/research-quantify...
https://www.infoq.com/news/2024/09/copilot-developer-product...
https://www.mckinsey.com/capabilities/mckinsey-digital/our-i...
https://intuitive.cloud/blog/delve-into-the-depths-of-amazon...
https://www.hbs.edu/ris/Publication%20Files/24-013_d9b45b68-...
https://arxiv.org/pdf/2303.17125
This isn't slowing down either; if anything it's accelerating with reasoning models, which means it's likely undervalued, actually. Let's not forget those numbers are just for software!
alabastervlog | 9 months ago
The supposed Jobs quip about Dropbox being a "feature, not a company" comes to mind.
Ekaros | 9 months ago
Maybe the extraction will be with AWS, Azure and GCP... After all, hosting is the most realistic place to generate costs that must be paid.
boringg | 9 months ago
It's the equivalent of all new tech that comes out of the valley -- amazing for early adopters, etc. Then, as it starts having to move past the user-growth phase and into the monetizing phase, the product loses its lustre and creates space for new competitors/tech following the same paradigm. Rinse and repeat.
lukev | 9 months ago
But ultimately I think Satya Nadella is right. We can speculate about the potential of these technologies all we want, but they are now here. If they are of value, then they'll start to significantly move the needle on GDP, beyond just the companies focused on creating them.
If they don't, then it's hype.
mjr00 | 9 months ago
I agree, but this feeling isn't anything new! This was the verbatim argument people used when insisting that blockchain was going to be a transformative technology. Bitcoin increased in value so dramatically and so quickly that it just felt like there had to be something valuable in the underlying technology, and corporations collectively threw billions at it because they wanted to be the ones to exploit it first.
It's very easy to look back and say "well of course it's different this time," but the exuberance for blockchain at the time really was very close to current AI hype levels. It's quite possible that AI becomes a useful tool in some relatively niche scenarios like software development while never delivering on the promise of replacing large swathes of skilled labor.
hatefulmoron | 9 months ago
> I agree, but this feeling isn't anything new! This was the verbatim argument people used when insisting that blockchain was going to be a transformative technology.
Not saying you're doing this, just a random thought: it's funny seeing how much effort is spent trying to predict the trajectory of LLMs by comparing them to the trajectories of other technologies. Someone will say that AI is like VR or blockchain, and then someone else will chime in that we're actually in the early days of something like the internet, etc, etc..
It's like, imagine I wanted to know if my idea was any good. Thinking about the idea itself is hard, instead I'll explain it to you and look at your reaction. Oh no, that's the face of someone who just had NFTs explained to them, I should halt immediately.
Malcolmlisk | 9 months ago
So, yeah, this time is different. The chat bots are just the tip of the iceberg.
softwaredoug | 9 months ago
But who is DEFINING the trend? Who is actually trying to stand out and do something different?
There are glimmers of hope in tiny bootstrapped startups now. That seems to be the sweet spot: not needing to obsess about investor sentiment, and instead focusing on being lean and having a small team with the trust to actually try new things. Though this time with a focus on early profitability, where they can dictate terms to investors, not the other way around.
Ologn | 9 months ago
danans | 9 months ago
You gotta love how this paragraph reads like an unfolding battle scene from a Tolkien novel.
keiferski | 9 months ago
Whether that also applies to B2B is a different question.
whiplash451 | 9 months ago
keiferski | 9 months ago
In that case, I don't disagree – genAI is still useful for this "side" work like thumbnails, idea generation, and so on. But I don't think it will be used much for content directly, as has been suggested by a ton of big tech companies.
whiplash451 | 9 months ago
Imustaskforhelp | 9 months ago
The airline industry is notoriously hard to make profitable. (I heard this from The Intelligent Investor.)
So just because something is useful doesn't necessarily mean that it's profitable, yet the VCs are funding it expecting such profitability without any signs of true profitability so far.
I mean, yes, AI is profitable, but most of the profitability doesn't come from real use cases; rather, the majority comes from "just slap an AI sticker on it to increase your company valuation", and that's satisfying the VCs right now. But they want returns as well.
And if, by definition, their return is that a bigger fool / bigger VC is going to fund the AI company at a higher valuation with very little profitability, then THAT IS A BUBBLE.
But being a bubble doesn't mean it doesn't have its use cases. AI is going to be useful, it's just not going to be THAT profitable, and the current profits are a bubble / show the characteristics of one.
h4ny | 9 months ago
cubefox | 9 months ago
Nazzareno | 9 months ago
kjkjadksj | 9 months ago
To say nothing at all of what the White House is doing, which makes everything more precarious as companies get flighty due to economic instability.
abalashov | 9 months ago
biophysboy | 9 months ago
I also expect AI to incorporate ads at some point, once they exit the dreamy phase that early tech products always go through. I know Sam says he doesn't want to, but they only have so much runway. Eventually they will rationalize their ads as fundamentally different - a consumer assistant, if you will.
Havoc | 9 months ago
I do think OpenAI is in deep trouble. They’re ahead but not nearly enough to justify their lofty position.
Spartan-S63 | 9 months ago
Curious to see how open weight international models eat into OpenAI's first-mover moat. Until hardware is cheaper and more commoditized (which requires crossing the CUDA moat), open weight models will be a fun toy, but not a serious production option.
lizknope | 9 months ago
I see AI like that. In my view there is absolutely an AI bubble and it does have real world uses but it is way over hyped right now. I say that as someone working on AI chips.
enraged_camel | 9 months ago
Tulips also had real life impact and utility: they looked and smelled good!
rafaelero | 9 months ago
tim333 | 9 months ago
Has anyone seen any mention of this? I couldn't find it googling.
pwndByDeath | 9 months ago