I don’t think that it’s actually true that ISPs are liable if they don’t take down users. I think they go along with it because (1) it’s easier than arguing (2) those users are using a lot of bandwidth and the piracy forms a handy excuse.
My guess would be that a lot of people have an understandable dislike of anything that singles out the Democrats as the reason for Trump, as opposed to blaming Trump, the spineless Republicans as a whole, the media, the voters, the Russians, the educational system, the legalized corruption of our government as a whole, or any number of other institutional or individual factors that might have shit the bed and let Hitler come to power in the US. Blaming Democrats is the only target out of those that also depresses support for the most likely bulwark against him in our fairly sad system, and it often seems to come from suspect sources. But you’re not wrong.
Not true, they also started a bunch of limp wristed investigations with agonizing, unforgivable slowness, and pardoned a bunch of their friends when it inevitably didn’t work.
Yeah, and a bunch of people voted for him. They did that because a bunch of social media psyops convinced them of a bunch of nonsense. So I don’t think they want what he will do. But, he wouldn’t be wrong to conclude he has a mandate to break the law. I think if you asked the average Trump voter if Trump can break the law, they would say, “Of course.”
Hey, can you summarize the core argument of this for me? Specifically with an eye to its broader implications for the AI sector and the technology industry as it currently stands.
I’ll analyze the core argument of this piece and its broader implications.
The article’s central argument is that DeepSeek’s recent breakthrough has exposed fundamental flaws in how Western tech companies, particularly OpenAI and Anthropic, have approached AI development. The author contends that these companies weren’t actually innovating but rather perpetuating what they call a “Rot Economy” - a system where companies burn through massive amounts of capital while creating an illusion of inevitable progress and necessary scale.
The key revelation isn’t just that DeepSeek created more efficient models, but that Western AI companies apparently never seriously tried to do so. The author argues this wasn’t because they couldn’t, but because they had no incentive to - they were operating in a system that rewarded spending and scale rather than efficiency and innovation. They had convinced investors and the market that the only path forward was through ever-larger models requiring ever-more expensive infrastructure.
The broader implications about the current state of the technology industry are particularly damning:
- The modern tech industry, particularly in AI, has become fundamentally anti-innovative. Rather than solving real problems efficiently, it’s oriented around maintaining narratives that justify continued capital accumulation.
- Silicon Valley has lost its “scrappy builder” spirit. Instead of finding clever solutions to constraints (as DeepSeek did), major companies simply throw money and resources at problems.
- The venture capital and tech media ecosystem has become complicit in this dynamic, uncritically accepting narratives about necessary scale and inevitable progress while failing to ask basic questions about efficiency and practical value.
The author suggests this represents a deeper crisis in American tech innovation - the industry has become dominated by what they call “management consultants that lead teams of engineers to do things based on vibes” rather than genuine technological problem-solving. This has created an environment where companies compete not on actual innovation or efficiency, but on their ability to raise and spend capital while maintaining compelling narratives about future potential.
The piece concludes that much of the current AI boom has been essentially a confidence game - not because the technology doesn’t work, but because the companies developing it have been more focused on maintaining narratives that justify their continued fundraising than on creating genuinely valuable and sustainable products. DeepSeek’s achievement matters not just because it created more efficient models, but because it demonstrated that the emperor had no clothes - that the massive infrastructure and spending of Western AI companies wasn’t actually necessary for technological progress.
This suggests a potential inflection point for the tech industry, where the narrative-driven, capital-intensive model of innovation may be reaching its limits, potentially forcing a return to more genuine technological problem-solving and efficiency-driven development.
My AI summarizer is superior to your AI summarizer. 😃
He thinks everyone in the tech industry is a moustache-twirling villain and always ascribes malice where incompetence would do.
Here’s him talking about people from the tech industry:
Nevertheless, Thompson (who I, and a great deal of people in the tech industry, deeply respect)
Every single article I’ve read about Gomes’ tenure at Google spoke of a man deeply ingrained in the foundation of one of the most important technologies ever made, who had dedicated decades to maintaining a product with a — to quote Gomes himself — “guiding light of serving the user and using technology to do that.”
Back to quoting you:
There is very minimal evidence for literally EVERYTHING he writes about in this article. The whole talk of them working around the GPU restrictions also has incredibly minimal evidence and is just a rumour.
We flat out do not know how they trained Deepseek’s model.
Correct. We do not know the training data, which makes it silly to decide that it is definitely cribbed from OpenAI’s model. What we do know is how the code works, because it is open and they wrote a paper. What would you consider “evidence,” if not the actual code and then a highly detailed explanation from the authors about how it works, and then some independent testing and interpretation by known experts? Do you want it carved on a golden tablet or something?
I think I’m done with this conversation. You seem very committed to simply repeating your point of view at me. You’ve done that, so I think we can go our separate ways.
Look up the definition of the word cynical. It means, more or less, asserting that no one is motivated by sincere integrity. Accusing some specific people of lacking integrity, while holding up others as good examples of integrity that everyone should aspire to, is the opposite of cynicism.
He doesn’t address very much the idea that DeepSeek “distilled” their model from OpenAI’s model and others specifically because that is just a rumor with very minimal evidence for it.
OpenAI has reportedly found “evidence” that DeepSeek used OpenAI’s models to train its rivals, according to the Financial Times, though it stopped short of making any formal allegation; it did say that using ChatGPT to train a competing model violates its terms of service. David Sacks, the investor and Trump Administration AI and Crypto czar, says “it’s possible” that this occurred, although he failed to provide evidence.
Personally, I genuinely want OpenAI to point a finger at DeepSeek and accuse it of IP theft, purely for the hypocrisy factor. This is a company that exists purely from the wholesale industrial larceny of content produced by individual creators and internet users, and now it’s worried about a rival pilfering its own goods?
Cry more, Altman, you nasty little worm.
The “rumors” you say he discusses about novel ways the Chinese researchers found to outperform OpenAI are based on an extremely detailed look at their paper and their code, as interpreted by experts. The thing you’re upset he doesn’t discuss is based on rumors. He doesn’t discuss it, except to note that it’s just a rumor but would be funny if it’s true, because he is not doing what you accuse him of.
If you’re upset that he was mean to Sam Altman, so much so that you simply don’t care if he also goes deep into a lot of important details and cares about integrity enough to hate a lot on people who don’t have it, then say so. The things you are accusing him of doing are not true, though, and pretty easy to disprove if you can look honestly at his work.
Wanting a better world, and holding up a light to the current one to show the differences between what could be and what is, is not at all what “cynical” means. “Cynical” is the opposite of what you mean. “Pessimistic” or “negative” is definitely more apt, yes.
Also:
Now, you’ve likely seen or heard that DeepSeek “trained its latest model for $5.6 million,” and I want to be clear that any and all mentions of this number are estimates. In fact, the provenance of the “$5.58 million” number appears to be a citation of a post made by NVIDIA engineer Jim Fan in an article from the South China Morning Post, which links to another article from the South China Morning Post, which simply states that “DeepSeek V3 comes with 671 billion parameters and was trained in around two months at a cost of US$5.58 million” with no additional citations of any kind. As such, take them with a pinch of salt.
While there are some that have estimated the cost (DeepSeek’s V3 model was allegedly trained using 2048 NVIDIA H800 GPUs, according to its paper), as Ben Thompson of Stratechery made clear, the “$5.5 million” number only covers the literal training costs of the official training run (and this is made fairly clear in the paper!) of V3, meaning that any costs related to prior research or experiments on how to build the model were left out.
While it’s safe to say that DeepSeek’s models are cheaper to train, the actual costs — especially as DeepSeek doesn’t share its training data, which some might argue means its models are not really open source — are a little harder to guess at. Nevertheless, Thompson (who I, and a great deal of people in the tech industry, deeply respect) lays out in detail how the specific way that DeepSeek describes training its models suggests that it was working around the constrained memory of the NVIDIA GPUs sold to China (where NVIDIA is prevented by US export controls from selling its most capable hardware over fears they’ll help advance the country’s military development):
Here’s the thing: a huge number of the innovations I explained above are about overcoming the lack of memory bandwidth implied in using H800s instead of H100s. Moreover, if you actually did the math on the previous question, you would realize that DeepSeek actually had an excess of computing; that’s because DeepSeek actually programmed 20 of the 132 processing units on each H800 specifically to manage cross-chip communications. This is actually impossible to do in CUDA. DeepSeek engineers had to drop down to PTX, a low-level instruction set for Nvidia GPUs that is basically like assembly language. This is an insane level of optimization that only makes sense using H800s.
Tell me: What should I be reading, instead, if I want to understand the details of this sort of thing, instead of that type of unhinged, pointless, totally uninformative rant about the tech industry?
Sorry! I was talking to Google, not to you, I have no issue with you at all. I should have phrased it in a more clear fashion. I’m just irritated, that’s all.
No they don’t. They operate in the United States government’s jurisdiction.
I know that Trump thinks that a consequence of that is that he can simply order them to do whatever he wants, and they have to do it, but that’s not true. Or, it’s only true to the extent that people around him start to see it that way, and act on it, and start to punish people who don’t do whatever he says, regardless of what the law, or the rest of the United States government, which is considerable, has to say about it.
That’s one reason I think they’re cowards. Because the more that people mutually believe that everything Trump says is law, the more other people will go along with it and start to act on it that way. We made all this up. We pretend there’s a “constitution” that means something, we pretend “congress” is something special, not just a bunch of old guys in a building who write weird incomprehensible documents, we pretend it all means something, when at the end of the day it’s just a bunch of monkeys who run around making special noises.
And we can start to pretend Trump is king, too, if we want to. I don’t want to, and I get irritated at people who start pretending that he is. That shit’s contagious.
You FUCKING COWARDS.
What the hell happened, man.
I get why you have to do it with China. You need to move on with your life, and you’re going to be caught up in endless ridiculous squabbles, and life is short.
You KNOW you don’t actually have to do it for Trump.
You’re just deciding to. It’s like in the movie, “How in the WORLD could this guy take the money, when he knows it’s wrong.”
Why in the world, man. What happened.
I wouldn’t be so sure. China is at the world’s forefront of automated techniques to be able to spy on and manipulate people through their own devices at massive scale. If they had some semi-workable technology to fingerprint individuals through their typing patterns, in conjunction with fingerprinting the devices they were using through other means, that would make perfect sense to me.
I don’t think it is especially a concern for DeepSeek specifically, for reasons discussed elsewhere in the comments. That one particular aspect of the privacy issue is probably being overblown, when there are other adjacent privacy and security concerns that are a lot more pressing. Honestly, that detail isn’t really proven simply because it appears in the privacy policy, and even if they are doing something like that, its inclusion in this particular privacy policy or this app isn’t the notable part about it.
No, but they can manipulate the public’s perception of political reality to the point that someone gets elected who will bust your door down and kill you, because a bunch of people who don’t have time to make figuring out the news into a part-time job decided that that person would be able to make eggs cheaper and the other guy’s son was really into hookers or something, and also he was old and wasn’t “fixing the border.”
Just as a random example.
(To be clear, I don’t have any reason to think specifically that TikTok or China was involved in getting Trump elected. I’m just saying that allowing any adversary, whether that’s China or the GOP’s social media psyop department, to have control over Americans’ social media landscape will absolutely have an impact on you personally, and already has.)
Got it. Yeah, fair enough. What I was aiming to do, more or less, was ask for clarification, but I definitely see how it could come across as me trying to continue the argument when he was saying that he already agreed with me. I think you hopping into it with a big italic and bold wall of text on the thing that apparently all three of us already agree on only confused the issue further.
Anyway, sounds like we’re all on the same page. Cool.
I’m talking to someone in a privacy forum hosted outside of corporate social media who described reports about privacy violations committed by a privacy-invasive social media app as pearl-clutching and fearmongering.
I’m not sure what your deal is, here, but I’m not into it. I feel like what I said was pretty straightforward and you’re determined to gin up some kind of disagreement, where I’m supposed to say that corporate media’s reporting on privacy isn’t bad, or something.
Privacy good, corporate privacy invasion bad. Corporate media underreporting of privacy violations bad. Hopefully that makes sense, and we can agree on it. I’m not into whatever argument you’re attempting to create about it.
Well, you did say it was “pearl clutching” and “fearmongering.” My point is, they should be clutching pearls, and fear should be mongered. Arguably, at all the social media companies including TikTok.
I actually do agree that TikTok is worse, but it’s hardly the point. We can be alarmed about all of them, especially since the US ones are now in the hands of an overtly evil tyrannical government instead of merely the sociopathic profit-minded corporatocracy they were in before.
Yes. I also like how the alarming take on it is not “People are typing their passwords / medical histories / employer’s source code into ChatGPT and from there it goes straight into the training data not only to be stored forever in the corpus, but also sometimes, to be extracted at a later date by any yahoo who knows the way to tease it back out from ChatGPT via the right carefully crafted prompting!”
But instead it is “When you type things, they can see what you type! The keystrokes!”
Dude, why is this guy getting so upset about the suggestion that people should be alarmed both by TikTok and also by the malicious behavior of all the other social media companies? And that the media should report more on it? Why is he yelling so much at me for making what I thought was that fairly reasonable suggestion?
Folks like me have been voting Blue for 25 fucking years
Oh. Um… what? What does that… okay.
Edit: Oh, also, you were unnecessarily doing a bunch of obedience to the establishment if you’ve been voting blue for 25 years. Back in the Bill Clinton era, the parties really were practically indistinguishable, and there were other realistic options like Ralph Nader and Ron Paul on the table who were genuinely pretty good. They got creamed by FPTP, but right around the year 2000 was a time when almost anyone could see that the good options were not within either major party. Al Gore being a pretty obvious and rare exception. The calculus changed a lot with the last few elections, where the Republicans became such an objectively terrifying option that voting for the Democrat just so they wouldn’t get into office became a necessary strategy if you care about the country. In my opinion.
So, current precedent is that the ISPs do have to terminate, but there’s no penalty if they don’t. Is November recent enough that the ruling has actually had any impact? Did the Supreme Court decide to take up the case or not yet? How much does it mean that the ISPs “have to” terminate users, but there doesn’t seem to be a penalty if they don’t? Is the fact that there was no ruling until recently confirmation that they were doing it voluntarily, for their own reasons, before November? Or were they doing it “voluntarily” because they didn’t want to defend lawsuits like this, except Cox, which was apparently refusing to do it? I have no sure idea of the answer to any of those questions. That’s why I said “I think.” But, unlike some people on the internet, I don’t just make up some bullshit and then decide that’s what I “think” and go spouting off about it. I’m just relaying my best guess, the reasons for it, and being honest about the fact that it’s a guess.