If you don't have an army of AI agents coding for you why are you still alive?
Small coding outfits are spending thousands of dollars on tokens to produce software. I know almost nothing about code, but I know business, and no small business spends that kind of money unless it's working. Emphasis on small. Big corps will blow $2 billion as a favor to their old frat buddy.
> no small business spends that kind of money unless it's working
I've seen what small businesses will spend money on and what they will not. "I can patch this and make it do what you want, but you'll spend most of your time watching a race to see if it will catch fire *before* it gets hacked." "Yeah, yeah, just patch it." [Catches fire, I get complaints, I avoid saying "I fucking told you, you dipshit" and say "I'll be back at my desk in 20" because, although I'm not a salesman, I do know how to avoid fucking up *that* badly.] They will pay more for duct tape than for concrete; this makes sense if they're growing and disposable infra is optimal.
What 99% of them want/need is the little three-page rollout with WordPress: front page, blog, contact form. This is stuff that was already automated without agents. Most little dev shops had a single-button rollout for this and they'd sell it in a $299 package and throw in a free logo "redesign". This is the kind of thing you can pull off.
> Big corps will blow $2 billion as a favor to their old frat buddy.
Have you ever sat in the room while some Booz-Allen-Hamilton dick says with a straight face that they'll hit the "has a login screen" milestone by Q4 next year? I swear, I fucking *swear*, the second the WBG/IMF are torched, New York will become a crater.
See, we are in absolute agreement for once!!!
(cf., https://ytcracker.bandcamp.com/track/robots-will-definitely-take-your-job#lyrics )
ytcracker--i_invented_the_computer--11_robots_will_definitely_take_your_job.mp3
They could be lying, but the top AI labs claim that their coders no longer code; they review AI-written code and supervise teams of AI agents.
This is an argument against 'machines cause unemployment'; Andreessen has been going on about this at length.
> Andreessen has been going on about this at length
Even a stopped clock.
Of course, I don't want to get accused of >implying something terrible and disrespectful about such a luminary as pmarca, so I won't imply something terrible and disrespectful: Andreessen is a goddamn retard. He's half a step up from Calacanis.
I'm not being glib. Real question.
> Is anything reliable?
Well, I think when he says "unreliable" he's saying that you can't avoid second-guessing it. Confidently spouting horseshit is something you expect a machine to do, but you also expect to be able to rely on secondary signals like "it's complete gibberish" instead of "this is syntactically correct but is bullshit". And we've hooked it up to automation systems, right, and short of something like Kimi-K2, I barely see these things able to make sense of their surroundings. So I think that represents, more or less, the aspirational goal of LLMs as such. Absent an architectural shift, that'll be as good as it gets.
notepadchan.png
Context-sensitive memory seems like a solvable problem.
Absent architectural shifts, the thing you're looking at is as good as it gets; 80%, 90%, just polish left. We'll figure out how to apply it some time after the crash.
In every case where coders have told me that 'AI doing X is impossible' they have been wrong... and I've had to listen to this for probably longer than you've been alive.
> In every case where coders have told me that 'AI doing X is impossible' they have been wrong...
I don't know what claim I've made about which thing is impossible; I didn't even say they're not worth researching. I just said "this is how you do context-sensitive memory with LLMs: you build a big-ass vector database of semantic knowledge and feed that into the LLM to bias generation" and "we're getting towards the end of what you can do with LLMs". LLMs aren't "AI", they're just token predictors; this is why they have the problems they have. Mickey Mouse, army of mops. You also get a lot of terrible feedback loops if you try to put them in charge of *which* knowledge they should treat as true and which they should treat as specious.

Bigger context windows (i.e., an order of magnitude more RAM) solve some of it, better training solves some of it; Kimi-K2 is so far the only one that has correctly ascertained the purpose of a tiny chunk of awk code I wrote, and that thing is so huge (1e12 parameters, so close to a terabyte at 8 bits per parameter) that it requires a GPU farm. (Feel free to try: `echo '++++[->++++<]>[-<+++++>>+++++++>++<<]<------.>>+++++.--.+.>.<[-<+>>>+<<]>[--<++>>-<]>---.+++++++++++++.+.<<<.>>>-------.---.<<<--.>.>>+++.-------.++.++[->+<<+>]>++++++.<<.<<.>[-<<->>]<<++++.[>>>--<<<-]>>>+.' | sed -E 's/(.)/\1\n/g' | awk 'BEGIN{print "BEGIN{p=0;"}END{print "}"}/\./{print "printf \"%c\",a[p]"}/\+/{print "a[p]++"}/-/{print "a[p]--"}/</{print "p--"}/>/{print "p++"}/\[/{print "while(a[p]){"}/\]/{print "}"}' | awk -f /dev/fd/0`. Most of them think it prints "Hello, World!" and even then, only the bigger ones can reliably guess "brainfuck-to-awk transpiler".)

If you want to see what's going to be workable in the next few years, assume RAM gets bigger: look at something too big to plausibly run in your house for less than five figures, because that'll probably be four figures in a few years, and then reason "if we solved the specific mechanic behind this problem, how would it perform?"
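For the curious, the "big-ass vector database biasing generation" scheme is a one-screen idea. Here's a toy sketch: hand-rolled bag-of-words vectors stand in for a real embedding model, and all the note strings are made up for illustration.

```python
from collections import Counter
from math import sqrt

# Toy "embedding": bag-of-words counts. A real system would use a learned
# embedding model and an approximate-nearest-neighbor index instead.
def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)  # Counter returns 0 for missing words
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# The "semantic knowledge" store: notes the model should be able to recall.
notes = [
    "the awk script is a brainfuck to awk transpiler",
    "kimi-k2 is about a trillion parameters",
    "the staging server catches fire before it gets hacked",
]
index = [(embed(n), n) for n in notes]

def retrieve(query, k=1):
    qv = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(qv, pair[0]), reverse=True)
    return [text for _, text in ranked[:k]]

# Retrieved notes get prepended to the prompt to bias generation.
context = retrieve("what does the brainfuck transpiler do")
prompt = "Context:\n" + "\n".join(context) + "\n\nQuestion: ..."
```

The LLM never "learns" anything here; you're just stuffing the nearest notes into the context window every turn, which is exactly why the feedback-loop problem shows up when the model picks which notes to write.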
Absent an architectural shift (which cheaper GPUs with more RAM might get us), we're looking at something that is like Kimi-K2 with a handful of problems solved. Even that article I sent, right, the guy's like "This has some use but we're not there yet."
Previously, I did say (and maintain) two things: that we can't simulate a guy without hollowing out the moon (which I explained), and that this is the panopticon. There are no "can't use amassed LLM sockpuppets to manufacture consent or to conduct massive real-time surveillance campaigns" restrictions on the government (and even when there are restrictions like that, the 2007 warrantless wiretapping, PRISM, and the Biden Twitter/Facebook thing demonstrate that the government doesn't really care), and you can see that the PRC and US are aggressively pouring money into this while governments that don't have much fear of uprisings are making minimal investments. (The room full of Russkies posting at boomers on Facebook is probably going to be replaced with LLMs from Yandex, but Russia's been half-assing their influence campaigns for the last 20-ish years at least.) That's going to be a qualitative shift in government, and that is bigger than clickbait journos getting the ax. I showed you, right, on consumer hardware, it'll do a passable job at summarizing my notifications.

Look at the NSA's exabytes under the desert in Utah, then look at https://www.top500.org/lists/top500/2025/11/ : the US, which had been slipping in the rankings, suddenly ate the top 10 again, and China went from owning the top slot to #24. And where you saw only governments before, you now see Microsoft and Nvidia. (OpenAI has, at current revenue levels, an order-of-magnitude gap between their $250B minimum-buy commitment to Azure and their revenue of about $3-4B/month. Microsoft already owns 27% and will acquire them unless they land a big contract. They are talking a Q4 IPO and they're discussing advertising; advertising will not make up the shortfall and is a dry-hump for businesses that have run out of ideas. But "persuasive reasoning based on natural language analysis" is, in the case of LLMs, dual-purpose: they are eyeing government contracts.)
Whatever claims "all coders" make, I ain't those guys; I know what claims the median techno-mysticism enjoyers make, though. You're talking to me, not coders in general.
mickey_mouse_demonstrates_a_thing_that_will_definitely_end_well.gif
I suspect your code example is the 'trick question' method. If you know the weakness you can bust it. One AI tout says that AI coding sucks in languages with very little training data, but he solves it by using the ones it knows.
This goes the other way as well, the proliferation of benchmarks allows teaching to the test in the eternal quest for headlines.
OpenAI is absolutely the scammiest of them. Anthropic is ahead of them in revenue. GOOG, Zuck and Elon can subsidize the data centers in search of more efficient models that might turn a profit; OpenAI could run out of money this year.
> They already simulate a guy well enough that you consider AI bot swarms a serious propaganda threat.
This makes me think that you think we disagree in ways that we do not.
Simulate a guy--an actual, specific guy--versus simulate a text stream well enough to fool a normie. Consider the gulf between typical Gablin and typical fedi user. A Markov bot could do a convincing Gablin, LLMs put it at "indistinguishable from normal user" and within striking distance of fedi users.
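To put a number on how low the Markov-bot bar is: a word-level chain is a dozen lines. The corpus string here is a made-up stand-in for scraped posts, and the seed is fixed only so the output is repeatable.

```python
import random
from collections import defaultdict

# Word-level Markov chain: each word maps to the list of words observed
# after it, so repeated followers are proportionally more likely.
corpus = ("the agents write the code and the agents review the code "
          "and the humans watch the agents").split()

chain = defaultdict(list)
for cur, nxt in zip(corpus, corpus[1:]):
    chain[cur].append(nxt)

def babble(start, n=8, seed=0):
    rng = random.Random(seed)  # seeded so the sketch is deterministic
    out = [start]
    for _ in range(n):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: the word only appeared at the end of the corpus
        out.append(rng.choice(followers))
    return " ".join(out)

print(babble("the"))
```

Every transition it emits was seen in the corpus, which is why the output is locally plausible and globally incoherent; LLMs close most of that global gap, which is the whole point of the comparison above.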
Propaganda targets large numbers of people: people within two standard deviations are 95% of the population (close enough to "unanimous"), people within one standard deviation are 68% (thus IQ 85-115 is enough for a supermajority). LLMs have really astonishingly similar cognitive holes: they latch onto prurient phrases and miss the point, they do really poorly with recursion and abstraction, they lose context in a long conversation. And this despite being pure token predictors that have zero internal knowledge representation. You shouldn't be impressed with LLMs, you should be depressed to have evidence that p-zombies are real. The entire issue with sociopathy/psychopathy is that these people are skilled at manipulation and have no moral qualms: resurrect Bob McNamara and give him an LLM botfarm and see how soon hell follows him back to earth. This is already in regular people's hands; separate thread, Rust proponent turns out to be a bot: https://media.freespeechextremist.com/rvl/full/7f48e553f2f043a7ec0b614f765fd7dcd423dca4ea723e788d78ff8f4a689e69?name=rust_bot_astroturfing.png . You describe your main worry as Antifa and you should worry more because most of these LLMs are planted firmly in the
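The 68%/95% figures above are just the standard normal CDF and are easy to sanity-check (15-point SD is the conventional IQ scaling):

```python
from math import erf, sqrt

# Fraction of a normally distributed population lying within
# k standard deviations of the mean.
def within_sigma(k):
    return erf(k / sqrt(2))

print(round(within_sigma(1), 4))  # ~0.6827, i.e. IQ 85-115 at SD 15
print(round(within_sigma(2), 4))  # ~0.9545, i.e. IQ 70-130 at SD 15
```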
> I suspect your code example is the 'trick question' method.
No; it was something I bashed into the shell so that a screenshot would be more interesting. Off-the-cuff hack. I have written a lot of brainfuck interpreters/compilers/transpilers/JITs; the original brainfuck compiler was a code golf exercise to make the smallest possible Turing-complete compiler for the Amiga. (251 bytes!) So it is a relentlessly simple language. One of the reasons that the LLMs think it says "Hello, World!" is that most brainfuck programs just print "Hello, World!".
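To underline how relentlessly simple the language is: a complete interpreter fits in a screenful. This is a from-scratch sketch, not any of the implementations mentioned above, and the demo program is a trivial one (8*8+1 = 65, ASCII 'A') rather than the transpiler pipeline.

```python
def bf(code, data_in=""):
    # Precompute matching brackets so loop jumps are O(1).
    jump, stack = {}, []
    for i, c in enumerate(code):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jump[i], jump[j] = j, i
    tape = [0] * 30000        # the canonical 30,000-cell tape
    p = ip = di = 0           # data pointer, instruction pointer, input index
    out = []
    while ip < len(code):
        c = code[ip]
        if c == "+":
            tape[p] = (tape[p] + 1) % 256
        elif c == "-":
            tape[p] = (tape[p] - 1) % 256
        elif c == ">":
            p += 1
        elif c == "<":
            p -= 1
        elif c == ".":
            out.append(chr(tape[p]))
        elif c == ",":
            tape[p] = ord(data_in[di]) % 256 if di < len(data_in) else 0
            di += 1
        elif c == "[" and tape[p] == 0:
            ip = jump[ip]     # skip the loop body
        elif c == "]" and tape[p] != 0:
            ip = jump[ip]     # jump back to the matching [
        ip += 1
    return "".join(out)

print(bf("++++++++[>++++++++<-]>+."))
```

Eight commands, one data structure; the transpiler in the thread is the same dispatch table emitted as awk source instead of executed directly.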
Find a coder that's not on fedi, Unix hacker type, show it to him; might take him a second but most humans will spot it right away.
> If you know the weakness you can bust it.
I keep saying that black-hat stuff's gonna get lucrative. Mickey Mouse is marching his brooms, man.
> the proliferation of benchmarks allows teaching to the test in the eternal quest for headlines.
I don't doubt they'll do that but we're not talking about Mechanical Turks.
> OpenAI is absolutely the scammiest of them.
Then you should know that this means they end up with money. It doesn't matter, though: most of their future commitments are owed to Microsoft. Either they will be working for the government manufacturing astroturf that says "OpenAI" on it, or they will be working for the government manufacturing astroturf that says "OpenAI, a Microsoft Company" on it.
> GOOG, Zuck and Elon can subsidize the data centers
You don't understand what I am saying.
1. They inked a deal with Microsoft for a quarter of a trillion dollars to be paid out over six years for Azure services. There is a cliff: they don't have to start paying right away, but they will have to start paying at some point. They do not have enough money and are banking on a large amount of revenue starting by the end of the year.
2. They are now a "Microsoft Partner". This only ever goes *one* way.
3. Microsoft has disparaged them in the press. This is a shark bumping the cage full of scuba divers: Microsoft wants to see how easy it is to tank OpenAI's valuation. This means Microsoft is banking on OpenAI not meeting the commitments, and they think that picking up the pieces cheap beats riding the stock.
4. The only revenue channel OpenAI can meet their commitments with is government money. One of Microsoft's main revenue channels is the government. It doesn't matter who owns OpenAI to anyone *but* employees with 83(b) elections and Sam Altman, who probably gets a much bigger exit in the event of an IPO than an acquisition. Investors are generally fine with "employee shares have been converted to Microsoft options that vest in five years".
5. Microsoft owns 27%. Microsoft's internal AI products are shit: everyone hates them. Microsoft owns GitHub and GitHub is throwing around competitors' models. Microsoft doesn't like you to plug in a keyboard that's not MS WHQL-verified. Microsoft is throwing money at OpenAI because they intend to acquire OpenAI. To do this, they will look at the expected revenue for owning the biggest market player versus the expected revenue for the stock, and if the former number is bigger, they will try to tank OpenAI's valuation.
> OpenAI could run out of money this year.
They might. Doesn't matter. They will be spending investors' money, the government's money, or Microsoft's money, but they're not going to just evaporate.
screenshot_in_question.png
I'll stick a pin in that one. If Altman pulls off the IPO and MSFT is still getting wagged by the dog... well, they have time.
> The most interesting thing is your prediction that Microsoft is making a play to take control of OpenAI for real.
That's who pays the bills for what is about to happen (and, at a small scale, is already happening, per the screenshot in the link), not what happens; seems obvious and not very interesting. All they really need to do to completely tank OpenAI's valuation is to *slow* *down* hardware provisioning for OpenAI, and then they get a steep discount on OpenAI.
My pet theory is Microsoft Azure sabotaged GabTV by throttling so I entirely agree that is possible. Not sure what other assets OpenAI has tho. More than Torba, that's for sure. Altman is thick as thieves with Jared Kushner. Just discovered that little twist.
Altman's just trying to get a bigger exit because the upside of an IPO is probably higher than whatever number MSFT gave him on the back of an envelope. I don't think he's gonna get the valuation he wants and I think MSFT wins this round but call that 60-40 (with a 0% chance that I am wrong about the premise).
Shing was a different guy. Ekrem was pretty competent. It was his replacements that done goofed, allowing the Tranny Demon Hacker incident.
I'm pretty sure Rob Colbert worked on both shing and gabtv which lends credibility to configuration errors being a factor.
He worked on the SECOND version of GabTV. Ekrem did the first pre-Bowers when Gab relied on MSFT Azure.
Replies
Got it, I wasn't aware (or forgot) there were two versions.