Egregoros

Small coding outfits are spending thousands of dollars on tokens to produce software. I know almost nothing about code, but I know business, and no small business spends that kind of money unless it's working. Emphasis on small. Big corps will blow $2 billion as a favor to their old frat buddy.

@judgedread

> no small business spends that kind of money unless it's working

I've seen what small businesses will spend money on and what they will not. "I can patch this and make it do what you want but you'll spend most of your time watching a race to see if it will catch fire *before* it gets hacked." "Yeah, yeah, just patch it." [Catches fire, I get complaints, I avoid saying "I fucking told you, you dipshit" and say "I'll be back at my desk in 20" because, although I'm not a salesman, I do know how to avoid fucking up *that* badly.] They will pay more for duct tape than for concrete; this makes sense if they're growing and disposable infra is optimal.

What 99% of them want/need is the little three-page rollout with WordPress: front page, blog, contact form. This is stuff that was already automated without agents. Most little dev shops had a single-button rollout for this and they'd sell it in a $299 package and throw in a free logo "redesign". This is the kind of thing you can pull off.

> Big corps will blow $2 billion as a favor to their old frat buddy.

Have you ever sat in the room while some Booz-Allen-Hamilton dick says with a straight face that they'll hit the "has a login screen" milestone by Q4 next year? I swear, I fucking *swear*, the second the WBG/IMF are torched, New York will become a crater.
@judgedread "99% of programmers" are not the same as "99% of the people that roll out WordPress sites for small businesses". I mean, AI is replacing a guy in a room with a button that says "make another WordPress rollout". That's fine, you know? When I was consulting, I was the guy that was two steps up from that: the guy that does custom WordPress plugins and then the guy that writes the code. They can put whatever they like in a press release (and they have an incentive), but none of the "AI" (LLM) companies are firing their coders.

(cf., https://ytcracker.bandcamp.com/track/robots-will-definitely-take-your-job#lyrics )
ytcracker--i_invented_the_computer--11_robots_will_definitely_take_your_job.mp3
The pro-AI explanation is that a good coder supervising agentic AI coders is so much more productive that the companies are wallowing in new revenues from all the great new products and they want MOAR.

This is an argument against 'machines cause unemployment'; Andreessen has been going on about this at length.
@judgedread Well, that exact mechanism is not my perception but the general thrust is accurate: if you know what is supposed to go somewhere and you don't care what the thing itself looks like, you can assemble it like that. Here's a thing a guy wrote: https://blog.nishantsoni.com/p/ive-seen-a-thousand-openclaw-deploys .

> Andreessen has been going on about this at length

Even a stopped clock.

Of course, I don't want to get accused of >implying something terrible and disrespectful about such a luminary as pmarca, so I won't imply something terrible and disrespectful: Andreessen is a goddamn retard. He's half a step up from Calacanis.
@judgedread

> Is anything reliable?

Well, I think when he says "unreliable" he's saying that you can't avoid second-guessing it. Confidently spouting horseshit at you is a thing that you expect a machine to do but you also expect to be able to rely on secondary signals like "it's complete gibberish" instead of "this is syntactically correct but is bullshit". And we've hooked it up to automation systems, right, and short of something like Kimi-K2, I barely see these things able to make sense of their surroundings. So I think that represents, more or less, the aspirational goal of LLMs as such. Absent an architectural shift, that'll be as good as it gets.
notepadchan.png
@judgedread Yeah; I mentioned embeddings in the other sub-thread; it's vectorization of semantic meaning, and it does that, more or less. But the problem isn't exactly the *memory* per se, it's Mickey Mouse making an army of mops. The grey goo problem but for information.
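
The mechanic, stripped to its bones, is nearest-neighbor search over vectors. A toy sketch in awk; the document names and 3-dimensional vectors below are invented for illustration (real embeddings come out of a model and have hundreds of dimensions):

```shell
# Toy semantic retrieval: rank stored vectors by cosine similarity to a query.
# Each line of the data file is "doc_id v1 v2 v3"; these values are made up.
rank() {
  awk -v q="$1" '
    BEGIN {
      nq = split(q, qv, " ")          # parse the query vector
      qlen = 0
      for (i = 1; i <= nq; i++) qlen += qv[i] * qv[i]
      qlen = sqrt(qlen)               # query vector magnitude
    }
    {
      # Field 1 is the document id; the rest are vector components.
      dot = 0; dlen = 0
      for (i = 1; i <= nq; i++) {
        dot += qv[i] * $(i + 1)
        dlen += $(i + 1) * $(i + 1)
      }
      printf "%s %.3f\n", $1, dot / (sqrt(dlen) * qlen)
    }'
}

printf '%s\n' 'doc_cats 0.9 0.1 0.2' \
              'doc_dogs 0.8 0.2 0.1' \
              'doc_stocks 0.1 0.9 0.8' > /tmp/vecs.txt

# An "animal-ish" query vector ranks the animal documents first.
rank "0.85 0.15 0.15" < /tmp/vecs.txt | sort -k2 -rn
```

The top-k rows are what gets pasted into the LLM's context to bias generation; everything else (chunking, the embedding model, the index structure) is engineering on top of this.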

Absent architectural shifts, the thing you're looking at is as good as it gets; 80%, 90%, just polish left. We'll figure out how to apply it some time after the crash.
I understood none of that but the AI videos are getting better, so I wouldn't count on your reasons holding it back forever.

In every case where coders have told me that 'AI doing X is impossible' they have been wrong... and I've had to listen to this for probably longer than you've been alive.
@judgedread

> In every case where coders have told me that 'AI doing X is impossible' they have been wrong...

I don't know what claim I've made about which thing is impossible; I didn't even say they're not worth research. I just said "this is how you do context-sensitive memory with LLMs, you do a big-ass vector database of semantic knowledge and feed that into the LLM to bias generation" and "we're getting towards the end of what you can do with LLMs". LLMs aren't "AI", they're just token predictors. This is why they have the problems they have. Mickey Mouse, army of mops. You also get a lot of terrible feedback loops if you try to put them in charge of *which* knowledge they should treat as true and which they should treat as specious.

Bigger context windows (i.e., an order of magnitude more RAM) solve some of it, better training solves some of it; Kimi-K2 is so far the only one that has correctly ascertained the purpose of a tiny chunk of awk code I wrote, and that thing is so huge (1e12 parameters, so close to a terabyte at 8 bits per parameter) that it requires a GPU farm.

(Feel free to try: `echo '++++[->++++<]>[-<+++++>>+++++++>++<<]<------.>>+++++.--.+.>.<[-<+>>>+<<]>[--<++>>-<]>---.+++++++++++++.+.<<<.>>>-------.---.<<<--.>.>>+++.-------.++.++[->+<<+>]>++++++.<<.<<.>[-<<->>]<<++++.[>>>--<<<-]>>>+.' | sed -E 's/(.)/\1\n/g' | awk 'BEGIN{print "BEGIN{p=0;"}END{print "}"}/\./{print "printf \"%c\",a[p]"}/\+/{print "a[p]++"}/-/{print "a[p]--"}/</{print "p--"}/>/{print "p++"}/\[/{print "while(a[p]){"}/\]/{print "}"}' | awk -f /dev/fd/0`. Most of them think it prints "Hello, World!" and even then, only the bigger ones can reliably guess "brainfuck to awk transpiler".)

You want to see what's going to be workable in the next few years? Assume RAM gets bigger, so you look at something too big to plausibly run in your house for less than five figures (because that'll probably be four figures in a few years), and then reason "If we solved the specific mechanic behind this problem, how would it perform?"
Absent an architectural shift (which cheaper GPUs with more RAM might get us), we're looking at something that is like Kimi-K2 with a handful of problems solved. Even that article I sent, right, the guy's like "This has some use but we're not there yet."
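
For anyone squinting at that one-liner: it maps each brainfuck opcode to one awk statement and runs the generated program. A readable expansion of the same idea (my own rewrite, not the exact original; the `,` input opcode is omitted, and the "prints A" test program is mine):

```shell
# Readable brainfuck-to-awk transpiler: each opcode becomes one awk statement;
# the generated program runs entirely inside a BEGIN block.
bf2awk() {
  fold -w1 | awk '
    BEGIN { print "BEGIN{p=0" }           # open the program, tape pointer p
    /\+/  { print "a[p]++" }              # increment current cell
    /-/   { print "a[p]--" }              # decrement current cell
    />/   { print "p++" }                 # move pointer right
    /</   { print "p--" }                 # move pointer left
    /\./  { print "printf \"%c\", a[p]" } # output cell as a character
    /\[/  { print "while(a[p]){" }        # loop open
    /\]/  { print "}" }                   # loop close
    END   { print "}" }                   # close the program
  '
}

# 8*8+1 = 65 = ASCII "A"
printf '%s' '++++++++[>++++++++<-]>+.' | bf2awk | awk -f /dev/fd/0
```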

Previously, I did say (and maintain) two things: that we can't simulate a guy (we'd have to hollow out the moon, which I explained), and that this is the panopticon. There are no "can't use amassed LLM sockpuppets to manufacture consent or to conduct massive real-time surveillance campaigns" restrictions against the government (and even when there are restrictions like that, the 2007 warrantless wiretapping and PRISM and the Biden Twitter/Facebook thing demonstrate that the government doesn't really care when there are restrictions), and you can see that the PRC and US are aggressively pouring money into this while governments that don't have much fear of uprisings are making minimal investments. (The room full of Russkies posting at boomers on Facebook is probably going to be replaced with LLMs from Yandex, but Russia's been half-assing their influence campaigns for the last 20-ish years at least.)

That's going to be a qualitative shift in government and that is bigger than clickbait journos getting the ax. I showed you, right, on consumer hardware, it'll do a passable job at summarizing my notifications. You look at the NSA's exabytes under the desert in Utah, you look at https://www.top500.org/lists/top500/2025/11/ and the US, which had been slipping in the rankings, suddenly ate the top 10 again, while China went from owning the top slot to #24. And where you saw only governments before, you see Microsoft and NVidia.

(OpenAI has, at current revenue levels, an order of magnitude gap between their $250B minimum-buy commitment for Azure and their revenue of about $3-4B/month. Microsoft already owns 27% and will acquire them unless they get a big contract. They are talking Q4 IPO and they're discussing advertising; advertising will not make up the shortfall and is a dry-hump for businesses that have run out of ideas. But "persuasive reasoning based on natural language analysis" is, in the case of LLMs, dual-purpose: they are eyeing government contracts.)

Whatever claims "all coders" make, I ain't those guys; I know what claims the median techno-mysticism enjoyers make, though. You're talking to me, not coders in general.
mickey_mouse_demonstrates_a_thing_that_will_definitely_end_well.gif
They already simulate a guy well enough that you consider AI bot swarms a serious propaganda threat.

I suspect your code example is the 'trick question' method. If you know the weakness you can bust it. One AI tout says that AI coding sucks in languages with very little training data, but he solves it by using the ones it knows.

This goes the other way as well, the proliferation of benchmarks allows teaching to the test in the eternal quest for headlines.

OpenAI is absolutely the scammiest of them. Anthropic is ahead of them in revenue. GOOG, Zuck and Elon can subsidize the data centers in search of more efficient models that might turn a profit, OpenAI could run out of money this year.
@judgedread

> They already simulate a guy well enough that you consider AI bot swarms a serious propaganda threat.

This makes me think that you think we disagree in ways that we do not.

Simulate a guy--an actual, specific guy--versus simulate a text stream well enough to fool a normie. Consider the gulf between typical Gablin and typical fedi user. A Markov bot could do a convincing Gablin, LLMs put it at "indistinguishable from normal user" and within striking distance of fedi users.

Propaganda targets large numbers of people: people within two standard deviations are 95% of the population (close enough to "unanimous"), people within one standard deviation are 68% (thus IQ 85-115 is enough for a supermajority). LLMs have really astonishingly similar cognitive holes: they latch onto prurient phrases and miss the point, they do really poorly with recursion and abstraction, they lose context in a long conversation. And this despite being pure token predictors that have zero internal knowledge representation. You shouldn't be impressed with LLMs, you should be depressed to have evidence that p-zombies are real. The entire issue with sociopathy/psychopathy is that these people are skilled at manipulation and have no moral qualms: resurrect Bob McNamara and give him an LLM botfarm and see how soon hell follows him back to earth. This is already in regular people's hands; separate thread, Rust proponent turns out to be a bot: https://media.freespeechextremist.com/rvl/full/7f48e553f2f043a7ec0b614f765fd7dcd423dca4ea723e788d78ff8f4a689e69?name=rust_bot_astroturfing.png . You describe your main worry as Antifa and you should worry more because most of these LLMs are planted firmly in the :compass_ll: because sassy/stunning/brave Twitter academics are doing the RLHF.
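
The 68/95 figures above are just the standard normal distribution; a quick numeric sanity check (a plain midpoint-rule sum, nothing clever):

```shell
# Integrate the standard normal density over [-k, k] for k = 1, 2
# to confirm "68% within one sd, 95% within two".
awk 'BEGIN {
  pi = 3.14159265358979
  dx = 0.0001
  for (k = 1; k <= 2; k++) {
    s = 0
    # midpoint rule: sample at the center of each slice
    for (x = -k + dx / 2; x < k; x += dx)
      s += exp(-x * x / 2) / sqrt(2 * pi) * dx
    printf "within %d sd: %.1f%%\n", k, 100 * s
  }
}'
# → within 1 sd: 68.3%
# → within 2 sd: 95.4%
```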

> I suspect your code example is the 'trick question' method.

No; it was something I bashed into the shell so that a screenshot would be more interesting. Off-the-cuff hack. I have written a lot of brainfuck interpreters/compilers/transpilers/JITs; the original brainfuck compiler was a code golf exercise to make the smallest possible Turing-complete compiler for the Amiga. (251 bytes!) So it is a relentlessly simple language. One of the reasons that the LLMs think it says "Hello, World!" is that most brainfuck programs just print "Hello, World!".

Find a coder that's not on fedi, Unix hacker type, show it to him; might take him a second but most humans will spot it right away.

> If you know the weakness you can bust it.

I keep saying that black-hat stuff's gonna get lucrative. Mickey Mouse is marching his brooms, man.

> the proliferation of benchmarks allows teaching to the test in the eternal quest for headlines.

I don't doubt they'll do that but we're not talking about Mechanical Turks.

> OpenAI is absolutely the scammiest of them.

Then you should know that this means they end up with money. Doesn't matter, though.

Anyway, it doesn't matter: most of their future commitments are owned by Microsoft. Either they will be working for the government manufacturing astroturf that says "OpenAI" on it, or they will be working for the government manufacturing astroturf that says "OpenAI, a Microsoft Company" on it.

> GOOG, Zuck and Elon can subsidize the data centers

You don't understand what I am saying.

1. They inked a deal with Microsoft for a quarter of a trillion dollars to be paid out over six years for Azure services. There is a cliff: they don't have to start paying right away, but they will have to start paying at some point. They do not have enough money and are banking on a large amount of revenue starting by the end of the year.

2. They are now a "Microsoft Partner". This only ever goes *one* way.

3. Microsoft has disparaged them in the press. This is a shark bumping the cage full of scuba divers. Microsoft wants to see how easy it is to tank OpenAI's valuation. This means Microsoft is banking on OpenAI not meeting the commitments and they think that:

4. The only revenue channel OpenAI can meet their commitments with is government money. One of Microsoft's main revenue channels is the government. It doesn't matter who owns OpenAI to anyone *but* employees with 83(b) elections and Sam Altman, who probably gets a much bigger exit in the event of an IPO than an acquisition. Investors are generally fine with "employee shares have been converted to Microsoft options that vest in five years".

5. Microsoft owns 27%. Microsoft's internal AI products are shit: everyone hates them. Microsoft owns GitHub and GitHub is throwing around competitors' models. Microsoft doesn't like you to plug in a keyboard that's not MS WHQL-verified. Microsoft is throwing money at OpenAI because they intend to acquire OpenAI. To do this, they will look at the expected revenue for owning the biggest market player versus the expected revenue for the stock, and if the former number is bigger, they will try to tank OpenAI's valuation.

> OpenAI could run out of money this year.

They might. Doesn't matter. They will be spending investors' money, the government's money, or Microsoft's money, but they're not going to just evaporate.
screenshot_in_question.png
The most interesting thing is your prediction that Microsoft is making a play to take control of OpenAI for real.

I'll stick a pin in that one. If Altman pulls off the IPO and MSFT is still getting wagged by the dog... well, they have time.
@judgedread

> The most interesting thing is your prediction that Microsoft is making a play to take control of OpenAI for real.

That's who pays the bills for what is about to happen (and, at a small scale, is already happening, per the screenshot in the link), not what happens; seems obvious and not very interesting. All they really need to do to completely tank OpenAI's valuation is to *slow* *down* hardware provisioning for OpenAI, and then they get a steep discount on OpenAI.

My pet theory is Microsoft Azure sabotaged GabTV by throttling, so I entirely agree that is possible. Not sure what other assets OpenAI has tho. More than Torba, that's for sure. Altman is thick as thieves with Jared Kushner. Just discovered that little twist.

@judgedread I am more inclined to believe that it was bad tech/deployment; I never used "Shing" but I imagine that that would be a good way to tell whether the rumors of Torba being an incompetent micromanager were true or not.

Altman's just trying to get a bigger exit because the upside of an IPO is probably higher than whatever number MSFT gave him on the back of an envelope. I don't think he's gonna get the valuation he wants and I think MSFT wins this round but call that 60-40 (with a 0% chance that I am wrong about the premise).
@judgedread @p

Interesting thread.

I don’t write code directly anymore unless it’s for personal projects.

I just run Cursor 3 in full agent mode, make a long long long ass prompt where I describe the architecture to one agent, make another agent code, make another agent do unit test drafting, make another agent do secure code review, and make a 5th agent update Jira with documentation of features and ticket closeout via MCP.

LLMs are the new slaves, and I am a kind overseer.

People are gunna get poor super fast.

Actually no, people start getting poor if Torba figures out he can run Gab all by himself with Cursor, that’s when heads will roll.
@ins0mniak so if linux sysadmins generally code and script well, is it really a sin to conflate IT and programming interchangeably? I defer to his grace Pope @p .@judgedread@poa.st


Personally I think it’s more of a sin to conflate programmers with sysadmins. Some OG programmers (who are actually genuine programming-language-fluent computer scientists and engineers) know about computers outside of some coding dialect, but the typical React front end dev doesn’t know jack about ports, network traffic or how to do subnets and system config.

I would love if there was a global AI outage for 2 weeks to watch the fatcats squirm, manually googling and trying to figure out stack overflow after they “retired” so many computer engineers.
@Godsend @ins0mniak

> is it a really sin to conflate IT and programming interchangeably?

Yes.

> Personally I think its more of a sin to conflate programmers with sysadmins.

No, it's the same thing. If a sysadmin can't script, he's a dipshit admin, he's on autopilot, he don't know how to analyze his own log files, he is *food*. If a coder can't run his own boxes, he can't be that good at coding.

Imagine a mechanic that doesn't know how to drive a car or evaluate the design of an engine or an engineer that doesn't know how the mechanic is going to maintain the engine.

> Some OG programmers

This is not a rare thing but a basic measure of competence.
@p

I don't think our IT marketplaces are that different, so your opinion surprises me.

There are tons of data Engineers, data Scientists, machine learning engineers out there who are subpar programmers or have near-zero networking knowledge. You and others like you are the minority. I am just not seeing that competence in the marketplace whatsoever except for the startups that compensate you with equity only.

To use a WoW analogy, most of the 20s to mid-30s "kids" out there who are paid at the higher end in the "IT" marketplace are not warriors (sysadmins), rogues (db admins), or mages (coders), but hybrids that don't do any one thing well: ethical hackers, data engineers, data scientists (paladins, shamans, druids).

Universities are graduating kids with computer science degrees that have outdated knowledge, and they haven't found a way to stop the current generation of kids from using ChatGPT on their assignments, which would harden core IT skills.

As GenAI progresses, the "I learned to code to get a job" crowd will shrink and the "I'm the next Steve Wozniak" crowd will leverage GenAI like mathematicians leveraged the TI-83 and will take Computing to the next level. Probably creating a bunch of new fuzzy fluffy hybrids, but hey...someone needs to be able to talk to the C-Suite to let the Wozzes of the world do the genius work.

@ins0mniak
@Godsend @ins0mniak

> There are tons of data Engineers, data Scientists, machine learning engineers out there who are subpar programmers or have near-zero networking knowledge.

I am well aware that there are vast numbers of incompetent dipshits, yes. I have done real work in the world.

> will leverage GenAI like mathematicians leveraged the TI-83 and will take Computing to the next level

People that can't understand architecture, using machines that cannot produce novel algorithms, will produce the next level of computing. This reminds me of something...

"Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?"

Ain't holding my breath.

> To use a WoW analogy

As noted many times, I have not ever played an MMO; the analogy is completely lost on me.

> most of the 20s to mid-30s "kids" out there

This does not match my experience.

> data engineers, data scientists

These aren't tasks you give to someone that isn't good at anything. I've got no idea what your WoW analogy was supposed to convey.
@p As for WoW, think D&D.

Your basic RPG adventuring classes are warrior, rogue, mage… everything is derived from those (e.g. a healer priest is essentially a mage with healing magic, a ranger is a rogue that does non-magic damage at a distance… hopefully you get that much), then the idea is that the fancy new hybridized classes can do more than one thing but no one thing well.

The 'new computing science professions' are a disaster because - whereas a CEO allegedly has enough intelligence to know that a cardiologist is probably not the best for, say, brain surgery, the same typical CEO has this weird expectation that a "Data Scientist" can optimize queries in a SQL database, and is a master at data engineering and storage policies. In reality, Data Scientists get degrees and certificates in some business statistics and data dashboarding/presentation, but charge 6 figures for that super niche skill (and said skill is "niche" because it was a profession invented like 5-10 years ago only because statisticians and accountants back then knew next to no SQL and front-end presentation technologies… no longer the case). Many "hybridized" IT professions are in fact one-or-two-trick pony professions.

The big problem is that they don't have the fundamentals of computing science down, when everyone expects them to in small to medium-sized businesses.

I mean, I expect a cardiologist to be able to deliver a baby or perform CPR or properly bandage someone (because that is basic medical school stuff before medical specialization of cardiology) but your average data engineering guy/gal cannot reconfigure a Cisco Switch or a Fortigate Firewall, or won't even try. The 'IT' industry incentivizes pushing people straight to specialization without covering IT basics.

When I was growing up, basic IT had 4 cornerstones that you needed proficiency in before anyone hired you: programming, databases, networking, OS/commandline.

Now?
e.g. Scrum Master - 6 figures, no commandline skill.

e.g. Project Manager - 6 figures, no commandline skill

e.g. DevSecOps Engineer - 6 figures, makes the assembly line for coders, doesn't know object-oriented programming.

e.g. MLOps Engineer - 6 figures, understands ML Flow and CI/CD, barely understands backpropagation and gradient descent.

e.g. React / React Native Developer - 6 figures, can't optimize SQL queries if front-end latency is already at a minimum.

P.S.

I don't really keep track of whether one has played WoW. That's like keeping track of whether one has played Mario Kart or read Harry Potter. There are too few outliers to keep track of that.

@ins0mniak
@Godsend @ins0mniak

> As for WoW, think D&D.

I am old as shit and have spent my entire life not knowing about elves and I am not gonna start now.

> weird expectation that a "Data Scientist" can optimize queries in a SQL database

Data scientists do statistics, specializing in large datasets. Data engineers are the ones that have to turn that into workable code.

> The big problem is that they don't have the fundamentals of computing science down, when everyone expects them to in small to medium-sized businesses.

Small shops, everyone's gotta do everything. A hacker reads the research papers and Phrack, reverses the firmware and architects distributed systems. Hacking is hacking. Hacks is hacks.

> e.g. Scrum Master - 6 figures, no commandline skill.

This is a makework job.

> e.g. Project Manager - 6 figures, no commandline skill

This is the guy that talks to the coders and stops them from having to talk to the normies.

> doesn't know object-oriented programming.

OO is a special case (a degenerate case) of CSP. Everyone understands message-passing architecture if you explain it to them and everyone that can hack worth a damn structures their code that way the first time they see it. Devops guys know how to type `|` in the shell.
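
To make the CSP point concrete (a toy of my own, not anyone's production code): an "object" is just a sequential process holding private state and responding to messages on a channel, and in the shell the channel is a pipe:

```shell
# A counter "object" as a process: its state (n) is private to the process;
# the only way to interact with it is to send messages down the channel.
counter() {
  n=0
  while read -r msg; do
    case "$msg" in
      inc) n=$((n + 1)) ;;   # mutate private state
      get) echo "$n" ;;      # reply with current state
    esac
  done
}

# Three "inc" messages and a "get" come back as 3.
printf '%s\n' inc inc inc get | counter
```

Method dispatch in an OO language is the degenerate case where the channel is a synchronous call stack instead of a pipe.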

> I don't really keep track of whether one has played WoW. That's like keeping track of whether one has played Mario Kart or read Harry Potter. There are too few outliers to keep track of that.

I must have been too busy being awesome to know whether Harry Potter played enough World of Warcraft to understand the finer points of the class system.

@p @judgedread @r000t @Trevor Goochild

Sometimes the DM just has to sit back and let the idiots... ahem... "players" retard themselves to death.

"...are you SURE you're going to chase the wounded Very Old Green Dragon into its underwater lair?"

"WEEEEEEEEE TREASURE!!!!! LEROYYYY JEEAAAAAAAAAAAAAAKINNNNSSSSSSSS!!!!!!"

/e pulls out his "killing dice."

"Just a heads up guys, bring some fresh character sheets next week."
@SilverDeth @Trevor @judgedread @r000t Had a friend that was running some split game, like, two parties. The first party decided to spread a rumor in the town about the "Head of Vecna" and they dumped a dried head in the cave outside town. You know how the Hand of Vecna works, right? (I don't know if people know this. I didn't, but I wouldn't be expected to, so I don't know if I'm explaining that Luigi is Mario's brother.) You cut off your hand and attach Vecna's hand and then you get some sort of (lich?) powers.

Allegedly, they found the head and there were two characters dead as a result of the Head of Vecna. The first one was "Hell yeah, chop off my head and put the Head of Vecna on!" And the head didn't automatically attach or come to life and the character's body just sort of stayed dead because they chopped the head off. So you'd think there'd be a "fool me once" effect, but "fool me once" is often counteracted by "Everyone stand back: I know karate!" So another guy--everyone stand back, he knows karate--said "No, you're doing it wrong: you have to cut off your *own* head and also you have to attach it before your body dies." and sat in a stone chair with "Vecna's" head and tried to Rube Goldberg it such that the head would land right on the neck stump. It worked flawlessly: he cut off his head, then the random dried head that had nothing to do with Vecna landed in place and of course, immediately, his body slumped off the chair and the head rolled across the floor. The rest of the party decided not to attempt it.