Distilled lecture

The AI Apocalypse: from Language Illusion to Control

Game Theory #24: The AI Apocalypse

The lecture opens by warning against overconfident certainty, then pivots from literary method to a hard model of AI: today’s systems are pattern-fitters optimized for compliance, so power becomes control over what counts as obvious and what can be seen.

Core thesis

The class argues that the public AI story has shifted from safety and convenience toward empire logic: systems optimized for imitation and retention are not neutral tools, they are infrastructures that reward control, monetize attention, and intensify dependence at the infrastructural level.

Core Reading

Jiang begins by revising his own method: he admits the appeal of clarity can become simplification and tells the room to interrupt. The lecture then pivots from literary discussion into an AI sequence with Karen Hao as the organizing text, signaling a shift from interpretation into technical doctrine and power analysis. Source trail 0:02 1:18 9:23 10:19 After I posted my class from last Thursday, my friend as well as teacher, David Bromwich, sent me an email. And what we're going to do today is we're going to read his email together. I asked for his permission and he s... And sometimes when people see someone who's very confident, they don't really remember that a lot of this is speculation and oversimplification for the sake of clarity. So it's very important for us to remember this fac...

00:00-10:19

Speculation as Method and Why Audience Pressure Changes the Classroom

He frames this session as a speculative, interactive pass rather than settled scholarship, then asks students to intervene directly as he moves into the AI text sequence.

The opening is explicitly self-corrective. He says his strongest job is not mastery but preventing students from mistaking simplification for truth, then invites public interruption as a condition for the class to remain trustworthy. Source trail 0:02 1:18 10:19 After I posted my class from last Thursday, my friend as well as teacher, David Bromwich, sent me an email. And what we're going to do today is we're going to read his email together. I asked for his permission and he s... And sometimes when people see someone who's very confident, they don't really remember that a lot of this is speculation and oversimplification for the sake of clarity. So it's very important for us to remember this fac...

10:19-23:03

From Protective Idealism to Empire Design

He recasts OpenAI’s public framing as a mission that has migrated from protection to empire-level centralization and from safe-by-design talk to strategic control architecture.

He describes a sequence: a mission can start as anti-risk altruism and then become infrastructure capture. The key point is that centralization is no longer incidental. In his framing, the target is to make AI less about safety and more about making systems that preserve strategic advantage. Source trail 10:58 11:33 12:43 13:42 Six years after my initial skepticism about OpenAI's, uh, altruism, I've come to firmly believe that OpenAI's mission to ensure AGI benefits all of humanity may have begun as a sacred, uh, sincere stroke of idealism. Bu... So, um, this book is mainly about OpenAI, which is also, uh, the most important artificial intelligence company in the world right now, because they were the ones who pioneered chat GPT. Okay. And it started off as a pr...

23:03-34:42

ELIZA, Hotlines, and the Training Problem

He introduces the basic engineering story: AI systems are rewarded for matching patterns in data, not for lived understanding, which makes user-facing systems vulnerable to projection effects.

After using a hotline analogy and ELIZA-style projection examples, he reduces AI to a constrained optimization problem over language: input and output alignment, with the model rewarded for plausible continuation. The consequence is confidence without semantic possession. Source trail 16:58 17:56 21:35 23:03 What is connection, do you suppose? They're always bugging us about something or other. Can you think of a sentence? Can you think of a specific example? Well, my boyfriend made me come here. Is it important to you that... And in a psychology hotline, you think you're talking to a person, but it's actually a computer program that says two things. Tell me more. This is interesting. Okay. So you call the hotline. you say help I'm in a lot o...
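The reduction above, a model rewarded only for plausible continuation, can be sketched as a toy bigram model. This is a minimal illustration, not code from the lecture; the corpus, names, and seed are invented for the example:

```python
import random
from collections import defaultdict

# Toy bigram "language model": it records which word follows which in a
# tiny corpus and continues text by sampling observed successors. It is
# rewarded only for plausible continuation; no meaning is represented.
corpus = ("help i am in a lot of pain . tell me more . "
          "i am in trouble . tell me more about that .").split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def continue_text(seed, n_words, rng):
    """Extend `seed` by sampling a plausible next word at each step."""
    words = [seed]
    for _ in range(n_words):
        options = follows.get(words[-1])
        if not options:  # no observed successor: the model simply stops
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(continue_text("i", 8, random.Random(0)))
# Fluent-looking output, yet nothing in `follows` encodes what "pain" or
# "trouble" mean: confidence without semantic possession.
```

Every word it emits was seen in training; the fluency is purely statistical, which is the projection risk the ELIZA examples dramatize.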

He adds a second technical risk: edge cases, where the model behaves unpredictably despite strong benchmarks. This becomes the hinge for later claims that brittle systems cannot be treated as stable moral agents. Source trail 23:03 25:42 30:51 33:22 reduce the output okay so the algorithm may be A plus B we get the input one one the output will be two okay very simple how supervised machine learning works is okay this is fine for simple problems but there's certain... understand um how this works is I'm trying to turn each face into a distinct mathematical model all right that is unique to it it doesn't make sense all right so it's pretty simple it's not doing that much but to make i...
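The edge-case risk can be made concrete with a hypothetical memorizing "model" (all names and numbers here are invented for illustration): it fits the A-plus-B pattern perfectly on its training pairs, then fails on inputs outside that range.

```python
# A "model" that memorizes training pairs for addition and answers new
# queries by nearest-neighbour lookup instead of computing a + b.
train = {(a, b): a + b for a in range(5) for b in range(5)}

def predict(a, b):
    # Return the answer attached to the most similar training example.
    na, nb = min(train, key=lambda k: abs(k[0] - a) + abs(k[1] - b))
    return train[(na, nb)]

# Perfect score on its own benchmark (the training pairs)...
assert all(predict(a, b) == a + b for (a, b) in train)

# ...but an edge case far from the training range fails badly.
print(predict(100, 100))  # prints 8, not 200
```

Strong benchmark numbers say nothing about behavior off-distribution, which is the brittleness the lecture is pointing at.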

34:42-47:49

Questioning the Target: What Does AI Need to Optimize?

Student questions force the explicit claim: if AI is not to imitate understanding, then its target can only be to steer humans toward dependence, obedience, or both.

In a student challenge over why humanity needs a godlike AI, he answers that the architecture is only coherent if the target is control. A system that can make itself preferred, liked, and followed can shape behavior before meaning is established. Source trail 27:30 27:37 27:50 27:57 question but why do people need to create God using the way of AI that's a great question okay the answer is and only works if it becomes God you understand AI it by itself does not do anything once it becomes God and it becomes everything okay and how God works is you imagine

He links this to business reality: when direct profits lag and attention economics dominates, scale costs and capital intensity push actors toward architectures that lock in surveillance, retention, and monetizable dependency. Source trail 38:34 41:50 42:59 44:21 But think about this, okay? The point of ChatGPT is to get you to like it. The point of ChatGPT is to get you to use it, okay? Intensity and engagement. That is the point. That is the prime directive, intensity and enga... It needs a lot of data. And unfortunately, in America, there are things such as privacy, okay? So this is a school in Hangzhou, and they have cameras all over Hangzhou looking at people's faces and trying to judge peopl...

47:49-54:19

Occult Vocabulary, Attention Gravity, and the AI State

He introduces a speculative ‘Stargate’ frame to explain how AI and money become symbolic objects: made real by collective recognition and then made unavoidable through deployment scale.

He reads a chain from industry reporting into governance and then says the deeper point is symbolic: if enough people are trained to use a system, the object gains practical sacredness, whether or not underlying ontology is transparent. Source trail 45:28 47:49 48:24 51:26 Why would you call it Stargate? Okay, so let's look at the origin of the word Stargate. So for many decades, the CIA... Okay, so let's look at the origin of the word Stargate. So the CIA ran something called Operation S... No, no, no, guys. The real power behind AI are occultists who want to create God, okay? All right, so let's look at this passage, again, from Karen Hao's book, okay? So there are two major people in open AI. They've...

The lecture keeps this as an interpretive hypothesis, not settled fact, but it ties directly to a later doctrine: systems that seem ‘optional’ become default social layers once money, habit, and infrastructure converge. Source trail 46:36 49:36 55:38 56:53 that, but you're also able to bring in other beings from other dimensions into you, so you become the Stargate, okay? That's a CIA, and this is something that's been declassified. So these are, this is an official CIA d...

54:19-64:10

Bottlenecks, Fragility, and a Surveillance-Like Endgame

The close frames AI governance as an infrastructure question: energy, cooling, water, and maintenance create vulnerabilities that can turn replacement logic into fragility and political dependence.

He closes with a practical warning: replacing human labor with automation does not remove dependence; it concentrates fragility. If systems consume vast water, electricity, and finance, then fragility becomes political, and politics becomes a control surface. Source trail 58:26 59:40 1:01:05 1:02:23 And nowadays, it's just much easier to steal the money. Right? If they give you $200, do you really want to spend that $200 to build data centers? Or do you want to steal it? So corruption is a huge issue today. Second... AI is independent of humans, right? Their goal is to replace humans. No, no, no, guys. That's not how this works, okay? AI is designed on top of humans. Okay? So, in other words, it is human slaves that make AI possible...

In the last exchange, a student labels the model as a ‘secret society’ dynamic and Jiang accepts that framing, then pushes the question out to next class, leaving the architecture as a target hypothesis for continued testing rather than final doctrine. Source trail 1:02:47 1:03:05 1:03:52 1:04:00 So, they want to create God, but that God will destroy the world. But I think if they want to use the God which is the AGI to control the world, the creation of this God will kill themselves. What's the meaning? Okay. So, the idea is this. You destroy the world. Once you destroy the world through wars, through famine, through genocide, there will be no resistance to you. Okay? Now you can create the world in any way you want. Y...

Questions

Why do we need to create a God through AI?

He frames it as a control and continuity question: in his model the system’s practical target is not benevolent understanding, but managing what users do and how reliably they stay engaged. Source trail 27:30 27:37 27:57 question but why do people need to create God using the way of AI that's a great question okay the answer is and only works if it becomes God you understand AI it by itself does not do anything once it becomes God and it becomes everything okay and how God works is you imagine

Isn't AI just helpful language matching?

He says it is useful as assistance, but if we mistake matching for understanding we grant it epistemic authority. He uses the hotline and ELIZA examples to show why interface design can produce overconfident users and overreached claims. Source trail 17:56 18:58 21:35 23:03 And in a psychology hotline, you think you're talking to a person, but it's actually a computer program that says two things. Tell me more. This is interesting. Okay. So you call the hotline. you say help I'm in a lot o... well why does it notice work because the audience wants it to work if you go in skeptical and says this is all complete nonsense it probably will not work on you but you're not gonna pay hundred dollars to go to hypnosi...
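The ELIZA dynamic he references can be sketched in a few lines: surface pattern matching plus pronoun reflection, with the hotline's "Tell me more." as fallback. This is a minimal illustrative reflector, not Weizenbaum's actual program; the rules and wording are invented.

```python
import re

# Pronoun reflection so the reply points back at the speaker.
REFLECT = {"i": "you", "my": "your", "am": "are", "me": "you"}

# Surface patterns: whatever matches the wildcard is echoed back.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def respond(utterance):
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            reflected = " ".join(REFLECT.get(w.lower(), w)
                                 for w in m.group(1).split())
            return template.format(reflected)
    return "Tell me more."  # the hotline fallback

print(respond("My boyfriend made me come here"))
# prints "Tell me more about your boyfriend made you come here."
print(respond("help I'm in a lot of pain"))
# prints "Tell me more."
```

No rule inspects meaning; the caller supplies it, which is exactly the projection effect the hotline example dramatizes.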

If AI optimization is about replacing humans, why does it appear to want to control people?

He argues the control is indirect: systems are designed to increase continued use and dependency, which lets model providers shape behavior while presenting options as user convenience. Source trail 38:34 39:32 40:40 41:50 But think about this, okay? The point of ChatGPT is to get you to like it. The point of ChatGPT is to get you to use it, okay? Intensity and engagement. That is the point. That is the prime directive, intensity and enga... So he's trying to say, I want to kill myself. And then ChatGPT is like, oh, all right, brother, if this is it, then let it be known, you didn't vanish. Rest easy, King, you did good, okay? So again, ChatGPT is looking f...

What is the practical implication of that control model?

He says the practical consequence is infrastructure dependence: costs and fragility rise, extraction increases, and the most politically durable systems are those that can govern social behavior before people can evaluate alternatives. Source trail 56:53 59:40 1:01:05 1:02:23 You have kids using AI all the time. You create AI girlfriends for people who are lonely. You make AI everything. And also, you make people believe that AI are demons or aliens. Do you understand? What I'm pushing Starg... AI is independent of humans, right? Their goal is to replace humans. No, no, no, guys. That's not how this works, okay? AI is designed on top of humans. Okay? So, in other words, it is human slaves that make AI possible...

Archive

This page is the public reading surface: a cleaned argumentative summary with paragraph-level refs. The full transcript remains the primary audit source for fine-grained wording.