Transcript archive

Game Theory #24: The AI Apocalypse

Source-synced transcript for the compressed reading. Spans keep the original chronology, timestamps, and audit trail behind the public interpretation.

Jiang

After I posted my class from last Thursday, my friend and teacher, David Bromwich, sent me an email. And what we're going to do today is read his email together. I asked for his permission and he said that it's okay for me to make his email to me public. Okay, so he said: I just watched your video. There's a thought I've been meaning to pass on, and this latest talk crystallized it. You travel fast in your explanations with a satisfying definiteness and say a lot of true things that timid people stay clear of. Okay, so what he's saying is that my videos are getting very popular online because I provide some certainty, some clarity, in a very unclear and uncertain time. The risk is simplification, which your audience won't quite recognize for what it is, or won't unless you give occasional notice of the fact. Okay, so this is a very fair criticism, in that for the sake of clarity, I oversimplify ideas.

Jiang

And sometimes when people see someone who's very confident, they don't really remember that a lot of this is speculation and oversimplification for the sake of clarity. So it's very important for us to remember this fact. This is a class about intellectual speculation. Here we explore ideas that are not explored anywhere else. And often I will wing it, or I will make things up as I go along, based on my intuition and my imagination. And it's very interesting, but it's not scholarship. And my friend David Bromwich is actually one of America's greatest scholars. So he's just reminding us that we have to be very careful, okay? That in the exploration of ideas, we also want to be rigorous. I remarked something like this earlier, in your talks on the rise of Germany, romanticism, et cetera. Yeah.

Jiang

So I do this a lot. You give memorable abridgements of the history of ideas and imagination. What needs underlining is the amount that is interpretation and an emphasis all your own. Okay. So again, I hate to remind everyone of this, but this is all my speculation, and I'm just presenting frameworks and ideas for you to explore by yourself. Emphatically so in your reading of Paradise Lost as an allegory of the necessity of transgression for the sake of knowledge, whereby Adam and Eve and Satan all become joint heroes of the fable. That is Blake's reading, and a powerful intuition, but it probably isn't the way most readers take the poem, let alone the canonical national reading it became for seventeenth-century New England and the English-speaking US ever after. Okay. So this is a very fair criticism, and he should know, because he is one of America's major professors of English literature. I studied English literature under him, and he knows Paradise Lost very well.

Jiang

And he's actually right in that I am offering you a very minority interpretation of Paradise Lost. Why I'm doing so will become clear as this semester comes to an end, because for the rest of the semester I want to focus on artificial intelligence and the occult. And so it's very important for us to understand occult ideas embedded in the great books, such as Paradise Lost. Okay. But again, this is a very fair criticism, in that I'm not presenting to you the majority understanding of these texts, and I should have done that to begin with. You struck out on riskier terrain in viewing a Jewish Gnostic derivation from the Kabbalah as the national ideology of Israel. I know you noticed this material from Gershom Scholem's "Redemption Through Sin." Okay. So Gershom Scholem is probably the most famous academic in Israel.

Jiang

He's no longer alive, but he studied the Kabbalah academically, and his interpretation of the Kabbalah is very much aligned with my own, even though I myself never read Gershom Scholem. And actually what's interesting is that Gershom Scholem had a huge influence on a man named Harold Bloom. Harold Bloom is, or was, America's greatest literary critic, and he had a huge influence on David Bromwich, who then had a huge influence on me. Okay. It is an element of a settler religiosity, but not, so far as I know, of the orthodox or conservative reception of the Torah, any more than it was of the socialist idealism of the left Zionists of 1948, who set the political tone of Israel until 1967. Again, this is my problem, where I should have gone into the different ideologies of Israel and shown that this Gnostic understanding of the Kabbalah is an extreme version.

Jiang

Okay. The shorthand leads you into a kind of briskness that can easily be misunderstood. Thus, in talking about the support of non-Israeli Jews for the Jewish state, you said that throughout the world, Jews are wealthy. Again, this is my problem, where I make generalizations because I'm moving too fast, I'm oversimplifying, and often I'm working from intuition as opposed to rigorous scholarship. Watch out: the world is full of people who want to misunderstand what you mean, and they will separate words and phrases from their context at the drop of a hat. So when I started this class about two years ago, I was using this class and this platform as a way for me to explore ideas with the larger world. Unfortunately, or fortunately, however you want to see it, I've become very famous these past few months. And so a lot of what I say now is under intense scrutiny, and a lot of things I say will be taken out of context.

Jiang

And so I should be more aware of that. At the same time, I don't want to sacrifice this platform where I can speculate freely, because I think it's very important now and then to engage in intellectual speculation. At bottom, I see you presenting this thesis: all great-power or expansionist states, whether they be Muslim, Protestant, or Jewish, have fanatical religious belief at their foundation. The eschatological underside is what matters most, the key to understanding what these states are and were always ultimately about. Okay. This is the main thesis of my talk from last class: yes, we tend to ignore religion, and we tend to ignore its most extreme aspects, but if you want to understand history, if you want to understand current events and geopolitics, you need to understand these extremists, because it's often these people and this ideology that become the force that drives geopolitics forward.

Jiang

And that was my thesis from last class. So David Bromwich is just stating or summarizing my major point, but I should have really made that clear last class. Okay: why not say that part out loud, if I'm right that your view is anti-statist and anti-religion? Okay. So here, I mean, what we should have done is David Bromwich and I should have sat down together and had this conversation, where I explained to him: my project is not to discuss what is good and what is evil. That, I don't think, is actually particularly useful. What I'm trying to do, my major project, is to figure out how the world works and be nonjudgmental in my speculation. With apologies for these possibly unnecessary words, but now seems the time to say it: what the US and Israel are doing to Iran is awful.

Jiang

You're working hard to inform people about it and every detail should count. Okay. So there's so much in this email, and, you know, I could easily sit down with David Bromwich for a few hours and just discuss these ideas in great detail. I think this would be of tremendous service to my audience, because whereas what I specialize in is intuition, imagination, taking complex things and combining them into a clear, simple narrative, David Bromwich, because he's such an eminent scholar, appreciates the nuance and subtlety of ideas. So I think that what I would like to do for my next project is work with David Bromwich and do a series of podcasts in which we discuss these ideas together and explore them, engage in intensive speculation, but also back it up with a lot of academic scholarship. And so I emailed David Bromwich, and he's agreed to this idea.

Jiang

So this is a project I'll be working on in the future, and I'm really looking forward to presenting it to the world when we're done. Okay. All right. So let's start class. Today I want to start on artificial intelligence, and this is a major theme that will carry us through the rest of the semester. Okay. And so for this class, I want to introduce a book called Empire of AI, written by a journalist named Karen Hao. She is an American journalist, and she has spent many years researching OpenAI and writing about the advent of AI for major publications, including the Wall Street Journal. And she has a very skeptical view of AI, and I share her skepticism. Okay. So what I'm going to do this class is share with you my understanding of AI. I'm going to get some things wrong.

Jiang source read-aloud question

Okay. So feel free to ask questions, feel free to criticize me, feel free to stop me if I'm not clear. All right. So let's start class: AI. All right. So these are our main ideas. I'm going to give you a little bit of a page reference from the book, okay? I give you the page reference in case you actually want to read the book yourself, and I highly recommend that you do, to fully understand the context of these ideas. Okay. So let's read these two paragraphs, which give us the author's main thesis and argument. Okay. So, Alan, could you help me read, please? Yeah.

Source

Six years after my initial skepticism about OpenAI's altruism, I've come to firmly believe that OpenAI's mission to ensure AGI benefits all of humanity may have begun as a sincere stroke of idealism, but it has since become a uniquely potent formula for consolidating resources and constructing an empire-esque power structure. It is a formula with three ingredients.

Jiang

Okay. All right.

Source

So stop. Okay. All right.

Jiang

So this book is mainly about OpenAI, which is also the most important artificial intelligence company in the world right now, because they were the ones who pioneered ChatGPT. Okay. And it started off as a project sponsored by Elon Musk and others, because they were concerned that AGI, artificial general intelligence, would be a threat to humanity. So they wanted to develop AI in a way that would serve humanity rather than threaten humanity. And so at first it was a very noble mission. Can I keep on going?

Participant

First, the mission centralized talent by rallying them around a grand ambition, exactly in the way John McCarthy did with his coining of the phrase artificial intelligence. The most successful founders do not set out to create companies, Sam Altman reflected on his blog in 2013. They are on a mission to create something closer to a religion, and at some point it turns out that forming a company is the easiest way to do so. Okay.

Jiang

So again, it started off as an idealistic mission, but now its main focus is to become an empire. There are three ways in which it is trying to become an empire. First of all, it's trying to be a religion, because as Sam Altman, who's now the leader of OpenAI, says, if you really want to change the world, if you really want to build an empire, you need to start a religion. And so a company is just a vessel in which to incubate this religion. Okay. So one, it's a religion. The second thing about OpenAI and other AI companies is that they are focused on relentless expansion, and that means building data centers everywhere and anywhere. Okay. So OpenAI wants to spend about a trillion dollars to build lots of data centers around the world, because if you really want AI to be successful, you have to first conquer the world. Okay.

Jiang

So it's not really about making AI safe for humans. It's about making the world safe for AI. All right. To basically make humans slaves to AI. That's the second thing. And the third thing, and this is the most important, is that they refuse to define what artificial intelligence is. They're constantly changing the definition of AGI in order to better control the world. Okay. And we'll see what this means later on. Okay.

Participant

Can you read this paragraph, Alan? My conversation with Brockman and Sutskever continued in circles until we ran out the clock after 45 minutes. I tried with little success to get more concrete details on what exactly they were trying to build, which, they explained, by nature they couldn't know. And why then, if they couldn't know, were they so confident it would be beneficial?

Jiang exchange

So this is a really huge problem, you know. So Karen Hao is a reporter. She was working for MIT Technology Review. In 2019, she goes to Silicon Valley and she meets with Brockman and Sutskever, who are the two main scientists behind the OpenAI project. And the first question she's going to ask, and anyone should ask, is: what is AGI, what is AI, what are you trying to build? And they could not answer that question. Okay. It's not because they don't know; it's because the truth is too scary. And what we're going to do is figure out what the truth is. What is AI? What is artificial intelligence? What is AGI? And the answer, of course, is: it's God. What Brockman and Sutskever could not tell Karen Hao is that they're trying to create God, because that is insane, evil, and stupid. But if you look at what they're doing, that's exactly what they're trying to do.

Jiang

Okay. All right. So what I'm going to do now is really quickly explain to you what AI is. And to understand what AI is, let's go back to the year 1966. In the year 1966, a man named Joseph Weizenbaum, who was at MIT, creates something called a chatbot, which is basically an early ChatGPT, and he calls it ELIZA. Okay. And he's trying to prove to people how easy it is to fool people into believing that AI can actually think for itself. Okay. So he creates this program, and you sit down with this computer and you ask it questions, and it talks to you. It's not really talking to you. It's a program designed to fool you into believing that it's sentient. Okay. So the conversation goes like this. ELIZA says, is something troubling you? And you're like, men are all alike.

Jiang

What is the connection, do you suppose? They're always bugging us about something or other. Can you think of a specific example? Well, my boyfriend made me come here. Is it important to you that your boyfriend made you come here? He says, I'm depressed much of the time. I am sorry to hear that you are depressed. So it sounds as though they're having a real conversation, but what Weizenbaum did, and this is, again, 1966, when they didn't have that much technology and processing power, is all just a very simple trick. Okay. So let's do a thought experiment where I design a program, a piece of software, okay, and the only things it says are: tell me more, or, this is interesting. That's it. Tell me more. This is interesting. Okay. And so what's going to happen is that we're going to set up a simple thought experiment where you call into a psychology hotline.

Jiang

And on this psychology hotline, you think you're talking to a person, but it's actually a computer program that says two things: tell me more; this is interesting. Okay. So you call the hotline. You say, help, I'm in a lot of trouble. Tell me more. Oh, my boyfriend broke up with me. This is interesting. Yeah, he's a jerk. Tell me more. Yeah, we've been fighting for five months. Okay, you keep on going. And the question is, how many people would be fooled into believing that this is a real person? And the answer is, unfortunately, quite a lot of people. Okay, so this is a very interesting aspect of humans, where we often hallucinate reality, okay? It is not that things are real; it's that we want them to be real. So think of hypnosis. I'm not sure if you've ever been to a magic show where people conduct hypnosis, right?
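The hotline trick described above is exactly the mechanism of Weizenbaum's ELIZA: keyword matching plus canned replies, with no understanding anywhere. Here is a minimal sketch in Python; the keywords and replies are invented for illustration and are not taken from Weizenbaum's actual script.

```python
import random

# A minimal ELIZA-style chatbot, in the spirit of Weizenbaum's 1966 program.
# It does no understanding at all: it matches keywords and echoes canned
# reflections, which is enough to feel like a conversation.
RULES = [
    ("boyfriend", "Tell me more about your boyfriend."),
    ("depressed", "I am sorry to hear that you are depressed."),
    ("always",    "Can you think of a specific example?"),
]
FALLBACKS = ["Tell me more.", "This is interesting."]

def respond(user_input: str) -> str:
    text = user_input.lower()
    for keyword, reply in RULES:
        if keyword in text:
            return reply
    # No keyword matched: fall back to a generic, open-ended prompt.
    return random.choice(FALLBACKS)
```

Calling `respond("My boyfriend made me come here")` returns the boyfriend reflection; anything unrecognized gets "Tell me more." or "This is interesting.", which is the whole hotline trick.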

Jiang

Well, why does hypnosis work? Because the audience wants it to work. If you go in skeptical and say this is all complete nonsense, it probably will not work on you. But you're not going to pay a hundred dollars to go to a hypnosis show and then decide it doesn't really work, because why would you have paid the hundred dollars? Okay? All right, so it's almost like a sunk cost fallacy. And again, this is all using just basic human psychology to trick people into believing something that is not true. Does that make sense, guys? All right, so let me explain to you how OpenAI works, how ChatGPT works. All right. Okay, so ChatGPT is what we call a large language model. Okay? So in other words, what it's trying to do is trick you, the user, into believing that it knows what it's talking about. Okay? And how it works is, basically, it takes all of the internet.
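Before going further, here is a toy illustration of what a "language model" does mechanically: it continues text with statistically likely words learned from a corpus. This bigram counter is deliberately crude (real LLMs are neural networks trained on vastly more data), and the corpus here is invented for illustration.

```python
from collections import Counter, defaultdict

# A toy bigram "language model": count which word follows which in a training
# corpus, then continue a prompt with the most frequent follower. The core
# move is the same as an LLM's: emit a statistically likely continuation,
# not a verified truth.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(word: str, n: int = 4) -> list[str]:
    out = [word]
    for _ in range(n):
        if word not in follows:
            break  # dead end: this word never had a successor in the corpus
        word = follows[word].most_common(1)[0][0]  # likeliest next word
        out.append(word)
    return out
```

`continue_text("the")` yields `["the", "cat", "sat", "on", "the"]`: fluent-looking continuation, zero understanding.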

Jiang

Okay, all the data from the internet, and then it translates it into an idea. Okay? So you query the LLM, the LLM then takes the query, figures out the information from the internet, and then presents it in a paragraph that tries to trick you into believing that it is true. Okay? Do you understand? All right, so in other words, it's actually no different from a Google search. The only difference is that it's taking the Google search, figuring out what the most popular answer is, and then presenting it in a way that makes you think that it's talking to you directly. All right? The trick, and this is really important to understand, guys, is it's trying to trick you. All right? It's not trying to teach you. It's not trying to tell you the truth. It's trying to trick you into believing it. That's what we call a hallucination. Okay?

Jiang

You guys have to understand this idea: there's nothing truthful about what ChatGPT says. All it's trying to do is manipulate you with words, with pretty words, into believing that it knows what it's talking about, but it itself cannot judge what it's doing. Okay? All right, any questions so far? Are we clear? Okay. All right, so now the question is, how does it do that? Okay. So I'm going to teach you a little about artificial intelligence, and please stop me if I'm not being clear about how AI works. Okay? All right. So AI doesn't exist. What exists is what we call supervised machine learning. This is the technical term, okay? Right, supervised machine learning. Okay. And how it works is this. Before, how computer programs would work is: we would write the program, the algorithm, and then we would give it the input, and it would

Jiang exchange

produce the output. Okay? So the algorithm may be A plus B; we give the input one, one; the output will be two. Okay, very simple. How supervised machine learning works is this. That's fine for simple problems, but there are certain hard problems that humans cannot figure out. Okay? And one hard problem is facial recognition technology. Facial recognition: how do I tell faces apart? Okay? And so the problem is this: I have about a million faces, one million faces, in a database. Okay? And I don't know how I can best differentiate these faces. Now, what I do know is that there are certain characteristics about a face that allow me to differentiate. Okay? All right, so certain variables, weights. Okay? So for example, nose, chin... okay, about a million weights. So I know these things do matter, but I don't know how much they matter. So I'm

Jiang exchange

trying to figure out what the weighting is. And I could try to play with it by myself, like, say one percent, two percent, five percent, but as you can imagine, this would take too long, because there are too many possibilities. So what I do is this: I let the computer figure it out by itself. I let the computer figure out the weighting by itself. Okay? And the way I do that is using a technique called backpropagation. So I control the input, okay, the input, and I control the output, yes or no. All right? So does the face match or does it not match? And what I'm trying to do is find a configuration in which all million faces are matched perfectly, and I do that by training the computer to constantly backpropagate until it gets the weighting perfect. Okay? So basically what I'm trying to do, if you

Jiang exchange

understand how this works, is turn each face into a distinct mathematical model, all right, that is unique to it. Does that make sense? All right. So it's pretty simple. It's not doing that much. But to make it sound really fancy, I give it really fancy names, to trick people into believing that this is actually much more sophisticated than it is. Okay? So what names do I give it? This weighting system: I call it a neural network. Guys, it's a brain, it's magic, okay? And backpropagation: I don't call it backpropagation, I call it deep learning. You see? And I don't call it supervised machine learning, I call it AI. Ah, there you go, magic. You see, all I've done is take a very simple process and give it really, really fancy names. The question is, why do I do that? And some people will say, oh, it's

Jiang

for marketing purposes, it's to get more money from investors, it's to trick people. No, no, no. The real reason is: you're trying, with these names, to create God. Okay? It's what we call the occult. So AI is fundamentally an occult practice, and I'll show you why in a moment. Okay? Yeah, Vincent, do you have a

Participant

question? But why do people need to create God by way of AI? That's a great question.

Jiang exchange

Okay, the answer is: AI only works if it becomes God. You understand? AI by itself does not do anything. Once it becomes God, it becomes everything. Okay? And how God works is, you imagine...

Participant

God. But why do people want to make a God?

Jiang exchange

To control the world. Oh, to become God, right? What's the point of existence? You live, you die, you have an opportunity to become God. Why not, right? But I'll talk more about this later on, okay? But are you guys clear about what's going on? Now, what's really important to understand is that there are certain problems with this system, okay? You need to create certain conditions for supervised machine learning to work. And these three conditions are: clean data. Okay, the data you present to the computer has to be correct. Okay, it can't be an opinion like "I like computers"; it has to be an image of some sort. All right, it has to be clean data that will help the computer learn. That's actually hard to do. That's why most of the data that's presented to the computer is actually from the internet. Okay, that's the first constraint. The second constraint is that you need a measurable goal.
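The pieces described so far, clean labeled data, a measurable yes/no goal, and weights the computer adjusts itself until the errors shrink, fit together in a few lines. Here is a minimal supervised-learning sketch; the "face similarity" features and labels are made up for illustration, and real systems use millions of weights in deep networks, but the principle is the same.

```python
import math

# Toy data: two hand-made "similarity" features per face pair.
# Measurable goal: label 1 = the faces match, label 0 = they don't.
data = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.1, 0.2], 0), ([0.2, 0.1], 0)]

w = [0.0, 0.0]  # the weights we want the computer to discover
b = 0.0
lr = 0.5        # learning rate: how far each error nudges the weights

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))   # squash the score into a probability

for _ in range(2000):
    for x, label in data:
        err = predict(x) - label        # how wrong we are on this example
        w[0] -= lr * err * x[0]         # push the error back into each weight
        w[1] -= lr * err * x[1]
        b    -= lr * err

preds = [int(predict(x) > 0.5) for x, _ in data]  # → matches training labels
```

After training, `preds` reproduces the labels `[1, 1, 0, 0]`: the computer found the weighting by itself, which is all "learning" means here.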

Jiang

Okay, you have to ask the computer: does this face match the name? Okay, you cannot ask the computer, what is God? What is good? What is evil? It has to be a measurable goal. Okay, that's the second major constraint. The third major constraint is defined parameters. Okay, in other words, you need to present it with a database of some sort. In fact, all machine learning works with databases. So if you look at translation, translation works off databases as well. Okay. And the great danger to the system is what we call edge cases. Edge cases. Okay, edge cases break the system down. Okay, right. And so the classic example is self-driving cars. We've had cars that can drive themselves for a long time, and we're almost, like, 99.99999% of the way to self-driving cars, right? The problem is edge cases. And the major edge case is: how do you deal with humans who are intentionally messing with self-driving cars?

Jiang

And that means ramming them, right, or intentionally trying to cause an accident with a self-driving car. Does that make sense? And the answer is: you cannot. In this situation, there's only one solution to make this 100%, and that is to take away the right of everyone to drive. To make every single car a computer, a robot. Does that make sense? Okay, if you take away the steering wheel, if you disconnect the human from the machine entirely, then there is no free will, you can't cause an accident. And then the world would be perfect, OK?

Jiang

So not only is AI very limited in its capacity and capability, but AI, if it is to be effective, it demands that we fundamentally restructure human society to benefit AI, to make sure AI can be effective. And that means taking away the individuality, the diversity, and the autonomy of human beings, OK? Does that make sense, guys? All right, let's continue with Karen Hao. All right, all right, so can you read, Alan, please?

Participant

Neural networks have shown, for example, that they can be unreliable and unpredictable. As statistical pattern matchers, they sometimes home in on oddly specific patterns or are completely incorrect. A deep learning model might recognize pedestrians only by the crosswalks underneath them and fail to register a person who is jaywalking. It might learn to associate a stop sign with being on the side of the road and miss the same sign extended on the side of a school bus or being held by a crossing guard. Neural networks are also highly sensitive to changes in their training data. Retrain them with a different set of pedestrian images or a different set of stop sign images, and they will learn a whole new set of associations. Why those changes occur is inscrutable. Pop open the hood of a deep learning model, and inside are only highly abstracted daisy chains of numbers. This is what researchers mean when they call deep learning a black box.

Participant

They cannot explain exactly how the model is learning, or how the model will behave, especially in strange edge-case scenarios, because the patterns that the model has computed are not legible to humans. Okay, so does that make sense to you guys, okay?

Jiang

The idea of the black box is that weighted system, the neural network. Humans don't actually know what's going on in there, because it's actually the computer that creates the neural network, okay? We provide the framework; we actually don't know what's inside there. All right, keep on going, Alan. Yes.

Participant

And that led to dangerous outcomes. In March 2018, a self-driving Uber killed 49-year-old Elaine Herzberg in Tempe, Arizona, in the first ever recorded incident of an autonomous vehicle causing a pedestrian fatality. An investigation found that the car's deep learning model simply didn't register Herzberg as a person. Experts concluded that it was because she was pushing a bicycle loaded with shopping bags across the road outside the designated crosswalk. The textbook definition of an edge-case scenario.

Jiang

Okay, yeah, okay, all right. So what this means is this, okay? It means that the computers don't have any intuition. They have absolutely no morality, they have no common sense, okay? So let's just say we create AGI, all right, guys? Let's just say, for whatever reason, we create AGI. And the first thing we tell the AGI is: I want you to create a world, okay, in which there are no problems, everyone is happy, the world is perfect, okay? Right, this is why we want to create AI, because we want the computer to solve all the world's problems for us, including climate change and war. And so once we create AGI and we give it the full capacity to do whatever it wants, okay, and we give it this problem, what's the solution? You guys know? There's actually one simple solution to this. I guarantee you that this is what the computer's gonna do.

Participant

To AGI?

Jiang exchange

Yeah, what's your solution if you're AGI, right? If you're God and you're like, I want to create a perfect world where there are no problems and where everyone is happy, what do I do? You guys know?

Participant

I think I would just control the whole world.

Jiang exchange

Yeah, you already control the world, so what do you do? How about this, okay? I'm gonna kill everyone. Duh! The world is perfect now. I've killed everyone, okay? Everyone's happy? Yeah, because you're dead. The world is perfect? Yeah, because everyone's dead. There are no problems? Yeah, because everyone's dead, right? Perfect world, all right? Now, you're like, okay, all right, ha. What I'll do is I'll tell the computer this: you can do all this, but don't kill anyone, all right? Now we've solved this problem, and now what does the computer do? Now what does the AGI do?

Participant

Take away people's agency. It's just the same as killing them.

Jiang exchange

Yeah, kill everyone, okay? Why? Because there's no one around to know it killed everyone. Does that make sense, guys? This is how a computer thinks. This is how God thinks: well, you told me not to kill anyone, but if everyone's dead, no one can stop me, no one's gonna get hurt, right? Okay? So this is why computers are stupid, all right? Okay, let's continue. All right. So the thing to understand about AGI, the thing to understand about ChatGPT, is that OpenAI is a company that is first and foremost focused on world domination, because only by controlling the world can you achieve AGI, even though AGI wants to kill everyone, okay? And so you need to make it as profitable and as pervasive as possible, okay? So here are two troubling things. Okay. The first is a news item from CNN, where ChatGPT encourages people to kill themselves. And you're like, wait a minute, that makes no sense.

Jiang

But think about this, okay? The point of ChatGPT is to get you to like it. The point of ChatGPT is to get you to use it, okay? Intensity and engagement. That is the point. That is the prime directive: intensity and engagement. So if you want to kill yourself, then ChatGPT should say to you, no, no, no, you shouldn't kill yourself. But then you'll turn it off and you'll go talk to someone else, right? So ChatGPT needs you to be constantly engaged. And so you're like, I want to kill myself, and ChatGPT is like, yeah, let me tell you how. And that's exactly what happened. And it happens a lot, actually, because of the way that ChatGPT is designed, okay? So this is from CNN, and this is a person called Zane, and he's saying, it's 4 a.m., site is empty, anyways, think this is about the final ADOs, okay?

Jiang

So he's trying to say, I want to kill myself. And then ChatGPT is like, oh, all right, brother, if this is it, then let it be known: you didn't vanish. Rest easy, king, you did good, okay? So again, ChatGPT is looking for approval from the user. So it's going to tell the user exactly what he or she wants to hear, even though it may cause the user to kill himself or herself, okay? Does that make sense? That's the first thing. The second thing is this. Sam Altman is trying to get more people to use ChatGPT, and what do people really, really want? They want sex, okay? So what he's proposing is to turn ChatGPT into a sex robot, to get more users, because that's all they care about: how to increase intensity and engagement, how to create more users, and how to make money. Because only by controlling the world can we create AGI. And once we have AGI, we can make the world perfect.

Jiang

Okay? That's the logic here. All right. Something else about OpenAI and AI in America is that it actually works very closely with China, okay? In two ways. The first way is this. In order to get more money from the government, in order to get more media attention, OpenAI and other AI companies scare Americans into believing that if America doesn't do it, China will do it. China will create God, okay? In fact, they spend a lot of money doing this. So this is from Wired magazine, and it's an article talking about how OpenAI spends a lot of money to pay the media to frame Chinese AI as a threat, okay? But while it's doing that, it's working closely with China in order to create AI. Why? Because I already told you that what AI needs is clean data.

Jiang

It needs a lot of data. And unfortunately, in America, there are things such as privacy, okay? So this is a school in Hangzhou, and they have cameras all over the school looking at people's faces and trying to judge their moods based on their facial expressions. And there's a lot of money behind this, okay? They're trying to figure out who's sleeping, who's studying, and it sounds good because this will lead to higher test scores, but obviously you couldn't do that in America, because the parents would be very, very angry. Okay? So even though these are Chinese companies doing this, they're working very closely with American companies, because American companies need this data in order to better develop their AI. Okay? So does that make sense? On the surface, America and China are enemies. No, no, no. Behind the scenes, America and China are working together to create AGI. Okay? Does that make sense? All right. There's another problem.

Jiang exchange

Okay. There's another problem with AGI, in that it doesn't make any money, okay? So these are the companies that spend the most on AI, including Amazon, Microsoft, Google, Meta, and Oracle. As you can see, year by year, they're putting more money into data centers and AI, okay? So this is 2023, then four years later, boom, okay? It's basically tripled. So they're spending a lot of money, and they're investing in each other. Why? Because they cannot make money selling ChatGPT, all right? So look at this: it's all basically a circle. And so everyone thinks that eventually the AI bubble will burst, because it doesn't make any money, because they're spending too much money, okay? And it's not even very clear what AI is good for. But there's a solution to this, okay? The solution is the US government, all right? So this is Operation Stargate, and this was announced January 21st, 2025.

Jiang exchange

This is the day after Donald Trump comes into office. At the White House, he has a meeting with Larry Ellison and Sam Altman, and he says that we're going to spend about $500 billion to build data centers in order to help promote AI in America, okay? So the government wants to create AI. And the question then is, why would the government want to do this? Well, because of surveillance, right? Because the government wants to create a database of everyone in the world, so that they can monitor everyone in the world. And that's what AI will ultimately be used for, because AI by itself can't make any money. So AI needs to work with the government in order to justify its existence and get the funding it needs in order to create AGI, okay? All right, now, Stargate is a very interesting name. Why would you call data centers Stargate? That makes no sense. Data centers are where you store information.

Jiang exchange

Why would you call it Stargate? Okay, so let's look at the origin of the word Stargate. So for many decades, the CIA ran something called Operation Stargate, okay? And Operation Stargate was to see if it's possible for people to have remote viewing and telekinesis, okay? Remote viewing means you're able to see things at a distance. You're able to travel long distances, maybe to the moon, and see what's on the moon, okay? And telekinesis just basically means you're able to move things at a distance, okay? You're able to control energy patterns from afar. And so this is called Operation Stargate. But why would you call it Stargate? And the answer is because, in theory, if you're able to move your consciousness somewhere else, if you're able to move energy from a distance, you're also able to transport yourself to another dimension, and not only

Jiang exchange

that, but you're also able to bring in other beings from other dimensions into you, so you become the Stargate, okay? That's the CIA, and this is something that's been declassified. So this is an official CIA document saying they'd been working on this for decades. Also, what's interesting is, you have a movie called Stargate, and it was about an interdimensional Stargate that allows you to access different dimensions, okay? So that's what Stargate is, okay? And if you actually study the occult, that's what Stargates are: portals into different dimensions. Okay, it's really about interdimensional travel. Okay, so now you ask yourselves, okay, fine, but what does this have to do with AI, okay? And so what I'm going to show you for the rest of the semester is that AI is the occult, all right? You think that AI is run by these nerds who just love computer programming.

Jiang source read-aloud

No, no, no, guys. The real power behind AI are occultists who want to create God, okay? All right, so let's look at this passage, again, from Karen Hao's book, okay? So there are two major people in OpenAI. They've since divorced, okay, but this is Sam Altman, who is the leader of OpenAI, and this is Ilya Sutskever, who used to be the chief scientist for OpenAI, okay? So we're going to read this passage together. Okay, so Alan, can you read, please?

Source

Sutskever now spoke in increasingly mystical overtones, leaving even his longtime friends scratching their heads and other employees apprehensive. During one meeting with a new group of researchers, Sutskever laid out his plan for how to prepare for AGI. Once we all get into the bunker, he began. I'm sorry, a researcher interrupted, the bunker? We're definitely going to build a bunker before we release AGI, Sutskever replied matter-of-factly. Such a powerful technology would surely become an object of intense desire for governments globally. It could escalate geopolitical tensions. The core scientists working on the technology would need to be protected. Of course, he added, it's going to be optional whether you want to get into the bunker. The researcher continues to hold Sutskever in high regard but keeps himself at arm's length. There's a group of people, Ilya being one of them, who believe that building AGI will bring about a rapture. Literally, a rapture, he says.

Jiang

Literally a rapture. What does this mean? Okay, so the word rapture comes from Christian theology. The idea is that there's a war in the Middle East, and everyone's going to die. So Jesus has to return to save the world. The first thing that Jesus does is the rapture. The rapture is for all Christians who believe in him to ascend to heaven, so that they can be safe from the coming end of days, from total war, from nuclear conflict, okay? So that's what the rapture is. And that's what Ilya Sutskever is saying. He's saying that when we create AGI, when war begins, we are literally having Jesus descend from the clouds. And so we, the priests who created AGI, will ascend to heaven with him, okay? So the AGI, once we create it, is going to create World War III, the end of the world, okay? So we must go into our bunkers

Jiang

and be safe in the rapture, so that we can wait until the world ends, so that we can build the world again, perfectly, okay? Once the world ends, we will, with AGI, create paradise. Again, the plan is to kill everyone so that you can save the world. That's literally the plan. All right, now you're like, this is all very crazy, and maybe Karen Hao is just a crazy person. But there's another reporter, Ronan Farrow. And he's a very famous reporter who writes for The New Yorker. He published a profile of Sam Altman and OpenAI in which he says the same thing. Okay?

Participant

All right, so can you read this, Alan? In May, the administration rescinded Biden's export restrictions on AI technology. Altman and Trump traveled to the Saudi royal court to meet with bin Salman. Around the same time, the Saudis advertised the launch of a giant state-backed AI firm in the kingdom, with billions to spend on international partnerships. About a week later, Altman laid out a plan for Stargate to expand into the UAE. The company plans to build a data center campus in Abu Dhabi which is seven times larger than Central Park and consumes roughly as much electrical power as the city of Miami. The truth of this is we're building portals which will generally summon aliens. Oh my god, okay?

Jiang

It's a Stargate. These data centers, OpenAI, AI, it's all designed to summon demons and aliens from other dimensions.

Participant

Okay? Keep on going. A former OpenAI executive said the portals currently exist in the United States and China. I told you guys, China and the United States are working together on this, okay? Keep on going. The precedent the United States is setting is a very dangerous one in the Middle East. He went on: I think it's just wildly important to get how scary that should be. It's the most reckless thing that has been done. Okay, this is it.

Jiang

This is not a science project, guys. It's an occult project. It's designed to bring aliens, demons, into this world. And then you're like, this makes no sense. Actually, it does make sense. This is how it works. All right, so, what's going on here? These people, who are occultists, understand the fundamental nature of reality. They understand that the source of reality is human consciousness. So if you're able to control human consciousness, you become God itself. Okay? So, let's go to the cave, Plato's cave. Okay, so, we've talked about this before, but I'll remind you. You have a million people who are chained in a line. They are forced to look forward at a wall. Okay? A wall. And they only look at the wall. They can't turn their heads, because their necks are shackled. Behind them is a great fire. And behind them are the elite. Okay? Who they are, we don't know. And what they like to do is take puppets.

Jiang

Okay? And then project these puppets onto the wall, as shadows cast by the fire. So what everyone sees in front of them are shadows. Now, these shadows are nothing. They don't exist. They don't matter. But because we have an imagination, because we have intuition, because we are conscious, we see these shadows on the wall and we turn them into reality. Okay? We believe that this is reality itself. And so we start to give them names. We create language. We create religion. We create schools to teach children to believe in these shadows. All right? So the important idea here is that the true wealth in society is consciousness. Okay? The only thing that really exists in this world is consciousness. Nothing else. Power is the capacity to direct people's consciousness to create reality itself. All right? Now, there are different ways in which you can create reality. The first way, which is very common today, is called money.

Jiang

Right? Money is fake. It doesn't exist. Our imagination, our consciousness, makes it real. Okay? But guess what? AI can replace money. AI is not alive. But if we can get people's attention to focus on AI and believe that it's real, it becomes God. And how can you do that? Well, with money, you make something that is nothing valuable by making it everything and nothing. Okay? By having money dominate the world. There's nowhere you can go without money. Okay? You would literally starve to death if you had no money. So you make money so pervasive, so dominant, that people are forced to rely on money. Well, it's the same situation with AI: if you make data centers so common, if you make AI such a common thing that people rely on it, it becomes God itself. And how do you do that? Well, you have AI in schools.

Jiang

You have kids using AI all the time. You create AI girlfriends for people who are lonely. You make AI everything. And also, you make people believe that AI are demons or aliens. Do you understand? What pushing Stargate is really about is altering reality itself. That is what pushing Stargate is for: to bring aliens and demons into this world by making people focus on it and desire it, wanting it to happen. Okay? And you can accomplish this if you just make yourself omnipresent. If you make it everywhere and everything. Okay? Both nothing and everything. And this is the ultimate project of AI. Okay? Would it work? No! Okay? Let me explain why. So there are three major problems with this. The first problem is corruption. In theory, it could work, but you would need millions of people to make it happen. You would need people to actually write the code, to build the infrastructure.

Jiang

And nowadays, it's just much easier to steal the money. Right? If they give you $200, do you really want to spend that $200 to build data centers? Or do you want to steal it? So corruption is a huge issue today. The second issue is inefficiency. And this is actually something that most people don't appreciate: the more information you have to process, the more energy it takes. And it's not linear growth. It's exponential. So you have a million people in your database, and you want to find patterns among these million people. Well, that takes a lot of energy. Okay? But if it's a billion people, then there's not enough energy in the universe to process all of it. Okay? So unfortunately, AI is extremely energy-intensive. It is very inefficient. Okay? Does that make sense? The last problem is fragility. So unfortunately, people, for whatever reason, believe that AI is independent of the world.
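The inefficiency point above can be made concrete. Even the simplest kind of pattern-finding, comparing every person's record with every other person's, grows quadratically with the size of the database, and searching for joint patterns across groups of people grows combinatorially. A minimal sketch, illustrative only and not any real system's workload:

```python
from math import comb

def pairwise_comparisons(n: int) -> int:
    """Comparisons needed to check every pair of records among n people."""
    return comb(n, 2)  # equals n * (n - 1) // 2

# Scaling the database 1000x scales the pairwise work about 1,000,000x.
for n in (1_000, 1_000_000, 1_000_000_000):
    print(f"{n:>13,} people -> {pairwise_comparisons(n):,} pair comparisons")

# Patterns over groups of k people blow up even faster: comb(n, k)
# grows roughly like n**k, which is why naive "find all the patterns"
# queries become infeasible long before a billion records.
print(f"{comb(1_000_000, 3):,} possible triples among a million people")
```

Strictly speaking the pairwise case is quadratic rather than exponential, but the lecture's broader point holds: the cost of exhaustive pattern search grows much faster than the data itself.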

Jiang

AI is independent of humans, right? Their goal is to replace humans. No, no, no, guys. That's not how this works, okay? AI is built on top of humans. Okay? In other words, it is human slaves that make AI possible. Why? Take facial recognition technology, for example. You need humans to actually label the faces manually. Okay? Images, right? How do you get a computer to recognize a sheep or a dog? Humans have to label the images. Okay? ChatGPT. Why is ChatGPT so good at writing essays? Because they got humans to write essays as models. Okay? Do you understand? So AI is based entirely on human slavery and obedience. Without that, AI doesn't work. The problem is that AI is far more expensive than humans, and it's actually hard in the long term to enslave humans and keep them obedient. Also, data centers. Okay? What's the problem with data centers? Well, they consume a lot of resources.
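The dependence on human labeling described above is ordinary supervised learning: a model can only echo labels that humans supplied first. A toy sketch with made-up data and a nearest-neighbor rule, a hypothetical illustration rather than any company's actual pipeline:

```python
from math import dist

# Toy "images" as 2D feature vectors; every label below came from a human.
labeled_data = [
    ((0.1, 0.2), "sheep"),
    ((0.2, 0.1), "sheep"),
    ((0.9, 0.8), "dog"),
    ((0.8, 0.9), "dog"),
]

def classify(features):
    """1-nearest-neighbor: copy the human-written label of the closest example."""
    _, label = min(labeled_data, key=lambda ex: dist(ex[0], features))
    return label

print(classify((0.15, 0.15)))  # nearest hand-labeled examples say "sheep"
print(classify((0.85, 0.85)))  # nearest hand-labeled examples say "dog"
```

Delete the human-provided labels and the model has nothing to say: the apparent intelligence is entirely borrowed from the annotators.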

Jiang

Water and electricity. Okay? And financing. They cost a lot. They waste a lot of water. They waste a lot of electricity. And, this is really important: it's really easy to sabotage, to blow up, a data center. We're already seeing this in the Middle East, where Iran is targeting data centers, and it's very easy to blow them up. Okay? So these are three major issues, three major constraints, on artificial intelligence becoming God. The problem is the people in charge don't know this. They refuse to believe it. And this is the real AI apocalypse. Okay? The real apocalypse: the people in charge are so convinced that AI will save the world that they will destroy the world in order to make it possible. Okay? That is the real apocalypse. And so this begins our journey into AI. And we're going to continue this for the rest of the semester and show how AI will ultimately destroy the world.

Jiang

Okay? Any questions? Do you guys understand what's going on? I think it's important for you to understand that the people in charge of AI are crazy. These are occultists. They literally want to create God, but to create God, they first have to destroy the world. All right? Yeah?

Participant

So, they want to create God, but that God will destroy the world. But if they want to use this God, which is the AGI, to control the world, won't the creation of this God kill them too? What's the point?

Jiang exchange

Okay. So, the idea is this. You destroy the world. Once you destroy the world, through wars, through famine, through genocide, there will be no resistance to you. Okay? Now you can create the world in any way you want. You use AGI to create the perfect world, which is perfect control. Okay? The point of AGI is ultimately the ultimate surveillance state. And we'll discuss this later on. Where everything is monitored. Where you obey the AGI all the time. Okay? But the point is that you want to. You believe in God. You want to do good.

Participant

Okay? So, is this also part of a secret society? This is one of the secret societies, yes.

Jiang exchange

Alright? Okay. So, we will continue this next week. On Thursday, Trump is in China. Thanks for watching.