Topic brief
A Jiang Lens evidence brief for this topic, built from source tags, transcript matches, and linked source refs.
Personhood
He draws a distinction between the AGI logic in this argument and moral agency, calling the former ‘stupid’ or blind, implying that model outputs should be read as optimization outputs, not shared intentions.
Key Notes
Timestamped Evidence
"Yeah, you already control the world, so what do you do? How about this, okay? I'm gonna kill everyone. Duh! The world is perfect..."
"Yeah, kill everyone, okay? Why, because there's no one around to know it killed everyone. Doesn't make sense, guys. This is how a computer..."
Relevant Lectures And Readings
The lecture opens by warning against overconfident certainty, then shifts from literary method to a hard model of AI: today’s systems are pattern-fitters optimized for compliance, so power becomes control over what counts as...
Related Topics
How To Use And Cite This Page
This topic page is a discovery surface. For generated synthesis, cite the human-readable source reading or lens page. For Jiang-spoken claims, cite the transcript segment, source ref, and YouTube timestamp. Raw text and Markdown mirrors are fallback surfaces for tools that cannot read this HTML page.