Topic brief
A Jiang Lens evidence brief for this topic, built from source tags, transcript matches, and linked source refs.
Eliza Effect
Jiang claims chatbot interfaces rely on interaction design and language tricks that make users project understanding onto systems that simulate answers.
Key Notes
Timestamped Evidence
"Okay. All right. So what I'm going to do now is really quickly, all right, explain to you what AI is and to understand..."
"And in a psychology hotline, you think you're talking to a person, but it's actually a computer program that says two things. Tell me..."
"Well, why does it notice work? Because the audience wants it to work. If you go in skeptical and says this is all complete..."
Relevant Lectures And Readings
The lecture starts by warning against overconfident certainty, then shifts from literary method to a hard model of AI: today's systems are pattern-fitters optimized for compliance, so power becomes control over what counts as...
Related Topics
How To Use And Cite This Page
This topic page is a discovery surface. For generated synthesis, cite the human-readable source reading or lens page. For claims spoken by Jiang, cite the transcript segment, source reference, and YouTube timestamp. Raw-text and Markdown mirrors are fallback surfaces for tools that cannot read this HTML page.