Rare, off-distribution inputs that fall outside a system's learned examples and trigger failure modes despite high average accuracy.
Topic brief
A Jiang Lens evidence brief for this topic, built from source tags, transcript matches, and linked source refs.
edge case
Rare input distributions outside learned examples that trigger failure modes in learned systems.
Showing 21 evidence items
Key Notes
A scenario that can break the AI system because it falls outside the model's planned assumptions. An intentional or unplanned human action that the AI's model cannot prevent or account for.
He argues that self-driving cars cannot be fully solved because the system cannot plan for a human intentionally trying to crash into the car.
He says cars may have self-driving features but are not fully self-driving because they cannot prevent malicious or intentional human edge cases.
Answering Eric, he says the edge cases for psychohistory are great men who appear from outside normal forces and change history.
He says the Second Foundation solves psychohistory's edge-case problem by placing specialists behind the AI to observe history and correct the model.
Jiang speculates that historical edge cases or great men can be understood as telepaths who can read and control minds.
Timestamped Evidence
"...And the great danger to the system is what we call edge cases. Edge cases. Okay, edge cases, edge cases breaks the system down...."
"And that mean damn right or intentionally trying to cause an accident or self driving car. Does that make sense? And the answers you..."
"...the road outside the designated crosswalk. The textbook definition of an edge case scenario."
"...is, whatever I do, okay, I cannot solve something called the edge case. The edge case is, if the car is on the road,..."
"But in a situation where I'm the human being, and I don't like self -driving cars because I'm a taxi driver, and it's stealing..."
"think about actual applications of AI it's it's very limited you also look at self -driving cars now there are cars that have self..."
"structure it cannot create by itself okay so if I'm able to do all three things then I'm able to use AI to optimize..."
"...so Eric asked a great question. Will this psychohistory model encounter edge cases? And the answer is, that's the best question. Okay? That's a..."
"...out, you will always have a computer. You will always have edge cases. You will always have someone appear out of nowhere. And for..."
"...AI and who observe history and who correct the AI for edge cases. Okay? Does that make sense? So, Putin appears and is like,..."
"...the answer is that you can make the argument that these edge cases that Eric refers to, these great men of history, they're actually..."
"...make sense? Okay? But thanks for the question, Eric. Okay? So, edge cases are great men who appear now and then. And you cannot..."
Relevant Lectures And Readings
The lecture starts by warning against overconfident certainty, then pivots from literary method to a hard model of AI: today’s systems are pattern-fitters optimized for compliance, so power becomes control over what counts as...
The final class turns collapse into an assignment: build a democratic psychohistory that can model war, correct history, answer great-man edge cases, and still preserve the human heart that wants to love, create, learn,...
How To Use And Cite This Page
This topic page is a discovery surface. For generated synthesis, cite the human-readable source reading or lens page. For Jiang-spoken claims, cite the transcript segment, source ref, and YouTube timestamp. Raw text and Markdown mirrors are fallback surfaces for tools that cannot read this HTML page.