Topic brief
A Jiang Lens evidence brief for this topic, built from source tags, transcript matches, and linked source refs.
black box
Model whose internal weights and feature interactions are not directly interpretable in human terms.
Key Notes
He frames supervised learning as query-response optimization: technically simple input-to-output fitting that becomes opaque at high complexity, which undermines any semantic interpretation of the model's outputs.
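A minimal sketch of that input-to-output framing, not drawn from Jiang's own examples: a tiny network is fitted to toy query-response pairs, and its learned weights are fully inspectable as numbers yet carry no human-readable meaning. The XOR task, layer sizes, and learning rate below are illustrative assumptions.

```python
# Sketch (assumed toy task, not from the source): fit a tiny network to
# query->response pairs and print its weights -- every number is visible,
# none of them is interpretable in human terms.
import numpy as np

rng = np.random.default_rng(0)

# Toy "queries" (two binary inputs) and "responses" (their XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer; the weights start as meaningless random numbers.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: input -> hidden -> output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Squared-error loss gradients (plain backpropagation).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

# The fit succeeds, but the trained weights are just a grid of floats:
# nothing in them "means" XOR, or anything else, to a human reader.
print("predictions:", sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).round(2).ravel())
print("W1:\n", W1.round(2))
```

Even at this scale the point holds: the optimization only guarantees that outputs match targets, not that any individual weight admits a semantic reading.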
Timestamped Evidence
"They cannot explain exactly how they're learning, or how the model will behave, especially in strange edge case scenarios, because the patterns that the..."
"The idea of the black box is that weighted system, the neural network. Humans don't actually know what's going on in there, because it's..."
"have you guys have to understand this idea there's nothing truthful about what um chat to be says all it's trying to do is..."
"trying to figure out what the weighting is and I could try to play play by myself like say one percent two percent five..."
"understand um how this works is I'm trying to turn each face into a distinct mathematical model all right that is unique to it..."
"...This is what researchers mean when they call deep learning a black box."
Relevant Lectures And Readings
The lecture starts by warning against overconfident certainty, then pivots from literary method to a hard model of AI: today’s systems are pattern-fitters optimized for compliance, so power becomes control over what counts as...
Related Topics
How To Use And Cite This Page
This topic page is a discovery surface. For generated synthesis, cite the human-readable source reading or lens page. For Jiang-spoken claims, cite the transcript segment, source ref, and YouTube timestamp. Raw text and Markdown mirrors are fallback surfaces for tools that cannot read this HTML page.