MIT Researchers Discover That Deep Neural Networks Don’t See the World the Way We Do

MIT neuroscientists have found that deep neural networks, while adept at identifying varied presentations of images and sounds, often mistakenly recognize nonsensical stimuli as familiar objects or words. This indicates that these models develop unique, idiosyncratic "invariances" unlike those of human perception. The study also showed that adversarial training could modestly align the models' recognition patterns with human judgments, suggesting a new approach to evaluating and improving computational models of sensory perception.

Images that humans perceive as completely unrelated can be classified as the same…
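The adversarial training mentioned above augments ordinary training by perturbing each input in the direction that most increases the loss, then updating the model on the perturbed input. As a hedged illustration only (not the study's actual setup), here is a minimal FGSM-style sketch on a toy 1-D logistic-regression problem; the data, step size `lr`, and perturbation budget `eps` are all invented for the example.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy binary classification data: one feature, two well-separated classes.
data = [(random.gauss(-2, 1), 0) for _ in range(50)] + \
       [(random.gauss(2, 1), 1) for _ in range(50)]

w, b = 0.0, 0.0   # model parameters
lr = 0.1          # learning rate (assumed)
eps = 0.5         # adversarial perturbation budget (assumed)

def grads(x, y, w, b):
    """Cross-entropy gradients w.r.t. w, b, and the input x."""
    p = sigmoid(w * x + b)
    return (p - y) * x, (p - y), (p - y) * w

for epoch in range(100):
    for x, y in data:
        # FGSM step: move the input in the sign of dLoss/dx,
        # i.e. the direction that increases the loss the most.
        _, _, gx = grads(x, y, w, b)
        x_adv = x + eps * (1 if gx > 0 else -1)
        # Update the model on the adversarially perturbed input.
        gw, gb, _ = grads(x_adv, y, w, b)
        w -= lr * gw
        b -= lr * gb

# Accuracy on the clean (unperturbed) data after adversarial training.
acc = sum((sigmoid(w * x + b) > 0.5) == (y == 1) for x, y in data) / len(data)
print(acc)
```

The key difference from standard training is that the gradient with respect to the *input* is used to craft a worst-case perturbation before the parameter update, which tends to make the learned decision boundary less sensitive to small, human-imperceptible changes.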



News Source: scitechdaily.com
