We live in a world where our eyes and ears are almost constantly bombarded with colors, shapes, textures, and noises of all types. How exactly do our brains translate these sights and sounds into meaningful images and words? At the University of California, Berkeley, two groups of scientists are finding tantalizing new answers to this question. Their remarkable successes at reconstructing what our brains see and hear offer hope for future life-changing technologies.

This video is based on:

Reconstructing Visual Experiences from Brain Activity Evoked by Natural Movies, a paper published by Shinji Nishimoto, Jack Gallant, and colleagues in the journal Current Biology.

Reconstructing Speech from Human Auditory Cortex, a paper published by Brian Pasley, Robert Knight and colleagues in the online journal PLoS Biology.

Influence of Context and Behavior on Stimulus Reconstruction From Neural Activity in Primary Auditory Cortex, a paper published by Shihab A. Shamma and colleagues in the Journal of Neurophysiology.

 

