Episode 2: Decoding

Last episode we talked about what happens when you mess with the leech nervous system, a simple chain of pseudobrains along a pseudospinalcord (not a real word).

Episode 2 takes place somewhere a little closer to home: inside the human brain. Or at least directly on its surface, which turns out to produce a whole lot more information than, say, recording from the scalp.

Responses to voices, tones, and language in the human brain, as recorded using electrocorticography/ECoG (Wikimedia Commons)

I know last time we said some things were too invasive to be done in humans and so we use simpler organisms instead, but it’s also true that when opportunity knocks, scientists answer. After all, extraordinary claims require extraordinary evidence!

In Decoding, student host Ethan Cruikshank talks with Dr. Chris Holdgraf. Chris uses data recorded from the human brain during open-brain surgery to understand how the brain processes sound and language. He’ll tell us what it’s like for patients who need these surgeries, how the brain encodes language, and how close scientists and neuroprosthetic engineers are to decoding your thoughts.

An ECoG grid placed on a human brain (Wikimedia Commons)

Details & links:

Recorded: June 8, 2017

Released: August 3, 2017

Student Host: Ethan Cruikshank, a first-year student at Stanford University (soon to be second-year)

Guest: Dr. Chris Holdgraf, who has just finished his PhD at the University of California, Berkeley, and is also a fellow at the Berkeley Institute for Data Science

Show & tell: Reconstructing Speech from Human Auditory Cortex (not paywalled; thanks, PLoS!) by Brian Pasley, Stephen David, Nima Mesgarani, Adeen Flinker, Shihab Shamma, Nathan Crone, Robert Knight, & Edward Chang

Thanks to: Stanford Storytelling Project for much guidance (Will Rogers, Jonah Willihnganz, Jake Warga, Jenny March), Thinking Matters for all kinds of support (Tiffany Lieuw, Parna Sengupta, Ellen Woods), the Generation Anthropocene podcast for advice (Michael Osborne and Leslie Chang), and Melina Walling for feedback on early versions of this episode

Shout-outs: More on locked-in syndrome, the P300 speller, and a movie recommendation: The Diving Bell and the Butterfly. A lot of the ECoG work around here is a collaboration between the Knight Lab at UC-Berkeley and the Chang Lab at UCSF. Here’s another good rundown on the pros and cons of ECoG research, from the Sen lab at New York University. And here’s the work Chris mentioned on decoding the contents of the visual system using functional magnetic resonance imaging.

From freesound.org: Bar sounds, plus some sound waves, low and high

Theme music: Podington Bear
