The Performance Artist of Memory.

On Wednesday and Thursday I visited Georgia Tech and Emory in Atlanta, and it was a trip full of happy surprises. I went to visit Steve Potter, who is best known for creating neural networks with living brain cells and giving them “bodies” to use and virtual worlds to explore.

It’s been powerfully argued by George Lakoff, Rodolfo Llinas, and others that you can’t have cognition without having a body; brains and minds come into existence because bodies have to move about in the world and cope with its exigencies. This line of thinking says, “Forget about trying to create thinking machines by creating algorithms and rules.” Instead, create networks of sufficient complexity and turn them loose in a world where they need to do things to get by, and intelligence will emerge.

As I sat down in Steve’s red Prius (with a “Scientists for Obama” bumper sticker next to a license plate saying “FAQ WAR”), he passed me an announcement that Karl Deisseroth was speaking the next morning at Emory. My jaw dropped. Deisseroth is a leading light in optogenetics, a topic I’ve been writing about in World Wide Mind. His home base is Stanford. To see him here was beyond good luck. It was karma.

So we went to see him. Optogenetics is about using light instead of electricity to make neurons fire. At first glance that sounds counterintuitive, like jump-starting a car by beaming a flashlight at the engine. As far as the body is concerned, light doesn’t seem to do much except maybe give us a sunburn.

But everything changes when you insert new genes into the cells to make them react to light. The genes make proteins (channelrhodopsin-2 and halorhodopsin) that migrate to the cell membranes. When channelrhodopsin is hit with blue light, it makes the cell fire; when halorhodopsin is hit with yellow light, it inhibits the cell from firing.

There are two advantages to that scheme. First, you can control which cells in the brain express the new genes by adding cell-specific promoters. So even though the new genes go into all the cells, they get expressed only in some of them, say Purkinje cells or glial cells.

That means you can decide, by designing the genes correctly, that only these cells will react to blue light and not those cells. That’s a huge leap in sophistication from electrodes, which make everything in the vicinity fire whether you want it to or not.

Second, inhibition is as important as excitation. Neurons are always getting input from other neurons and making up their little minds about whether to fire or not. The absence of an action potential is information too. Optogenetics lets you say to neurons, by shining yellow light at them, “Even though everyone is telling you to fire, don’t.” You can’t do that with electricity. Electricity only gives you an ON switch. Optogenetics gives you an ON switch and an OFF switch.
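To make the ON/OFF scheme concrete, here is a toy simulation. This is my own sketch, not anything from Deisseroth’s lab, and every parameter in it is invented. A leaky integrate-and-fire neuron gets steady synaptic drive; a channelrhodopsin-like current (blue light) makes it fire faster, a halorhodopsin-like current (yellow light) silences it even though the synaptic drive is still there, and a promoter flag decides whether the cell expresses the opsins at all:

```python
# A toy model (my own illustration, not anyone's published work) of
# two-channel optogenetic control of a leaky integrate-and-fire neuron.
# All parameter values are made up.

def simulate(expresses_opsins, blue_on, yellow_on, steps=200):
    """Count spikes in a neuron under steady synaptic drive plus light."""
    v_rest, v_thresh, v_reset = -70.0, -50.0, -70.0  # membrane voltages (mV)
    leak = 0.1                                       # leak rate per step
    i_synaptic = 2.2                                 # steady drive from other neurons
    # Light only matters if the cell's promoter let the opsin genes express.
    i_chr2 = 1.0 if (expresses_opsins and blue_on) else 0.0     # ChR2: depolarizing
    i_nphr = -1.0 if (expresses_opsins and yellow_on) else 0.0  # NpHR: hyperpolarizing
    v, spikes = v_rest, 0
    for _ in range(steps):
        v += -leak * (v - v_rest) + i_synaptic + i_chr2 + i_nphr
        if v >= v_thresh:   # action potential: fire and reset
            spikes += 1
            v = v_reset
    return spikes

print(simulate(True,  blue_on=True,  yellow_on=False))  # blue light: fires faster
print(simulate(True,  blue_on=False, yellow_on=True))   # yellow light: silenced
print(simulate(False, blue_on=True,  yellow_on=True))   # no opsins: light does nothing
```

The promoter flag is the targeting argument in miniature: the light floods the whole neighborhood, but only cells that express the opsins respond to it.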

How do you get light into a brain? For the moment, you do it the obvious way: with fiber-optic cables. People are thinking about how to get self-contained light sources into the brain, so that nothing has to pass through the skull and skin.

Optogenetics is opening up new horizons for basic science: understanding how neurons interact. Everyone’s excited. Deisseroth got a big crowd, and he explained the basics plus new and unpublished research on how it might help understand brain diseases such as Parkinson’s.

Its implications for the design of brain-machine interfaces are significant. It’s going to be quite a while before anyone tries it in humans, not least because to get it to work, you have to infect brain tissue with viruses. The viruses have had all the bad stuff taken out, and can’t self-replicate, but still and all. It’s not going to be an easy sell to the FDA.

Then I gave my own talk to Steve’s class. I did my basic Cochlear Implant 101 talk, explaining how it works, and aired some new ideas from my second book. It went well; you know a talk’s going well when class ends and nobody gets up.

Steve and I then visited Phil Kennedy of Neural Signals. Kennedy is working on brain-machine interfaces to let locked-in people communicate.

Locked-in syndrome is rare but quite horrible. It happens when a brain stem stroke prevents the brain from sending signals to the body. The patient remains fully conscious and fully aware of sensation, but is totally unable to move, eat, or speak. (This was the plight of Jean-Dominique Bauby, author of The Diving Bell and the Butterfly.) Kennedy’s goal is to detect and decode the brain’s activity while the patient is trying to generate phonemes, and to speak those phonemes aloud for him.

As we were talking, the door opened and a young man was wheeled in on a very large and elaborate wheelchair. I was stunned to see it was Erik Ramsey, who has been “locked in” for ten years following a car accident when he was 16.

At first it was difficult to get a sense of his presence; his gaze tended to wander, and he couldn’t turn his head to look at a person. The only thing he could do was lift his eyes up for “yes” and down for “no.” His eyesight had deteriorated, his father explained, because his tear ducts were often dry – which accounted, I thought, for why he didn’t seem to see me very well. But I explained my cochlear implant to him, taking care to hold the processor in his field of view, and when I asked if he understood, his eyes shot upward: yes. So he was in there, paying attention.

I watch as Erik is prepped by Phil Kennedy on the left and his parents on the right. Phil is attaching the coils to his head, and his mom is injecting medications into his stomach. (Photo by Steve Potter.)
[Photo: erik_ramsey_and_phil_kennedy.jpg]

Erik Ramsey has electrodes in his brain that report on neural activity, and Kennedy attached magnetic coils – similar in principle to cochlear implant headpieces – that pick up data from them. I had hoped to see him use them to set up a communications link to Erik, because I was dying to see the poor kid get a chance to say something. No such luck. They were having him think “yes” over and over again, looking for its neural traces, apparently without much luck that day.

Kennedy sees Erik three times a week, working on the code that can suss vowels and consonants out of the vast tangle of neural activity that goes on in the brain. Consonants appear to be harder than vowels; why, I don’t know.
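To give a feel for what “sussing out” means computationally, here is a toy sketch with entirely synthetic data. It is my own illustration, not Kennedy’s actual method, and real neural decoding is vastly messier: treat each attempted phoneme as a vector of firing rates across the recorded units, average training trials into a per-phoneme template, and classify new trials by nearest template.

```python
import numpy as np

# Toy phoneme decoder on synthetic data (my illustration, not Kennedy's
# pipeline). Each attempted phoneme is a vector of firing rates; we learn
# one average template per phoneme and classify by nearest template.

rng = np.random.default_rng(0)
phonemes = ["a", "i", "u"]   # hypothetical target vowels
n_units = 30                 # hypothetical number of recorded neural units

# Each phoneme gets a characteristic (made-up) firing-rate pattern;
# individual trials are that pattern plus noise.
true_patterns = {p: rng.uniform(5, 50, n_units) for p in phonemes}

def record_trial(p):
    return true_patterns[p] + rng.normal(0, 8, n_units)

templates = {p: np.mean([record_trial(p) for _ in range(40)], axis=0)
             for p in phonemes}

def decode(rates):
    """Return the phoneme whose learned template is closest to the trial."""
    return min(phonemes, key=lambda p: np.linalg.norm(rates - templates[p]))

hits = sum(decode(record_trial(p)) == p for p in phonemes for _ in range(100))
print(f"accuracy on synthetic trials: {hits / 300:.2f}")
```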

It was hard to keep from crying. When I had a major sensory system drop out of my head in 2001, the technology to get it back was right at hand. I have been very, very lucky. Still, I sometimes find it difficult to hear what’s going on, which forces me to paddle about in my own head for a while. Seeing a human being cut off from everything was a very hard, very emotional experience. I wondered: if he went mad, how would anyone know?

I gave a second lecture the next day, this time using some new slides on optogenetics that Ed Boyden had sent me, and that went very well too. In the Q&A afterward someone asked me whether implants could be powered by harvesting energy from the body’s own heat and motion, and I said I didn’t know.

But, I said to him, I want to talk to you afterward – because I saw he was wearing a body-mounted computer, with a tiny monitor over one eye and a one-handed keyboard in his pocket. A “gargoyle,” to use Neal Stephenson’s term. He came up afterward and introduced himself: Thad Starner, director of the Contextual Computing Group at Georgia Tech. And he fingerspelled his name. I was soon to find out why.

Talking with Thad Starner over lunch. Thad was having me repeat back phone numbers while moving stuff around on the table, to show that doing complex motor activities didn’t detract from cognition. (I saw his point but wasn’t fully convinced.) You can also see two cochlear implants on the table. In fact, there are at least nine computers in this picture; can you find them all? (Photo by Steve Potter.)
[Photo: Mike Chorost and Thad Starner]

Thad had been using wearable computers since his undergrad days at MIT in 1993 because, as he put it, he realized he was spending a lot of money on his education and forgetting most of it. Not only that, much of the most interesting stuff happened out of class, where it wasn’t polite to divert one’s gaze and write on a notepad. What he wanted was to be connected to a computer all the time, so that he could take notes while maintaining eye contact and look up his earlier notes.

He let me look through his monitor, and what I saw was a white computer screen hovering in space. Since the other eye was unobstructed, the screen appeared somewhat translucent. I could see both it and most of my visual field at the same time.

So he could, during a conversation, do a Google search on you and take notes without breaking his gaze. The monitor was slightly below his line of sight, so you could see both his eyes; he tilted his head down just a bit and peered at you from above it, as if wearing bifocals. I found that talking with him was surprisingly natural, if one ignored the fact that he was a terrific clinical case of ADD.

I was fascinated, because Thad was obviously The Guy To Ask on what it might be like to live with a brain-computer interface in one’s head. I asked him about the lived experience of always being connected to a computer. Being able to take notes while staying immersed in the experience kept him focused, he said. And, he pointed out, it was fast and always available, unlike a Blackberry or iPhone, which takes 20 seconds to turn on, find the right application, and start typing. A notepad is faster, but it’s not searchable; you can’t have your notepads from 15 years ago to hand. He does. He said that it augmented his ability to think, to remember, to recall deep background during conversations.
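The core of that searchability is simple enough to sketch. Here is a toy version, mine and not Thad’s actual software (his is far more sophisticated): every note is timestamped and text-searchable, so a conversation from fifteen years ago is one query away.

```python
import datetime

# A minimal sketch (my illustration, not Thad's actual system) of what
# separates an always-on note-taker from a paper notepad: notes carry
# timestamps and can be searched by text.

notes = []  # each entry: (timestamp, text)

def take_note(text):
    notes.append((datetime.datetime.now(), text))

def search(query):
    """Return every note containing the query, oldest first."""
    return [(ts, txt) for ts, txt in notes if query.lower() in txt.lower()]

take_note("Met Michael Chorost; cochlear implants; multiplexing vs. multitasking")
take_note("Lab meeting: revise the ASL handshape recognition poster")

for ts, txt in search("chorost"):
    print(ts.isoformat(timespec="seconds"), "-", txt)
```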

But, I asked him, doesn’t trying to multitask degrade your ability to converse and think in rich, sustained ways? There’s a lot of research showing that while people believe they can multitask, in reality they’re not very good at it. Trying to talk on a cellphone while driving makes you worse at both. Trying to write while being constantly interrupted by email makes it hard to get “in the zone.” I told him about Maggie Jackson’s book Distracted: The Erosion of Attention and the Coming Dark Age and its fears that we’re raising a generation of addicts who can’t focus on anything.

He granted the concern, but said that there’s a difference between multitasking and multiplexing. In multiplexing, tasks reinforce each other. For him, taking one-handed notes and looking things up on Google during conversations adds depth to his life instead of taking it away; that’s multiplexing. He also said, as we were hectically walking up the hill to his lab, that he doesn’t do email while he’s talking to people; for him, that would be multitasking.

I asked him why he was interested in sign language. He did his master’s thesis on recognizing sign handshapes with computers, and his lab had four or five posters on ASL. Sign language – at least when initially learned – recruits the brain’s motor cortex area for much of the body, instead of just the lips and face. For that reason, it should be easier to “see” physically signed words than voiced ones in brain scanning. So if you taught people with ALS (amyotrophic lateral sclerosis) a number of important signs before they lost all motor control, their later efforts to make those signs might be much clearer and easier to read than imagined words. Fascinating stuff. ASL: ALS; a startling connection.

In talking with him, I quickly gave up trying to be as smart as he is. Our IQs might be similar, for all I know. But Thad has 15 years of notes at hand and Google constantly in his field of view. If he sees someone he met five years ago, he can remember the key points of the conversation. Everyone has an internal, neural representation of the richness of their lives. But Thad has an external representation of that as well, and one with a diachronic axis. He has access to time, memory, and information that most people can’t even imagine. He isn’t playing at augmentation. He really is augmented.

It comes at a cost, to be sure. He has to carry around 8 or 10 pounds of computer hardware all the time. And he has to be willing to be conspicuously and mysteriously different, a constant spectacle.

But he is exploring the intersection of minds and bodies, in a way that is as much art as engineering. He’s a performance artist of memory, and as any true artist does, he shows the rest of us new ways to think about being human.

Minds and bodies were a constant theme of the visit. Karl Deisseroth is changing minds by changing bodies; indeed, his research shows that the old Cartesian distinction between mind and body is growing ever more useless. Steve Potter, in giving neural circuits “bodies” so that they can grow and evolve, is trying to repair that old mistake. And I met the most disconnected man in the world, and the most connected man in the world.

It was an exciting and moving visit, from the nitty-gritty of channelrhodopsin-2 and halorhodopsin to the empyrean reaches of memory. In the evenings Steve and I had long, luxurious conversations about optogenetics and whether the Internet is a hive mind. Steve has a roomy, well-stocked mind, with a home to match, and the sweetest wife in the world. The visit was a sustained conversation about the relationship of technologies to bodies, and bodies to minds, and a fascinating experience all around.

Comments

  1. Arika Esalona says

    I’m not sure I should have read this all in one sitting. My head is spinning now! Fascinating stuff.

  2. “Forget about trying to create thinking machines by creating algorithms and rules. Instead, create networks of sufficient complexity and turn them loose in a world where they need to do things to get by, and intelligence will emerge.”

    The same day I read your post I came across:

    Planning Algorithms, by Steven M. LaValle, Cambridge University Press, 2006, 842 pages.
    To buy: Cambridge ($63), Amazon ($46.40). The whole thing is online at http://planning.cs.uiuc.edu/

    It is an amazingly detailed exploration of all that goes into the algorithms enabling a robot to deal with worldly interactions. Quite a contrast to the evolutionary network model.
    Regards.

