Hippo Campus, and retinal implants.

I just spent two packed days in L.A. seeing friends and giving a talk at the Biomimetic Microelectronics Systems conference. (“Biomimetic” is a synonym for “bionic”.) I sat between Mark Humayun, who is working on retinal implants, and Ted Berger, who is working on hippocampal implants. The word “hippocampus” makes me laugh, because I visualize Sandra Boynton-drawn hippos walking solemnly around a college campus carrying bookbags. The image isn’t so far off, because the hippocampus is the part of the brain that’s crucial to memory and learning. If the hippocampus is damaged, the result is amnesia.

Berger talked about his team’s effort to develop an implantable chip that would replace a damaged hippocampus. They’ve created mathematical models that, when given the input a mouse’s hippocampus gets from the rest of its brain, generate the same output. The hope is to someday do it for a human.

Berger showed us some of the math involved. It went way over my head, of course, but I was pretty sure of this: biological hippocampuses don’t do their work by solving systems of equations on tiny little blackboards. They work because of the intricately networked connections between billions of neurons. It looked to me like the math was duplicating the output by an entirely different mechanism.
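To make that concrete, here is a toy sketch of my own (nothing like Berger’s actual math, which I couldn’t reproduce anyway): a “mechanistic” system that generates output from a persistent internal state, and a fitted black-box filter that reproduces the same output with no such state inside it.

```python
import numpy as np

rng = np.random.default_rng(0)

# The "biological" system: a leaky integrator whose behavior comes from a
# persistent internal state, loosely standing in for real circuitry.
def leaky_integrator(inputs, leak=0.8):
    state, outputs = 0.0, []
    for x in inputs:
        state = leak * state + x   # the mechanism: state carried forward in time
        outputs.append(state)
    return np.array(outputs)

inputs = rng.random(500)
target = leaky_integrator(inputs)

# The "black-box" replacement: a linear filter over the last 30 inputs, fit
# by least squares. There is no state variable anywhere in it.
lags = 30
X = np.column_stack([np.roll(inputs, k) for k in range(lags)])[lags:]
y = target[lags:]
weights, *_ = np.linalg.lstsq(X, y, rcond=None)

print("max error:", np.max(np.abs(X @ weights - y)))  # tiny: same output, different mechanism
```

The fitted filter matches the integrator almost perfectly, yet it contains nothing you could point to and call the integrator’s “state.” That, as I understood it, is roughly where Berger’s models stand with respect to actual neurons.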

I raised my hand and asked, “If you create a mathematical model of a hippocampus’s output, does that mean you understand the hippocampus?”

Berger replied, not unreasonably, “It depends on what you mean by ‘understand.’” I could sense people nodding all over the room: we’d just leaped from biology and mathematics to epistemology. He said that if you tried to model the hippocampus by creating a silicon equivalent of its circuitry — all its billions of synapses — it’d take forever to run. Much more parsimonious and efficient to create a mathematical equivalent that creates the same output by different means.

But did that mean they understood the hippocampus? Berger seemed to think it meant they partly did, and that partly was good enough. The model’s visual representation of hippocampal activity, he pointed out, didn’t correspond to the actual patterns of neural activation in the physical body of the hippocampus. Those correspondences could be figured out, which would bring them to a deeper level of understanding, but what they had was good enough for now, from a functional standpoint.

Later on I asked another scientist, Jerry Loeb, “Who has damage just to the hippocampus and nothing else?” “Good question,” Jerry said. “It’s not clear whether there really is a population for a hippocampal implant.” But whether there is one or not, it seems to me that developing a hippocampal implant is a stepping stone to replacing other, and larger, parts of the brain.

At lunch I was steered to sit next to Terry Byland, one of the very few people who have a retinal implant. He started going blind in his late thirties and had lost his sight entirely by his mid-forties. The internal processing circuitry of his implant is a reprogrammed cochlear implant, which is less strange than it sounds; a cochlear implant is just a neurostimulator. You can replace its 1-by-16 linear array with a 4-by-4 square array and implant that in the eye, on the surface of the retina. It works — but it’s very primitive compared to what a cochlear implant does for a deaf person. The technology is where cochlear implants were in the 1970s.
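If it helps to picture the rearrangement, it amounts to re-indexing the same sixteen channels (my own hypothetical sketch; the real device’s channel mapping and signal path are surely more involved):

```python
import numpy as np

channels = np.arange(1, 17)     # the 1-by-16 linear array: one number per electrode
grid = channels.reshape(4, 4)   # the same sixteen channels, laid out as a 4-by-4 square
print(grid)                     # [[1 2 3 4], [5 6 7 8], [9 10 11 12], [13 14 15 16]]
```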

Naturally, Terry and I compared notes, a deaf wired guy talking to a blind wired guy. Terry wasn’t using his implant just then, and I asked him why. The room was too big and didn’t have enough contrast for the implant to give him useful information, he told me. I guessed — I didn’t ask him, but I’m pretty sure I’m right about this — that it also didn’t have enough resolution to tell him what was on the table in front of him. Sixteen electrodes can give you information about big simple objects not too far from you, and that’s about it. I know from videos I’ve seen that Terry sees vague shapes rendered in shades of gray.
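To get a feel for how little sixteen electrodes can carry, imagine collapsing a 64-by-64 grayscale scene down to a 4-by-4 grid by averaging blocks of pixels (my own simplification; the real processor does something more sophisticated):

```python
import numpy as np

def to_4x4(image):
    """Average a 64x64 grayscale image over a 4x4 grid: one value per electrode."""
    return image.reshape(4, 16, 4, 16).mean(axis=(1, 3))

edge = np.zeros((64, 64))
edge[:, 32:] = 1.0              # one big, high-contrast vertical edge
print(to_4x4(edge).round(2))    # the edge survives: two dark columns, two light ones

fork = np.zeros((64, 64))
fork[30:34, 30:34] = 1.0        # a small object, say a fork on the lunch table
print(to_4x4(fork).round(2))    # it all but vanishes into the block averages
```

A big edge survives the averaging; a fork-sized object melts into the background, which squares with what Terry told me.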

Terry went on to tell me that his field of vision is about nine inches wide, so he has to scan back and forth to gather information. I asked him whether he gets any depth perception, and he told me none, not at all.

What we most had in common was what the learning process was like. Like me, he’d had to map a strange and counterintuitive set of inputs onto what he knew about the world from touch and memory. He’d learned that a certain pattern of dark and light corresponded to the edge between his driveway and the sidewalk — and that pattern hadn’t looked like what a sighted person would call an edge. He’d had to put things together with practice and experience. I was nodding: yep, that’s what sensory neurostimulation’s like.

That was yesterday. I got back to SF last night, got five hours of sleep, and drove to the California Ear Institute to get my Harmony processor. Finally, two years after writing my Wired article on the subject, I’ve got the commercial version of the software. I was going to tell that story, but it’s late; I’ll tell it in my next entry. I need to get some sleep, or my own Hippo Campus is going to go on strike.
