Update: Week Three with Naída CI Q70 Processors

As I said in my First Look review, the Naída CI Q70 is a powerful but complex system with a long learning curve. Here is what I’ve learned in the past week. 

I’ve been told through Facebook that Advanced Bionics is working on a software patch that will reduce the annoying 6.6-second interval of silence when switching programs. Thanks, Advanced Bionics. I hope the delay is a lot less, like half a second.

I think I’ve finally figured out ZoomAhead. (Advanced Bionics calls it “UltraZoom.” That’s a confusing name, so for clarity I call it “ZoomAhead.”) ZoomAhead is designed to help the user in very noisy environments such as a crowded restaurant. The processor compares input from different microphones to focus on what’s directly ahead of the user. The key seems to be just how noisy the environment is. My wife and I had dinner in a pizza joint packed with customers, and ZoomAhead definitely made it easier for me to hear her than my “standard” program did. I had seen less benefit in a quieter restaurant, so my guess is that it works best in environments with very loud speech babble.
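For the technically curious: the “compare the mics, favor what’s ahead” idea is, in general terms, directional beamforming. Here is a minimal delay-and-sum sketch of the concept in Python. It is an illustration only, not Advanced Bionics’ actual UltraZoom algorithm; the sample delay and signal shapes are made-up assumptions.

```python
import numpy as np

def delay_and_sum(front_mic: np.ndarray, rear_mic: np.ndarray, delay: int) -> np.ndarray:
    """Align the rear-mic signal by `delay` samples, then average.
    Sound from straight ahead reaches the front mic first; after
    alignment it adds coherently, while diffuse noise partly averages out."""
    aligned = np.roll(rear_mic, -delay)
    return 0.5 * (front_mic + aligned)

rng = np.random.default_rng(0)
n, delay = 4000, 8                       # assumed front-to-rear travel time in samples
t = np.arange(n)
speech = np.sin(2 * np.pi * 0.01 * t)    # stand-in for a talker straight ahead
babble = rng.normal(size=n)              # stand-in for diffuse restaurant noise

# Front-arriving speech hits the rear mic `delay` samples late;
# the babble is uncorrelated between the two mics.
front_mic = speech + babble
rear_mic = np.roll(speech, delay) + rng.normal(size=n)

out = delay_and_sum(front_mic, rear_mic, delay)

def snr(sig: np.ndarray, ref: np.ndarray) -> float:
    """Signal-to-noise ratio in dB, treating `ref` as the clean target."""
    noise = sig - ref
    return 10 * np.log10(np.mean(ref**2) / np.mean(noise**2))

print(f"single-mic SNR: {snr(front_mic, speech):5.2f} dB")
print(f"beamformed SNR: {snr(out, speech):5.2f} dB")
```

Averaging two mics with uncorrelated noise buys roughly 3 dB here, which is consistent with the subjective experience that the feature helps most when the babble is loud.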

I’ve also figured out ZoomToSide (which Advanced Bionics calls “ZoomControl.”) Ever since I met my wife Tory (fifth anniversary of our first date was two days ago!) I’ve had to walk on her right side, since my left ear is my good ear. With ZoomToSide, I can now walk on her left side if I want. The mike on the right processor sends her voice over to my left processor, and I can understand her very well. That’s very nice. But I doubt I’m going to be using it much, except maybe on our rare long car trips. It takes me 15 or 20 seconds to set it up and then as long to take it down. That’s just too complicated for casual switching. As I said in my review, the processor really needs to become smart enough to do such things on its own, reliably. That’s a tough challenge. As any Apple user knows, Siri still screws things up a lot. But it’s got to be solvable within the decade.

I’ve tried using the ComPilot (what I call the “Connector”) for listening to the TV. I’m not using Phonak’s TV-Link accessory for this. I simply strung a patch cable from the audio output jack of the TV to the input jack of the Connector: twelve bucks or so at RadioShack, cheaper online. The basic idea is to get a male-to-male audio cable with 1/8-inch (3.5 mm) plugs, long enough to reach from the TV to your couch. The sound clarity is fantastic. Documentaries are extremely easy to understand without captions. My wife and I watched Spinal Tap and for some reason, the captions didn’t work. It’s an older DVD we rented at the store. So I tried watching without them. Those guys have English accents, you know. I was able to follow most of the dialogue with some work, which I could never, ever have done watching the TV unaided. I doubt I’ll ever quit using captions, simply because they make things so easy, but it’s nice to know I can now do without them if I have to.

I’ve been listening to music quite a lot. I’ve begun to go beyond my old favorites, which is a relief because I was worried that I would only enjoy music that I remembered hearing before with hearing aids. I’ve especially enjoyed the Beatles’ Revolution, with its intriguing sound effects.

I’ve also been listening to Eye of the Tiger, Space Oddity, Unknown Legend, a few pieces from the soundtrack of Myst, The Rip (Portishead), and Stairway to Heaven. These pieces are somewhat familiar to me from my hearing-aid years, but I haven’t listened to any of them since before 2001. My wife said, “Of course, you know Stairway to Heaven was the last song played at every junior-high school prom.” All right, maybe so, but after 13 years out of the game I’m entitled to like what I like. I’ve also been trying out classical pieces like Beethoven’s Fifth, but for now shorter pop music is best for what I’m up to.

Which is getting my right ear up to speed. My right ear is much worse and much less carefully programmed than my left ear. On its own, it hears music terribly. It sounds like a vaguely rhythmic rumble. But put together with my left ear, it seems to supply some of the bass effect that gives music depth and richness. It makes a very substantial difference.

Today I went back to the audiologist Julie Verhoff at the River School here in D.C. to work specifically on the right ear. (The school provides audiological services to the community via Chattering Children, its nonprofit arm.) Dr. Verhoff has been very thoughtful and thorough, and I’m learning a lot.

The right ear has two problems: poor frequency resolution, so that adjacent keys on a piano sound the same, and a mismatch with the left ear, so that the same note sounds different to each ear. We decided to work only on the first issue today. For years I’d been using only 11 of its 16 electrodes. The other five were turned off because they seemed to interfere with the rest. The 11 electrodes that remained on were the ones that ordinarily deliver the lower-frequency sounds, which partly explains why the ear sounded so rumbly to me. (Only partly; it’s more complicated than that, but I won’t go into the details.)
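A rough way to see why losing the top electrodes makes the ear sound rumbly: each electrode in a map is assigned a frequency band, roughly log-spaced across the speech range. The sketch below is illustrative only; the band edges are assumptions for the example, not Advanced Bionics’ actual filter table.

```python
import numpy as np

# Hypothetical 16-electrode map spanning an assumed 250 Hz - 8 kHz range.
N_ELECTRODES = 16
LOW_HZ, HIGH_HZ = 250.0, 8000.0

# Log-spaced band edges: 17 edges define 16 contiguous bands.
edges = np.geomspace(LOW_HZ, HIGH_HZ, N_ELECTRODES + 1)
bands = list(zip(edges[:-1], edges[1:]))   # one (low, high) band per electrode

# With only the 11 lowest-frequency electrodes enabled, the map tops out
# well below the full range -- one reason music could sound "rumbly."
top_with_11 = bands[10][1]
top_with_16 = bands[15][1]
print(f"highest frequency with 11 electrodes: {top_with_11:7.0f} Hz")
print(f"highest frequency with 16 electrodes: {top_with_16:7.0f} Hz")
```

Under these assumed numbers, an 11-electrode map loses everything above roughly 2.7 kHz, which is where much of the brightness in music lives. (As I said, the real story is more complicated, but the shape of the problem is the same.)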

I asked Dr. Verhoff to turn on all 16 electrodes. When I’m trying new maps my general strategy is to make a new map that’s very different, to give me a clear difference that I can assess. I might not be able to hear much difference between having 11 electrodes on and 12 electrodes. But 16 sounds very different indeed: crisper and more high-pitched. I also asked her to turn off ClearVoice in both ears. It’s designed to filter out what it thinks is noise, so it might also be filtering out some of what makes music sound good.

Here’s my new music program. Red is right ear, blue is left ear. It’s in program slot 2. I still have my original music program, the one Dr. Verhoff set up two weeks ago, in slot 3.

[Image: Music program 3-10-14]

So now I’m home listening to Boléro, which I use to assess new maps. Music sounds more like music in the right ear, because of the high-frequency sounds the newly turned-on electrodes are giving it. But when I put the two ears together, I’m not sure if it’s better. I hear a pronounced hum in the left ear, as if I were sitting next to an air conditioner. Turning off ClearVoice in that ear might not have been a good idea – although maybe I just need to habituate to the difference.

I’ve also been trying the old music program in the left ear and the new one in the right. The hum in the left ear is gone, and I’m hearing the highs in the right ear. In theory that seems like it ought to be the best combination, but I’m not sure if it is. It may be that the pitches I hear in both ears are so different that I’m having difficulty integrating them. Or it may be that the newly turned-on electrodes in the right ear are interfering with the rest, which was what I had concluded when mapping that ear years ago. Or I may simply have to adjust.

The only way to know will be to (a) do a lot of listening with programs in various combinations and sort out what’s going on in each ear individually and in both ears together, and (b) experiment with making more changes with Dr. Verhoff, particularly to try to match pitch percepts on both sides. It may also be a matter of (c) getting my brain to learn to hear and use the different pitches being presented to my right ear. I take notes as I listen, which is very useful later on.

It is a complicated undertaking, sort of like solving a system of equations whose variables change when the others do. Mapping two ears is more than twice as complicated as mapping one.

Finally, in my first-look review I mused about enabling the processors to be controlled directly by a smartphone via Bluetooth, without any intermediaries. That has just become possible in the hearing-aid world. The new hearing aid is called the LiNX, made by GN ReSound. According to CNN,

It’s been possible for people to operate their hearing aids via custom remote controls and even link them to smartphones, but that has required an intermediary piece of hardware, most often a small, clunky box worn around the neck. Now, using a combination of Bluetooth and a proprietary Apple protocol, the LiNX hearing aids can communicate directly with Apple mobile devices.

That “small, clunky box” is the ComPilot, the “Connector.” Its sound quality is fantastic, but it is clunky. Moving the Bluetooth capability directly into the Naída would be a tremendous advance. No more connectors and remotes to juggle; just the clean, powerful smartphone one always has to hand. Or later in 2014, as many people expect, an iWatch.

According to this press release, Advanced Bionics’ biggest competitor, Cochlear Corporation, has been working with GN ReSound to create a new processor:

“One of the benefits of 2.4 GHz technology is that it enables the end-user to receive streamed sound directly without wearing an intermediary device around the neck,” said Jennifer Groth, MA, GN ReSound. “We know that eliminating body worn streamers is a priority for hearing aid users. Coupled with superior sound quality, 2.4 GHz addresses core user needs.”

This is exactly what I was wishing the Naída could do.

I don’t know how feasible it will be with cochlear implant processors, which are much more complex and power-hungry than hearing aids. I’m no expert on the subject, but just thinking out loud, I can see there being two levels of difficulty. First, there’s using the smartphone to change processor programs and settings, which requires occasional exchanges of data. Second, there’s offloading complex processor tasks to the smartphone, which would require, I would guess, near-constant data exchange. I imagine that both would be difficult to do without quickly draining the small batteries that power cochlear implant processors.
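To make the two levels concrete, here is a back-of-envelope sketch of the battery question. Every number in it is an assumption I’ve made up for illustration (battery capacity, processor current, radio current, duty cycles), not a spec from Advanced Bionics or the Bluetooth standard; the point is only that an always-on radio link eats into runtime far more than occasional control messages do.

```python
# Hypothetical numbers for a cochlear-implant processor battery budget.
BATTERY_MAH = 200.0   # assumed battery capacity
BASE_MA = 8.0         # assumed current draw of the processor itself
RADIO_MA = 5.0        # assumed extra current while the radio is active

def battery_hours(radio_duty_cycle: float) -> float:
    """Runtime in hours given the fraction of time the radio is on."""
    avg_ma = BASE_MA + RADIO_MA * radio_duty_cycle
    return BATTERY_MAH / avg_ma

# Level 1: a handful of program/setting changes per day --
# the radio is on perhaps 0.1% of the time.
control_only = battery_hours(0.001)

# Level 2: offloading processing to the phone --
# the radio is effectively always on.
streaming = battery_hours(1.0)

print(f"control-only link: {control_only:.1f} h")
print(f"always-on link:    {streaming:.1f} h")
```

Under these made-up numbers, occasional control traffic barely dents the runtime, while a continuous link cuts it by more than a third. That gap is why the first level seems plausible soon and the second much harder.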

If Cochlear can do even the first, in a way that actually works in daily use, at a development cost it can afford and a price users can afford, it will have a serious advantage. If I were an exec at Advanced Bionics, I would be trying to produce a Bluetooth-native Naída Mark II as fast as possible.


  1. Tammy Petraitis says

    Good points!!!!

  2. Kathleen Colligan says

    Thank you for sharing this. I have had my Naida since activation Feb 25, and just getting used to new sounds has been a challenge. Yesterday I got the T-coil program, and that lag between programs is a big problem if you want to change programs while answering the phone. I asked my audi about music because I don’t hear the instruments. She said that most likely I would not hear it with a CI? I will address this again at my next mapping. I know I have ClearVoice on, and if I had to choose, I guess hearing speech is more important. But I miss my old music from the late ’60s and ’70s that I could recognize with my hearing aids. I had not heard they were working on improving it, so that is wonderful news. Yesterday I got the Neptune at my second mapping. A whole other big box of things to figure out, which will take time. I won’t wear it in the water until I am sure that I am using the correct parts.

  3. Mary Robertson says

    Hi, I’m a bit late posting a comment. I just came across your great review of the Naida. First implant 2003, started with the Auria, graduated to the Harmony in 2006 with the second implant, used Harmonies until May 2013, then traded them in for the Naida, first generation. PROS: Sound quality is great, music excellent. Interchangeable sides. (I don’t use the extra features like the ComPilot, Bluetooth, etc.) CONS: Initially I had lots of problems with cutting out, due to defective cords. That has been resolved. I continue to be bothered by the long time-out between programs, as you mentioned. But my biggest complaints are about wearability. While they reduced the processor size, they made the headpiece bulkier than the Harmony’s. And the connectors to the headpiece with that swivel feature make the headpiece and cord stick out, which I find cosmetically unsightly. The Harmony cord shaped itself to the curvature of the head, and the headpiece was flat. And lastly, the lightweight T-Mic provides no counterweight to the processor. The Harmony T-Mic actually molded to the ear. My right-side processor often falls off, and no, I don’t want an earmold. Seems I’m a lone ranger with these beefs; I’m surprised not to see others with the same issues. I’m grateful I can hear so well, but I miss my Harmonies for the reasons mentioned. Thank you!

  4. The problem with early reviews (and some here are somewhat later) is that you have a massively expensive implant and, judging from many earlier reviews of prior implants, the CI isn’t put through sufficient tests, so users will find the new problems, and there are many.

    Main problems: 1) Moisture on battery and processor. I severely limit the use of my processor because of the humidity here. Seems rather stupid, because the point was to process hearing, and I opt not to wear it out in order to extend its life. If I don’t, the device dies early. I’ve had just under ten processors and the warranty isn’t up yet. I don’t trust the components* and have no reason to believe any replacement will last more than a couple of years. *There continue to be design flaws that aren’t fixed, so I learned to jury-rig various components, and again, at an installed cost of close to $100k, this is ridiculous. Design flaws were pointed out and likely ended up in a black hole (no matter what you may be told).
    2) I personally would stay away from AB in the future – you may opt to go or remain with them. I explored replacing it a couple of years ago with Cochlear due to ongoing problems with AB. I may go that route again soon. 3) Something stops working? They’re fast to send a replacement part, but you never know what caused the problem with the component you sent in. They don’t tell you what the problem was. I’m speaking my mind, and there are other buried stories about problems enough people are having. Many stories are buried deep – bring them out to the forefront, so others know they’re not alone and maybe some action can be taken to stop this.
