The Uncanny Valley of Alien Technology; or, How to write a novel with really good, creepy alien technology.

The Strugatsky brothers’ novel Roadside Picnic has terrific examples of bizarre alien technology. There’s a mysterious Zone imposed on a quotidian railway yard. An old car tire’s shadow points in a different direction from all the rest. There are weird “empties” (which I figure are 4-D objects stuck in our 3-D space).

Alien tech like that gives me a chill down the spine, because I think I sort of understand it, while knowing that it conceals vast areas beyond my comprehension.

It’s a familiar sf trope that alien artifacts would be simply incomprehensible to us, even if we found uses for them. To our cats, a flash drive dangled from a lanyard is just a great cat toy. At the other extreme, the trope of the alien spaceship is entirely understandable: we don’t know how the tech works, but we know what it does.

But it’s more interesting to think of alien tech that is partly understandable. If we sent an iPhone to Thomas Jefferson, he’d be awed by the calculator, which he would understand just fine. But almost all the other apps need an Internet connection to work, and he’d have no idea what they do. The iPhone would be partly understandable, and that’s exactly what would make it so unnerving.

That’s why, in sf, I like alien tech that is neither totally mysterious nor mostly understandable. Alien tech like this falls into the uncanny valley of partial comprehensibility, and that’s what makes it really interesting.

Here’s an example. In Piers Anthony’s Macroscope, the humans receive an alien message. The thing about it is, it destroys the mind of anyone who is smart enough to understand it. People with IQs above 150 are fascinated by it and keep reading until they…fall over comatose. Some die. People with IQs below 150 can’t understand it, so they’re unharmed. That makes the message tantalizing but lethal. As one of the characters says to another, “We know the hard way: there are certain thoughts an intelligent mind must not think” (p. 49). But the alien senders are not simply malevolent; there turns out to be a reason for the signal, and it’s a good one. The way Anthony spins this out, gradually explaining the message and its uses, is fascinating.

The idea of a mind-destroying concept falls into the uncanny valley. It’s analogous to something we have, but it’s qualitatively, ungraspably better. It echoes Gödel’s incompleteness theorem, whose core is a sentence that, in effect, asserts its own unprovability. A statement that establishes its own unprovability is a fascinating, mind-bending thing. It upended mathematics when Gödel published it in 1931. I have never understood it myself in a whole and complete moment of insight, and there’s a reason for that: its self-reference feels fundamentally paradoxical. I can understand the pieces one at a time, but not the pieces put together. It gives me the feeling that if I ever did fully grasp all of it, my mind would be both much smarter, and broken. (Gödel himself suffered from severe paranoia toward the end of his life.) The point is, we already know of ideas that probably exceed the mental capacity of most human beings. Macroscope invites us to consider the possibility that even higher-octane ideas would break our minds.
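For the flavor of that self-reference, here is the standard informal shorthand for the sentence at the heart of the proof. This is a sketch I’m adding for readers who want the shape of the construction, not Gödel’s own notation:

    % For a suitable formal system F, the diagonal lemma yields a sentence G such that
    G \;\leftrightarrow\; \neg\,\mathrm{Prov}_F\!\left(\ulcorner G \urcorner\right)
    % i.e. G is provably equivalent, within F, to the claim "F cannot prove G."
    % If F is consistent, F cannot prove G -- and so G, read from outside the system, is true.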

Here’s another example: the alien Message in Stanislaw Lem’s His Master’s Voice. It contains instructions for making a substance. When scientists make it, it looks quasi-biological, like protoplasm, or meat. They find that the substance becomes even more lifelike when exposed to the neutrino beam that carried the Message: not any old neutrino beam, but specifically that one. Something about its distribution of neutrino energy is life-giving. (By the way, Lem accounts for the fact that neutrinos barely interact with matter.) This, the narrator explains, is rather like setting a symphony in metal type in such a way that, when a stylus is dragged across the type, the symphony actually gets played. The text of the Message doesn’t just give instructions for creating life; it actually creates life. It intertwines symbolism and action in a way that no human language can. It’s as if the distribution of phonemes in the sentence “I think you should be smarter” actually manipulated the listener’s neurons to make them smarter.

Could the uncanny valley give us a glimpse at what alien minds could actually be like? Maybe. This is where I bring in Shannon entropy.

Which I don’t understand in any deep mathematical sense. Here is the little I think I do know, learned mostly from Robert Sawyer’s discussion of it in WWW: Wake. Mathematics, specifically information theory, can be used to measure how much structure a message has, and how deep that structure runs.

One such measure is Shannon entropy, or, more precisely, its order: roughly, how far back into a message its predictive structure reaches. To use the simplest example, a sequence of coin flips is purely random and has no structure at all, which puts it at first-order entropy: knowing one coin flip gives you no information whatsoever about what the next flip will be. English, on the other hand, is much more predictable. If I say “What did you have for…” you will probably guess that the next word is “breakfast,” because given five words you can guess the sixth. Given a large number of English sentences, a computer can often predict as far as the eighth or ninth word out from probability alone. That puts English at roughly eighth- or ninth-order entropy.
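Here is a toy sketch of that idea in Python. It is not Doyle’s actual procedure, just my own illustration of the principle: estimate the conditional entropy of the next symbol given longer and longer contexts. For pure noise the number stays flat no matter how much context you supply; for structured text it keeps dropping, because longer contexts keep paying off. (Both sequences below are made up for the demonstration.)

    import math
    import random
    from collections import Counter

    def conditional_entropy(symbols, context_len):
        """Estimate H(next symbol | previous context_len symbols), in bits,
        from n-gram counts over the sequence."""
        ngrams = Counter()    # (context, next symbol) counts
        contexts = Counter()  # context counts
        for i in range(len(symbols) - context_len):
            ctx = tuple(symbols[i:i + context_len])
            ngrams[(ctx, symbols[i + context_len])] += 1
            contexts[ctx] += 1
        total = sum(ngrams.values())
        h = 0.0
        for (ctx, _), n in ngrams.items():
            p_joint = n / total          # P(context, next)
            p_cond = n / contexts[ctx]   # P(next | context)
            h -= p_joint * math.log2(p_cond)
        return h

    # Pure noise: knowing previous flips never helps, so the value stays near 1 bit.
    flips = [random.choice("HT") for _ in range(100_000)]

    # A (very) toy "language": longer contexts keep making the next letter more predictable.
    text = list("what did you have for breakfast this morning ") * 2000

    for k in range(1, 6):
        print(k, round(conditional_entropy(flips, k), 3),
                 round(conditional_entropy(text, k), 3))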

What about animals? Dolphin utterances can be predicted about three or four units out, putting dolphin language at roughly third- or fourth-order entropy. That would suggest dolphin communication is far from mere noise, but is not as complex as English.

Could an alien message have an entropy order higher than 8 or 9, and if so, what would it be like? Laurance Doyle, a scientist at the SETI Institute who applies information theory to animal communication and SETI, offers this example of an English sentence that is grammatically correct but more complicated than any human mind can manage:

“By this time tomorrow he will have had to have been to be going to be finished.”

As you can see, there are just too many tenses and nested clauses to keep track of. Even more importantly, this sentence describes a social situation more complex than anything human life circa 2012 presents. For example, it might describe the predicament of a time traveler facing a deadline.

A corpus of sentences like these would have an entropy order higher than 9. How much higher, I don’t know, and Doyle doesn’t venture a guess. But messages like the ones in Macroscope and His Master’s Voice have a kind of deeply efficient recursiveness, and I’m guessing it would push their entropy order much higher than 9. In other words, maybe this is what a civilization can do once its minds can manage language above ninth-order entropy.

To understand a sentence like Doyle’s, you have to have a large working memory. You have to be able to hold a lot of pieces in mind at once. There is evidence that working memory and intelligence are closely connected, though the exact relationship is unclear (Colom et al., 2008). Maybe one of the reasons I can’t fully grasp Gödel’s theorem is that my working memory is not large enough. Which may be just another way of saying I’m not smart enough.

Neither Piers Anthony nor Stanislaw Lem mentions Shannon entropy in his novel. But if they had, their characters could have measured the entropy of their respective Messages. As far as I understand it, that’s a straightforward statistical calculation, given enough of the signal to count. You don’t have to understand a message to calculate its Shannon entropy.
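To make that concrete, here is a minimal sketch, my own illustration rather than anything from the novels: the plain, first-order version of the calculation needs nothing but the raw symbols of the message, counted and plugged into Shannon’s formula. Estimating the deeper, higher-order structure is the same kind of counting, just over longer contexts and far more data.

    import math
    from collections import Counter

    def shannon_entropy(message):
        """Per-symbol Shannon entropy of a byte string, in bits.
        Count how often each byte value occurs, then apply H = -sum(p * log2(p)).
        Nothing here depends on knowing what the message means."""
        counts = Counter(message)
        total = len(message)
        return -sum((n / total) * math.log2(n / total) for n in counts.values())

    # Works on any signal you can store as bytes, comprehensible or not.
    print(shannon_entropy(b"what did you have for breakfast"))   # English text
    print(shannon_entropy(bytes(range(256)) * 100))              # uniform bytes: the 8-bit maximum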

If we ever get a Message and its entropy order is way higher than 9, maybe we shouldn’t be surprised if it says, and does, things we can’t understand. I can sort of grasp this idea, while at the same time recognizing that it’s far beyond our capabilities. And that gives me chills down my spine.

Making up partly understandable alien tech is very hard to do. Writers are as bound by the mental infrastructure of our civilization as anyone else. When a writer pulls off the partly understandable, it’s a work of genius.

References:

Colom et al. (2008). “Working memory and intelligence are highly related constructs, but why?” Intelligence 36: 584-606.

Doyle, Laurance, McCowan, Brenda, Johnston, Simon, and Hanser, Sean (2011). “Information theory, animal communication, and the search for extraterrestrial intelligence.” Acta Astronautica 68, p. 416.

Sawyer, Robert (2009). WWW: Wake. Ace Books, p. 238.
