
Why the immune system is so damn complicated

Trying to understand the immune system can seem like a never-ending task. There are dozens of varieties and subvarieties of immune system cells, with new subvarieties being discovered every so often. For sending messages between those cells, there are dozens (or is it hundreds?) of signaling molecules (“cytokines”, among others). A signaling molecule that turns up one part of the immune system may turn down another, as in (but almost certainly not limited to) the “Th1” versus “Th2” concept, itself not a very precise notion. There are also homeostatic loops in which the body reacts to its own reactions, damping an immune response when it has gone on too long and threatens to do more damage than it is worth.

Such subtleties have not propagated to popular culture, where various substances are described as “boosting the immune system”, with no qualification as to which part of the system is being boosted or how long the boost might last. But they are well known to specialists, who will themselves be the first to admit that even they don’t fully understand the system and that it needs more study. The immune system is often blamed for disease, and while it is not surprising that so complicated a system might malfunction, this is generally a diagnosis of exclusion, the sort of thing Darwin had a rule against believing: a disease is labeled “autoimmune” because nobody has found a causative microbe goading the immune system on, not because anyone has proven that no such microbe exists. Indeed, though doctors commonly profess certainty that autoimmunity is the root cause of such diseases, the scientific literature carries a constant trickle of attempts to blame them on one microbe or another. The one thing that is completely clear about such diseases is that whatever immune system activity is going on isn’t curing the patient, and is causing him or her distress. The complexity of the system makes other conclusions uncertain.

So where does all this Rube Goldberg action come from? It’s tempting to blame evolution and the accumulation of cruft in the genome, but evolution can be quite good at simplifying when simplicity is actually optimal. We have only one backbone in our body, not five sort-of-parallel ones all trying to combine to support us. So there must be something optimal about complexity here, and on reflection it’s obvious what: if we could understand the immune system easily, so could microbes, and they could subvert it easily. Indeed, it seems like whenever I read about the workings of any well-studied human pathogen, those workings include at least one way of eluding, deceiving, or sabotaging the immune system, and often two or three. Germs don’t seem to qualify as human pathogens, in the eyes of doctors, unless they have such a way; otherwise they are just part of the “harmless” background of microbes which the immune system usually deals with so efficiently that we don’t even know they are trying to eat us (though they can still be harmful in high doses). Yet even when a germ has three different ways of eluding the immune system, that doesn’t make it 100% deadly; most of the time the immune system can still eventually get it under control, using a fourth (and maybe a fifth and a sixth) mechanism in its arsenal.

This situation differs greatly from the situation with computers, where the simplest mechanisms to counter computer viruses and worms are commonly the best. With computers, you can make, in circuitry or with the aid of circuitry, a separate protected area which can’t be sabotaged. In wetware everything is swimming in the same soup: both microbes and immune system can do anything to each other that biochemistry allows, which is quite a lot. Any signaling molecule used by the immune system can be detected by microbes, allowing them to know what the system is doing, or can be synthesized by them, causing the system to do the wrong thing.

With computers, countering malware is mostly a question of how paranoid you are in letting information into the protected area. In practice the standard is often pretty permissive, but that is a matter of convenience — of programmers cutting corners to ship products fast, and of eliminating barriers that would inconvenience users. But then when customers suffer from security holes, programmers change course and get more serious about security. For those trying to make the best of this unpleasant tradeoff, simplicity is a good guiding light: when things are simple to program it lessens the temptation to cut corners; and where barriers must be inserted that inconvenience users, simplicity makes it possible to explain why those barriers are there.

People try to do some of the things in computer security that the immune system does, but it doesn’t work well. Antivirus products are the prime example of this. Like the immune system, they try to recognize hostile intruders yet without any really definitive way of doing so. The result is that they spend so much effort searching that they often noticeably slow down machines, and that they sometimes interfere with legitimate activities — sometimes openly and obnoxiously objecting, and sometimes insidiously sabotaging. And like the immune system, they are themselves subject to subversion: a virus can alter the antivirus program.

In computing, this qualifies as a big mess, which many people choose to avoid entirely. In wetware, this sort of thing is the best we’ve got. As big creatures, we can afford a big mess of complexity; microbes don’t have the genome size to understand our immune system — or, to speak more precisely, to react as if they understood it. They can adopt the occasional dodge, but a full understanding, such as would be needed to take thorough control of the system and use it for their own purposes, is beyond them. For microbes to evolve to expand their genomes and get more complicated would go against their basic life strategy of being fast breeders who are small and simple. Also, it wouldn’t just be a matter of learning one host species’s immune system, but rather that of all their hosts. Most microbes can live off any one of a number of host species, which is a great advantage to them, since when they leave one animal the next potential victim that they encounter is likely to be of a different species. And, although immune systems of different species are similar, they are not identical, so learning how to deal with a variety of animals’ immune systems is harder than learning how to deal with just one.

Or, to view things another way, microbes don’t need to get more complicated: they’re already doing so well in the struggle for existence that doing a bit better wouldn’t provide them with much evolutionary advantage.

As I hope has been apparent, when writing of microbes “understanding” the immune system, I’m not referring to an intellectual understanding but an operational understanding. An intellectual understanding is something that is possessed by a programmer who writes code to model a system; an operational understanding is something that is possessed by the code itself. Not that a microbe would do this digitally, of course; any model they might have of the immune system would be analog in nature, somewhat like the old analog computers. But in those, to have a working model of a system with N variables, you needed a computer with at least N amplifiers. To model the immune system in this fashion would mean making some sort of biochemical analog which had as many different working parts as the immune system does. By boosting the number of working parts, we put this task out of the reach of microbes — at the cost of making the system annoyingly complicated to its human students.

So that’s how they really do Tempest

One of the recent Snowden revelations was a catalog of spying items that the NSA’s “Tailored Access Operations” unit had for breaking into bad guys’ computers. Most of the items weren’t particularly surprising. We already know that since they can’t break cryptography, they try to break into endpoints, where the plaintext lives — and even if we hadn’t known that from recent revelations, it makes complete sense for them to operate that way. What was surprising was the Tempest stuff.

To explain a bit, Tempest is the code word for spying on people’s computers via unintentional electronic emanations. A computer monitor, for instance, is driven by a high-frequency signal which more or less broadcasts whatever is being shown on the monitor. If thoroughly shielded it wouldn’t be broadcasting, but it never is thoroughly shielded, except in special Tempest-rated equipment such as is sold to the various agencies of the federal government that worry about such things. And the broadcast is repeated 60 times a second (at least that’s the usual refresh rate these days), a piece of redundancy which makes the signal easier to retrieve. Old-fashioned CRTs amplified this signal to high voltage to drive an electron gun, but as Markus Kuhn has found, even modern flat-panel displays can produce decipherable emanations.

But how well Tempest worked in practice was never quite clear to me. Okay, various demos have shown it to work in some cases. But whether those cases are typical has been unclear; monitors no doubt vary in how well their shielding is designed and built. And even if you can do a good job picking up the signal from one monitor, in practice there’ll probably be tens or hundreds of monitors within range; what can you do with the resulting mess of signals stomping all over each other? So it was no surprise reading, a while ago, in the book Security Engineering, by Kuhn’s Ph.D. advisor Ross Anderson, that

“Despite the hype with which the Tempest industry maintained itself during the Cold War, there is a growing scepticism about whether any actual Tempest attacks had ever been mounted by foreign agents in the USA.”


“Having been driven around an English town looking for Tempest signals, I can testify that doing such attacks is much harder in practice than it might seem in theory…”

What was a surprise was looking at the recently-leaked NSA catalog and seeing an entry for a “radar”. Radar? What is this, for tracking airplanes? “Primary uses include VAGRANT and DROPMIRE collection”. Googling those, they turn out to be Tempest stuff, the former targeting computer screens and the latter printers.

So that’s how the pros do it: not just by passively listening for emanations, but by making emanations. This unit, the “CTX4000”, broadcasts at a frequency adjustable from 1 to 2 gigahertz, and listens for return signals with a bandwidth of up to 45 megahertz. (As the catalog states, this unit is obsolete and, in 2008, was already scheduled for replacement; modern flat-panel displays are driven by signals of higher bandwidth than that.) Power levels are “up to 2W using the internal amplifier; external amplifiers make it possible to go up to 1kW”. The carrier wave is broadcast continuously.

But this calls for another bit of explanation, as to why this would work. Well, to start with the simple part, the use of a “radar” makes it possible to pick out the device you want to spy on: point the antennas at it, and not at all the other devices within range. Antennas at these sorts of frequencies can be quite directional without being too large. The more complicated part, at least to the uninitiated, is the modulation: why would you get back a signal of interest modulated on to the carrier wave?

Well, you might not. If all the materials involved are “linear”, you won’t; if frequencies A and B are present in a linear device, each might be attenuated or amplified, and/or phase shifted, but no new frequencies will be generated. Linear devices include wires, resistors, capacitors, and inductors — at least the ideal versions of all those. (Real versions are of course subtly nonlinear, but probably not usefully enough so for the present purpose.) But silicon devices (transistors, diodes, and such) are all nonlinear — though for small signals, they can be more-or-less linear; thus the utility of high “radar” power, to force them into their nonlinear regimes. Going through a nonlinear device, signals “mix”; in radio technology, the ideal “mixer” is a multiplier, but in practice one usually uses some cruder mixer which does something very far from an exact multiplication. When you pass frequencies A and B through an ideal two-input mixer, you get out the frequencies A+B and A-B. That’s for an exact multiplication of A by B; if the mixer is cruder, you also get frequencies such as A, B, 2A, 2B, A+2B, A+3B, 2A-2B, and so forth.
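That sum-and-difference behavior of an ideal mixer can be checked numerically. Here’s a small sketch of my own (the frequencies are arbitrary scaled-down stand-ins, not the GHz-range figures from the catalog): multiply two sinusoids and confirm, via an FFT, that the product contains only the difference and sum frequencies, as the identity sin(a)·sin(b) = [cos(a−b) − cos(a+b)]/2 predicts.

```python
import numpy as np

fs = 1000                 # sample rate, Hz (scaled-down stand-ins for RF signals)
t = np.arange(fs) / fs    # one second of samples
f_a, f_b = 40, 170        # the two input frequencies, Hz (illustrative values)

# An ideal mixer is a multiplier, so the product of two pure tones
# should contain exactly two frequencies: f_b - f_a and f_b + f_a.
mixed = np.sin(2 * np.pi * f_a * t) * np.sin(2 * np.pi * f_b * t)

spectrum = np.abs(np.fft.rfft(mixed))
peaks = np.flatnonzero(spectrum > spectrum.max() / 2)  # bin index equals Hz here
print(peaks.tolist())     # -> [130, 210], i.e. f_b - f_a and f_b + f_a
```

A cruder, non-multiplicative mixer would add harmonic products (2A, A+2B, and so on) on top of these two clean peaks.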

In the case of a spy beam, the nonlinear “mixer” might be the transmit or receive transistor at one end of the wire connecting the computer to the video monitor. Frequency A might be somewhere in the signal driving the screen (which perhaps spans the frequency range of zero to 30 MHz), and frequency B the spy beam (perhaps 1.5 GHz), picked up by that same wire acting as an antenna. Then the mixer generates a modulated version of the spy beam (1.5 GHz +/- 30 MHz), which will then get re-radiated and picked up by the spy’s antenna. As for the unwanted mix frequencies, many of them are outside the frequency ranges which are being received (e.g. 2A+2B, which is about 3 GHz). As for the rest, one can try to filter them out somehow, or one can just hope that they generate a low enough level of noise that the resulting signal is still decipherable. This being spy work, one doesn’t need a perfect image of the screen being spied on or of the page printed by the printer being spied on. It’s enough if the text is readable; it doesn’t have to look pretty.
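The in-band versus out-of-band bookkeeping above can be checked with a quick enumeration. The 30 MHz, 1.5 GHz, and 45 MHz figures come from the text and the catalog; modeling the crude mixer’s output as n·B + m·A for small integers is my own simplification.

```python
A = 30e6                       # top of the video signal's range, Hz
B = 1.5e9                      # spy-beam carrier, Hz
lo, hi = B - 45e6, B + 45e6    # receiver window (catalog: 45 MHz bandwidth)

products = []
for n in (0, 1, 2):              # harmonics of the carrier B
    for m in (-2, -1, 0, 1, 2):  # multiples of the video frequency A
        if n == 0 and m <= 0:
            continue             # skip DC and negative frequencies
        f = n * B + m * A
        products.append((n, m, f))
        tag = "in band" if lo <= f <= hi else "rejected"
        print(f"{n}B{m:+d}A = {f / 1e9:5.2f} GHz  ({tag})")

in_band = [(n, m) for n, m, f in products if lo <= f <= hi]
# Only the carrier and B +/- A survive; A itself, its harmonics, and the
# ~3 GHz products around 2B all fall outside the receiver window.
```

So most of the unwanted products get filtered for free by the receiver’s limited bandwidth, and only products very close to the carrier need any further handling.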

If this isn’t good enough, you might have to sneak into the building and implant something. The device codenamed RAGEMASTER, perhaps, at a unit cost of $30. They recommend putting it onto the red video line; “it was found that, empirically, this provides the best video return and cleanest readout of the monitor contents”. In the photo, it seems to be a tiny little device that won’t even put a bulge in the cable where implanted: just some well-chosen nonlinearity, probably in silicon. Presumably in practice you slit the cable insulation to insert it, then somehow seal the slit closed.

Or you might also sneak in if you want other services, such as a microphone in the room. The catalog has microphones which insert into cabling and are readable via “radar”. It also has devices which can be implanted on low-frequency channels such as keyboards, to make reading those via “radar” feasible.

In any case, this system is quite easily detected by the intended victim, since he is being continuously illuminated by a microwave signal at rather substantial power. In the Cold War, the US embassy in Moscow frequently complained to the Russians about being irradiated with microwave beams. The NSA probably isn’t so gauche as to use power levels that actually harm the victim personally (the one-kilowatt option would be for use at a great distance, not for frying people at close range), but even the lower sorts of power levels furnish more than enough power to use standard direction finding techniques on, so as to track the spy beam back to its source. But using a sledgehammer on the source seems ill-advised, it (in its latest version, “PHOTOANGLO”) being government property with a price tag of “$40k (planned)” and likely twice that after the usual cost overruns.

The only caveat about the easy detectability would be if they were using spread spectrum techniques; spread spectrum stuff can be hard to detect. A cryptographically-spread signal can be below the noise floor and undetectable to people who don’t know the cryptographic key, and yet still can convey useful information to someone with the cryptographic key. But while that’s enough to make communications invisible, it can’t necessarily make radar invisible. With radar, the power level at the target has to be high enough that even faint echoes of it are detectable back at the radar unit. Also, high-frequency spread-spectrum stuff is hard to design and build, and my guess is that if they were using such techniques they’d be boasting about it in the catalog. So these NSA “radars” are probably easily detectable: just wave around a frequency counter, and it’ll tell you what frequency you are being illuminated at.

At any rate, this NSA Tempest stuff is too interesting for it to have really been a good idea to leak it. It doesn’t relate to dragnet surveillance of the whole population: the “radar” has to be pointed at one particular target, and someone has to get close to the target to emplace the “radar” and operate it. It’s an expensive unit, and the salaries of the people manning it are even more expensive. It’s for when they want to pay very close attention to a very special person, not for serving as everyone’s nanny.

Evil Almond Butter

8 parts almond butter
1 part soy sauce
1 part fresh ginger, destroyed

Mix together. Use as you would almond butter (for sandwiches or whatever), but with the added taste of Evil™.

The recipe works with peanut butter, too.

Just chopping the ginger into small pieces isn’t enough for this recipe; it should be destroyed. I use a grater, but “grated” isn’t the right word for the result, which is a mush plus fibers. Being fibrous, ginger isn’t easy to grate with an ordinary hand grater, but the Microplane variety, originally made to cut wood, does fine, as does a grater blade on a food processor.
