Thursday 12 January 2017

NONSENSE AND NONSENSIBILITY

Dick Pountain/Idealog 264/06 July 2016 10:38

When Joshua Brown's Tesla Model S smashed right through a trailer-truck at full speed, back in May in Florida, he paid a terrible price for a delusion that, while it may have arisen first in Silicon Valley, is now rapidly spreading among the world's more credulous techno-optimists. The delusion is that AI is now sufficiently advanced for it to be permitted to drive motor vehicles on public roads without human intervention. It's a delusion I don't suffer from, not because I'm smarter than everyone else, but simply because I live in Central London and ride a Vespa. Here, the thought that any combination of algorithms and sensors short of a full-scale human brain could possibly cope with the torrent of dangerous contingencies one faces is too ludicrous to entertain for even a second - but on long, straight American freeways it could be entertained for a while, to Joshua's awful misfortune.

The theory behind driverless vehicles is superficially plausible, and fits well with current economic orthodoxies: human beings are fallible, distractible creatures whereas computers are Spock-like, unflappable and wholly rational entities that will drive us more safely and hence save a lot of the colossal sums that road accidents cost each year. And perhaps more significantly still, they offer ordinary mortals one great privilege of the super-rich, namely to be driven about effortlessly by a chauffeur.

The theory is however deeply flawed because it inherits the principal delusion of almost all current AI research, namely that human intelligence is based mostly on reason, and that emotion is an error condition to be eradicated as far as possible. This kind of rationalism arises quite naturally in the computer business, because it tends to select mathematically-oriented, nerdish character types (like me) and because computers are so spectacularly good, and billions of times faster than us, at logical operations. It is however totally refuted by recent findings in both cognitive science and neuroscience. From the former, best expressed by Nobel laureate Daniel Kahneman, we learn that the human mind mostly operates via quick, lazy, often systematically-wrong assumptions, and it has to be forced, kicking and screaming, to apply reason to any problem. Despite this we cope fairly well and the world goes on. When we do apply reason, it as often as not achieves the opposite of our intention, because of the sheer complexity of the environment and our lack of knowledge of all its boundary conditions.

Does that make me a crazy irrationalist who believes we're powerless to predict anything and despises scientific truth? On the contrary. Neuroscience offers explanations for Kahneman's findings (which were themselves the result of decades of rigorous experiment). Our mental processes are indeed split, not between logic and emotion as hippy gurus once had it, but between novelty and habit. Serious new problems can indeed invoke reason, perhaps even with recourse to written science, but when a problem recurs often enough we eventually store a heuristic approximation of its solution as "habit", which doesn't require fresh thought every time. It's like a stored database procedure, a powerful kind of time-saving compression without which civilisation could never have arisen. Throwing a javelin, riding a horse, driving a car, greeting a colleague: all habits, all fit for purpose most of the time.
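
The "stored database procedure" comparison can be made concrete with a minimal sketch, entirely my own illustration rather than anything from cognitive science: the first encounter with a problem runs the slow, effortful "reasoning" path, while repeat encounters reuse a cached answer, which is roughly what a habit buys us.

```python
# Toy sketch of habit as a cached ("memoised") solution. The problem strings
# and the 0.5 s "reasoning" delay are invented purely for illustration.
from functools import lru_cache
import time

@lru_cache(maxsize=None)
def solve(problem: str) -> str:
    time.sleep(0.5)                      # stand-in for slow, effortful reasoning
    return f"solution to {problem!r}"

start = time.perf_counter()
solve("merge into heavy traffic")        # novel problem: slow, deliberate
first = time.perf_counter() - start

start = time.perf_counter()
solve("merge into heavy traffic")        # recurring problem: near-instant cache hit
second = time.perf_counter() - start

print(f"first call {first:.3f}s, second call {second:.6f}s")
```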

Affective neuroscience, by studying the limbic system, seat of the emotions, throws more light still. Properly understood, emotions are automatic brain subsystems which evolved to deal rapidly with external threats and opportunities by modifying our nervous system and body chemistry (think fight-or-flight, mating, bonding). What we call emotions are better called feelings, our perceptions of these bodily changes rather than the chemical processes that caused them. Evidence is emerging, from the work of Antonio Damasio and others, that our brains tag each memory they deposit with the emotional state prevailing at the time. Memories aren't neutral but have "good" or "bad" labels, which get weighed in the frontal cortex whenever memories are recalled to assist in solving a new problem. In other words, reason and emotion are completely, inextricably entangled at a physiological level. This mechanism is deeply involved in learning (reward and punishment, dopamine and adrenalin), and even perception itself. We don't see the plain, unvarnished world but rather a continually-updated model in our brain that attaches values to every object and area we encounter.
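
Purely for illustration, here is a toy sketch of that valence-tagging idea; the memories, weights and matching rule below are invented, not a model of Damasio's work, but they show how "good" or "bad" labels on stored experience could bias a later decision.

```python
# Hypothetical sketch: memories carry an emotional valence tag which is
# weighed whenever a loosely similar situation comes up again.
from dataclasses import dataclass

@dataclass
class Memory:
    situation: str
    valence: float   # -1.0 (bad) .. +1.0 (good): the emotional tag

memories = [
    Memory("squeezed the scooter between a truck and a bus", -0.9),
    Memory("overtook slow traffic on a clear road", +0.4),
]

def evaluate(option: str) -> float:
    """Score an option by the average valence of loosely similar memories."""
    related = [m for m in memories
               if any(word in m.situation for word in option.split())]
    return sum(m.valence for m in related) / len(related) if related else 0.0

print(evaluate("squeeze between truck and bus"))   # strongly negative: balk
```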

This is what makes me balk before squeezing my Vespa between that particular dump-truck and that particular double-decker bus, and what would normally tell you not to watch Harry Potter while travelling at full speed on the freeway. But it's something no current AI system can duplicate and perhaps never will: that would involve driverless vehicles being trained for economically-unviable periods using value-aware memory systems that don't yet exist.

ASSAULT AND BATTERY

Dick Pountain/Idealog 263/07 June 2016 11:26

Batteries, doncha just hate them? For the ten thousandth time I forgot to plug in my phone last night so that when I grabbed it to go out it was dead as the proverbial and I had to leave it behind on charge. My HTC phone's battery will actually last over two days if I turn off various transceivers but life is too short to remember which ones. And phones are only the worst example, to the extent that I now find myself consciously trying to avoid buying any gadget that requires batteries. I do have self-winding wristwatches, but as a non-jogger I'm too sedentary to keep them wound and they sometimes stop at midnight on Sunday (tried to train myself to swing my arms more when out walking, to no effect). I don't care for smartwatches but I did recently go back to quartz with a Bauhaus-stylish Braun BN0024 (design by Dietrich Lubs) along with a whole card-full of those irritating button batteries bought off Amazon that may last out my remaining years.

It's not just personal gadgets that suffer from the inadequacy of present batteries: witness the nightmarish problems that airliner manufacturers have had in recent years with in-flight fires caused by the use of lithium-ion cells. It's all about energy density, as I wrote in another recent column (issue 260). We demand ever more power while away from home, and that means deploying batteries that rely on ever more energetic chemistries, which begin to approach the status of explosives. I'm sure it's not just me who feels a frisson of anxiety on noticing how hot my almost-discharged tablet sometimes becomes.

Wholly new battery technologies look likely in future, perhaps non-chemical ones that merely store power drawn from the mains into hyper-capacitors fabricated using graphenes. Energy is still energy, but such ideas raise the possibility of lowering energy *density* by spreading charge over larger volumes - for example by building the storage medium into the actual casing of a gadget using graphene/plastic composites. Or perhaps hyper-capacitors might constantly trickle-charge themselves on the move by combining kinetic, solar and induction sources.
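
The arithmetic behind that speculation is just the standard capacitor energy formula E = ½CV². The figures below are hypothetical, roughly those of today's large supercapacitors, and are only there to show the order of magnitude involved and why spreading the store through a larger volume lowers the density without changing the total energy.

```python
# Back-of-envelope sketch with assumed figures: energy stored in an ideal
# capacitor is E = 0.5 * C * V^2.
def capacitor_energy_joules(capacitance_farads: float, voltage_volts: float) -> float:
    return 0.5 * capacitance_farads * voltage_volts ** 2

# Hypothetical casing-sized supercapacitor: 3000 F at 2.7 V.
energy_j = capacitor_energy_joules(3000, 2.7)
print(f"{energy_j:.0f} J, about {energy_j / 3600:.2f} Wh")
# Spreading the same capacitance through a larger casing stores the same total
# energy at a lower energy density (joules per cubic centimetre).
```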

As always, Nature found its own solution to this problem, from which we may be able to learn something, and it turns out that distributing the load is indeed the answer. Nature had an unfair advantage in that its design and development department has employed every living creature that's ever existed, working on the task for around 4 billion years, but intriguingly that colossal effort came up with a single solution very early on that is still repeated almost everywhere: the mitochondrion.

Virtually all the cells of living things above the level of bacteria contain both a nucleus (the cell's database of DNA blueprints from which it reproduces and maintains itself) and a number of mitochondria, the cell's battery chargers which power all its processes by burning glucose to create adenosine triphosphate (ATP), the cellular energy fuel. Mitochondria contain their own DNA, separate from that in the nucleus, leading evolutionary biologists to postulate that billions of years ago they were independent single-celled creatures who "came in from the cold" and became symbiotic components of all other cells. Some cells, like red blood cells (simple containers for haemoglobin), contain no mitochondria, while others, like liver cells, which are chemical factories, contain thousands. Every cell is in effect its own battery, constantly recharged by consuming oxygen from the air you breathe and glucose from the food you eat to drive these self-replicating chargers, the mitochondria.

So has Nature also solved the problems of limited battery lifespan and loss of efficiency (the "memory effect")? No it hasn't, which is why we all eventually die. However longevity research is quite as popular among the Silicon Valley billionaire digerati as are driverless cars and Mars colonies, and recent years have seen significant advances in our understanding of mitochondrial aging. Enzymes called sirtuins stimulate production of new mitochondria and maintain existing ones, while each cell's nucleus continually sends "watchdog" signals to its mitochondria to keep them switched on. The sirtuin SIRT1 is crucial to this signalling, and in turn requires NAD (nicotinamide adenine dinucleotide) for its effect, but NAD levels tend to fall with age. Many of the tricks shown to slow aging in lab animals - calorie-restricted diets, dietary components like resveratrol (red wine) and pterostilbene (blueberries) - may work by encouraging the production of more NAD.

Now imagine synthetic mitochondria, fabricated from silicon and graphene by nano-engineering, millions of them charging a hyper-capacitor shell by burning a hydrocarbon fuel with atmospheric oxygen. Yes, you'll simply use your phone to stir your tea, with at least one sugar. I await thanks from the sugar industry for this solution to its current travails...

ALGORITHMOPHOBIA

Dick Pountain/Idealog 262/05 May 2016 11:48

The ability to explain algorithms has always been a badge of nerdhood, the sort of trick people would occasionally ask you to perform when conversation flagged at a party. Nowadays however everyone thinks they know what an algorithm is, and many people don't like them much. Algorithms seem to have achieved this new familiarity/notoriety because of their use by the big online services, especially Google, Facebook and Instagram. To many people an algorithm implies the computer getting a bit too smart, knowing who you are and hence treating you differently from everyone else - which is fair enough as that's mostly what they are supposed to be for in this context. However what kind of distinction we're talking about does matter: is it showing you a different advert for trainers from the one your dad sees, or is it selecting you as a target for a Hellfire missile?

Some newspapers are having a ball with "algorithm" as a synonym for the inhumane objectivity of computers, liable to crush our privacy or worse. Here are two sample headlines from the Guardian over the last few weeks: "Do we want our children taught by humans or algorithms?", and "Has a rampaging AI algorithm really killed thousands in Pakistan?" Even the sober New York Times deemed it newsworthy when Instagram adopted an algorithm-based personalized feed in place of its previous reverse-chronological feed (a move made last year by its parent Facebook).

I'm not algorithmophobic myself, for the obvious reason that I've spent years using, analysing and even writing about the darned things in a column for Byte, but this experience grants me a more-than-average awareness of what algorithms can and can't do, where they are appropriate and what the alternatives are. What algorithms can and can't do is the subject of Algorithmic Complexity Theory, and it's only at the most devastatingly boring party that one is likely to be asked to explain that. ACT can tell you about whole classes of problem for which algorithms that run in manageable time aren't available. As for alternatives to algorithms, the most important is permitting raw data to train a neural network, which is the way the human brain appears to work: the distinction being that writing an algorithm requires you to understand a phenomenon sufficiently to model it with algebraic functions, whereas a neural net sifts structures from the data stream in an ever-changing fashion, producing no human-understandable theory of how that phenomenon works.
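
As a taste of what complexity theory is warning about, here's a stock example of my own choosing, not one from the column above: exact subset-sum solved by brute force has to try every one of the 2^n possible subsets, so its running time doubles with each extra item.

```python
# Brute-force subset-sum: correct, but hopeless once n gets large, because
# the number of subsets grows as 2^n.
from itertools import combinations

def subset_sum_bruteforce(numbers: list[int], target: int) -> bool:
    """Return True if any subset of `numbers` sums exactly to `target`."""
    for r in range(len(numbers) + 1):
        for combo in combinations(numbers, r):
            if sum(combo) == target:
                return True
    return False

print(subset_sum_bruteforce([3, 34, 4, 12, 5, 2], 9))   # True: 4 + 5
# At n = 60 there are 2^60 (about 10^18) subsets to check - "manageable time"
# is long gone.
```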

Some of the more important "algorithms" that are coming to affect our lives are actually more akin to the latter, applying learning networks to big data sets like bank transactions and supermarket purchases to determine your credit rating or your special offers. However those algorithms that worry people most tend not to be of that sort, but are algebraically based, measuring multiple variables and applying multiple weightings to them to achieve an ever greater appearance of "intelligence". They might even contain a learning component that explicitly alters weightings on the fly, Google's famous PageRank algorithm being an example.
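
For the curious, here is a stripped-down sketch of PageRank-style power iteration, my simplification of the published idea rather than anything resembling Google's production system: each page's weight is repeatedly shared out along its outgoing links, damped so that every page keeps a small baseline score.

```python
# Simplified PageRank by power iteration over a toy three-page web.
def pagerank(links: dict[str, list[str]], damping: float = 0.85,
             iterations: int = 50) -> dict[str, float]:
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            share = rank[page] / len(outgoing) if outgoing else 0.0
            for target in outgoing:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

toy_web = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
print(pagerank(toy_web))   # C, linked to by both A and B, ends up ranked highest
```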

The advantage of such algorithms is that they can be tweaked by human programmers to improve them, though this too can be a source of unpopularity: every time Google modifies PageRank a host of small online businesses get it in the neck. Another disadvantage is that such algorithms can "rot", their performance decreasing rather than increasing over time, prime examples being Amazon's you-might-also-like and Twitter's people-you-might-want-to-follow. A few years ago I was spooked by the accuracy of Amazon's recommendations, but that spooking ceased after it offered me a Jeffrey Archer novel: likewise when Twitter thought I might wish to follow Jimmy Carr, Fearne Cotton, Jeremy Clarkson and Chris Moyles.

Flickr too employs a secret algorithm to measure the "Interestingness" of my photographs: number of views is one small component, as is the status of the people who favourited it (not unlike PageRank's incoming links), but many more variables remain a source of speculation in the forums. I recently viewed my Top 200 pictures by Interestingness for the first time in ages and was pleasantly surprised to find the algorithm much improved. My Top 200 now contains more manipulated than straight-from-camera pictures; three of my top twenty are from recent months and most from the last year; all 200 are pix I'd have chosen myself; their order is quite different from "Top 200 ranked by Views", that is, what other users prefer. As someone who takes snapshots mostly as raw material for manipulation, I find it both remarkable and encouraging that the algorithm now suggests I'm improving rather than stagnating, and that it closely approximates my own taste. The lesson? Good algorithms in appropriate contexts are good, bad algorithms in inappropriate contexts are bad. But you already knew that didn't you...
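
Since the real formula is secret, the best I can offer is a purely hypothetical sketch of the kind of weighted scoring being speculated about here; every component name and weight below is invented for illustration, not leaked from Flickr.

```python
# Invented "interestingness" score: a weighted sum in which raw views count
# for little and the standing of the people who favourited counts for more.
def interestingness(views: int, faves: int, faver_status: float, comments: int) -> float:
    weights = {"views": 0.001, "faves": 1.0, "faver_status": 2.5, "comments": 0.8}
    return (weights["views"] * views
            + weights["faves"] * faves
            + weights["faver_status"] * faver_status
            + weights["comments"] * comments)

print(interestingness(views=1200, faves=15, faver_status=3.2, comments=4))
```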

SOCIAL UNEASE

Dick Pountain/Idealog 350/07 Sep 2023 10:58

Ten years ago this column might have listed a handful of online apps that assist my everyday...