Tuesday 3 July 2012

VOICE OF MULTITUDES

Dick Pountain/PC Pro/Idealog 208: 15/11/2011

It can't have escaped regular readers of this column that I'm deeply sceptical about several much-hyped areas of progress in IT. To pick just a couple of random examples, I've never really been very impressed by voice input technologies, and I'm extremely doubtful about the claims of so-called "strong" Artificial Intelligence, which insists that if we keep on making computers run faster and store more, then one day they'll become as smart as we are. As if that doesn't make me sound grouchy enough, I've been a solid Windows and PC user for 25 years and have never owned an Apple product. So surely I'm not remotely interested in Apple's new Siri voice system for the iPhone 4S? Wrong. On the contrary, I think Siri has an extraordinary potential that goes way beyond the purpose Apple purchased it for, which was to impress people's friends in wine bars the world over and thus sell more iPhones. It's not easy to justify such faith at the moment, because it depends upon a single factor - the size of the iPhone's user base - but I'll have a go.

I've been messing around off and on with speech recognition systems since well before the first version of Dragon Dictate, and for many years I tried to keep up with the research papers. I could ramble on about "Hidden Markov Models" and "power cepstrums" ad nauseam, and was quite excited, for a while, by the stuff that the ill-fated Lernout & Hauspie was working on in the late 1990s. But I never developed any real enthusiasm for the end results: I'm somewhat taciturn by nature, so having to talk to a totally dumb computer screen was something akin to torture for me ("up, up, up, left, no left you *!£*ing moron...").

This highlights a crucial problem for all such systems, namely the *content* of speech. It's hard enough to get a computer to recognise exactly which words you're saying, but even once it has they won't mean anything to it. Voice recognition is of very little use to ordinary citizens unless it's coupled to natural language understanding, and that's an even harder problem. I've seen plenty of pure voice recognition systems that are extremely effective when given a highly restricted vocabulary of commands - such systems are almost universally employed by the military in warplane and tank cockpits nowadays, and even in some factory machinery. But asking a computer to interpret ordinary human conversations with an unrestricted vocabulary remains a very hard problem indeed.

I've also messed around with natural language systems myself for many years, working first in Turbo Pascal and later in Ruby. I built a framework that embodies Chomskian grammar rules, into which I can plug different vocabularies so that it spews out sentences that are grammatical but totally nonsensical, like god-awful poetry:

    Your son digs and smoothly extracts a gleaming head
          like a squid.
    The boy stinks like a dumb shuttle.
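
For the curious, here's a toy sketch in Go of how such a plug-in-vocabulary generator works (not the Turbo Pascal or Ruby of my actual framework, and the grammar rules and word lists here are invented purely for illustration): pick a random expansion for each grammar symbol and recurse until only words remain.

    package main

    import (
        "fmt"
        "math/rand"
        "strings"
    )

    // A toy phrase-structure grammar: each symbol expands to one of several
    // alternatives chosen at random, and the terminal words come from
    // pluggable vocabulary lists, so swapping the word lists changes the
    // "poetry" without touching the rules.
    var grammar = map[string][][]string{
        "S":  {{"NP", "VP"}},
        "NP": {{"Det", "N"}, {"Det", "Adj", "N"}},
        "VP": {{"V", "NP"}, {"V", "Adv"}},
    }

    var vocabulary = map[string][]string{
        "Det": {"the", "your", "a"},
        "N":   {"son", "boy", "squid", "shuttle", "head"},
        "Adj": {"gleaming", "dumb"},
        "V":   {"digs", "extracts", "stinks"},
        "Adv": {"smoothly"},
    }

    // expand rewrites a symbol recursively until only words remain.
    func expand(symbol string) []string {
        if words, ok := vocabulary[symbol]; ok {
            return []string{words[rand.Intn(len(words))]}
        }
        alternatives := grammar[symbol]
        var out []string
        for _, sym := range alternatives[rand.Intn(len(alternatives))] {
            out = append(out, expand(sym)...)
        }
        return out
    }

    func main() {
        for i := 0; i < 3; i++ {
            fmt.Println(strings.Join(expand("S"), " ") + ".")
        }
    }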

So to recap, in addition to first recognising which words you just said, and then parsing the grammar of your sentence, the computer comes up against a third brick wall, meaning, and meaning is the hardest problem of them all.

However there has been a significant breakthrough on the meaning front during the last year. I'm talking of course about IBM's huge PR coup in having its Watson supercomputer system win the US TV quiz show "Jeopardy" against human competitors, which I discussed here back in Idealog 200. Watson demonstrated how the availability of cheap multi-core CPUs, when combined with software like Hadoop and UIMA capable of interrogating huge distributed databases in real time, can change the rules of the game when it comes to meaning analysis. In the case of the Jeopardy project, that database consisted of all the back issues of the show plus a vast collection of general knowledge stored in the form of web pages. I've said that I'm sceptical of claims for strong AI, that we can program computers to think the way we think - we don't even understand that ourselves and computers lack our bodies and emotions which are vitally involved in the process - but I'm very impressed by a different approach to AI, namely "case based reasoning" or CBR.

This basically says don't try to think like a human, instead look at what a lot of actual humans have said and done, in the form of case studies of solved problems, and then try to extract patterns and rules that will let you solve new instances of the problem. Now to apply a CBR-style approach to understanding human everyday speech would involve collecting a vast database of such speech acts, together with some measure of what they were intended to achieve. But surely collecting such a database would be terribly expensive and time-consuming? What you'd need is some sort of pocketable data terminal that zillions of people carry around with them during their daily rounds, and into which they would frequently speak in order to obtain some specific information. Since millions upon millions of these would be needed, somehow you'd have to persuade the studied population to pay for this terminal themselves, but how on earth could *that* happen? Duh.
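
A minimal sketch of the case-based idea, in Go, with a made-up handful of cases and a crude word-overlap similarity measure standing in for the real thing: answer a new utterance by retrieving the most similar solved case, no "understanding" required.

    package main

    import (
        "fmt"
        "strings"
    )

    // A solved "case": something a person actually said, plus what it turned
    // out they wanted. A real system would hold millions of these.
    type speechCase struct {
        utterance string
        intent    string
    }

    var cases = []speechCase{
        {"what's the weather like tomorrow", "weather_forecast"},
        {"remind me to call mum at six", "set_reminder"},
        {"find me a pizza place nearby", "local_search"},
    }

    // overlap is a crude similarity measure: how many words two utterances
    // share. Real systems use far richer features than this.
    func overlap(a, b string) int {
        seen := map[string]bool{}
        for _, w := range strings.Fields(a) {
            seen[w] = true
        }
        n := 0
        for _, w := range strings.Fields(b) {
            if seen[w] {
                n++
            }
        }
        return n
    }

    // classify answers a new utterance by retrieving the most similar
    // solved case rather than by reasoning from first principles.
    func classify(utterance string) string {
        best, bestScore := "unknown", 0
        for _, c := range cases {
            if s := overlap(c.utterance, utterance); s > bestScore {
                best, bestScore = c.intent, s
            }
        }
        return best
    }

    func main() {
        fmt.Println(classify("will the weather be nice tomorrow"))
    }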

Collecting and analysing huge amounts of speech data is a function of the Siri system, rendered possible by cloud computing and the enormous commercial success of the iPhone, and such analysis is clearly in Apple's own interest because it incrementally improves the accuracy of Siri's recognition and thus gives it a hard-to-match advantage over any rival system. The big question is, could Apple be persuaded or paid to share this goldmine of data with other researchers in order to build a corpus for a more generally available natural language processing service? Perhaps once its current bout of manic patent-trolling subsides a little we might dare to ask...

[Dick Pountain doesn't feel quite so stupid talking to a smartphone as he does to a desktop PC]

OOPS THEY'VE DONE IT AGAIN

Dick Pountain/PC Pro/Idealog 207 16/10/2011

Should there be anyone out there who's been reading me since my very first PC Pro column, I'd like to apologise in advance for revisiting a topic I've covered here no less than four times before (in 1994, 1995, 1997 and 2000). That topic is how Microsoft's OS designers just don't get what an object-oriented user interface (OOUI) ought to look like, and the reason I'm covering it again here is the announcement of the Metro interface for Windows 8, which you'll find described in some detail in Simon Jones' Advanced Office column on pg xxx of this issue. It would appear that, after 17 years, they still don't quite get it, though they're getting pretty close.

A proper OOUI privileges data over applications, so that your computer ceases to be a rat's nest of programs written by people like Microsoft, Adobe and so on and becomes a store of content produced by you: documents, spreadsheets, pictures, videos, tunes, favourites, playlists, whatever. Whenever you locate and select one of these objects, it already knows the sorts of things you might want to do with it, like view it, edit it, play it, and so it invisibly launches the necessary application for you to do that. Metro brings that ability to Windows 8 in the shape of "Tiles" which you select from a tablet's touch screen with your finger, and which cause an app to be launched. The emphasis is still on the app itself (as it has to be since Microsoft intends to sell these to you from its app store), but it is possible to create "secondary" Tiles that are pinned to the desktop and launch some particular data file, site, feed or stream.

It's always been possible to do something like this with Windows, in a half-arsed kind of way, and I've been doing so for 15 years now. It's very, very crude because it's wholly dependent upon fragile and ambiguous filename associations - assigning a particular application to open a particular type of file, identified by its name extension. Ever since Windows 95 my desktop has contained little but icons that open particular folders, and clicking on any file within one of these just opens it in Word, Textpad, Excel or whatever. I need no icons for Office applications, Adobe Reader or whatever, because I never launch them directly.

This was actually a horrid step backwards, because under Windows 3.1 I'd grown used to an add-on OOUI called WinTools that was years ahead of the game. Under WinTools desktop icons represented user data objects, which understood a load of built-in behaviours in addition to the app that opened and edited them. You could schedule them, add scripts to them, and have them talk to each other using DDE messages. It featured a huge scrolling virtual desktop, which on looking back bore an uncanny resemblance to the home screens on my Android phone. Regrettably Tool Technology Publishing, the small outfit that developed WinTools, was unable to afford to port it to Windows 95 and it disappeared, but it kept me using Windows 3.1 for two years after everyone else had moved on to 95.

That resemblance to Android is more than just coincidence because hand-held, touch-screen devices have blazed the trail toward today's object-oriented user interfaces. For most people this trend began with Apple's iPhone and iPod Touch, but to give credit where it's due PalmOS pioneered some of the more important aspects of OOUI. For example on the Palm Pilot you never needed to know about or see actual files: whenever you closed an app it automatically saved its content and resumed where you left off next time, a feature now taken for granted as absolutely natural by users of iPads and other tablets.

Actually, though, we've barely started to tap the real potential of OOUIs, and that's why Metro/Windows 8 is quite exciting, given Microsoft's expertise in development tools. Processing your data via intelligent objects implies that they should know how to talk to each other, and how to convert their formats from one app's to another's without manual intervention. As Simon Jones reports, the hooks to support this are already in place in Metro through its system of "contracts": objects of different kinds that implement Search, Share, Settings and Picker interfaces can contract to find or exchange data with one another seamlessly, which opens up a friendlier route to creating automated workflows.
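
To make the idea concrete, here's a loose sketch in Go - the interface names are mine, not Microsoft's actual WinRT contract types - of how two apps that have never heard of each other can exchange data through a shared contract brokered by the shell.

    package main

    import "fmt"

    // An app that can offer data and an app that can receive it agree on a
    // small interface, and the "shell" wires them together without either
    // one knowing anything about the other.
    type ShareSource interface {
        ShareData() (format string, payload []byte)
    }

    type ShareTarget interface {
        ReceiveShared(format string, payload []byte)
    }

    type photoApp struct{}

    func (photoApp) ShareData() (string, []byte) {
        return "image/jpeg", []byte("...jpeg bytes...")
    }

    type mailApp struct{}

    func (mailApp) ReceiveShared(format string, payload []byte) {
        fmt.Printf("attaching %d bytes of %s to a new message\n", len(payload), format)
    }

    // share brokers the exchange: any source can hand its data to any
    // target that honours the same contract.
    func share(src ShareSource, dst ShareTarget) {
        dst.ReceiveShared(src.ShareData())
    }

    func main() {
        share(photoApp{}, mailApp{})
    }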

In his Advanced Windows column last month Jon Honeyball sang the praises of Apple's OSX Automator, which enables data files to detect events like being dropped into a particular folder, and perform some action of your choice when they do so. This ability is built right into the file system itself, a step beyond Windows, where that would require VBA scripts embedded within Office documents (for 10 years I've had to use a utility called Macro Express to implement such inter-file automation). Now tablet-style OSes like Metro ought to make possible graphical automation interfaces: simply draw on-screen "wires" from one tile into an app, then into another, and so on to construct a whole workflow to process, for example, all the photographs loaded from a camera. Whoever cracks that in a sufficiently elegant way will make a lot of money.
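
The folder-watching half of that idea is simple enough to sketch. Here's a bare-bones polling version in Go - real systems hook file-system events rather than polling, and the folder name and action here are just placeholders.

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // watch polls a folder and runs an action once for each new file that
    // appears in it - the "drop a file here, something happens" idea.
    func watch(dir string, action func(name string)) {
        seen := map[string]bool{}
        for range time.Tick(2 * time.Second) {
            entries, err := os.ReadDir(dir)
            if err != nil {
                continue
            }
            for _, e := range entries {
                if !e.IsDir() && !seen[e.Name()] {
                    seen[e.Name()] = true
                    action(e.Name())
                }
            }
        }
    }

    func main() {
        watch("incoming-photos", func(name string) {
            // Placeholder action; a real workflow might resize or upload it.
            fmt.Println("new file appeared:", name)
        })
    }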

FRAGILE WORLD

Dick Pountain/PC Pro/Idealog 206/ 19/09/2011

I'm writing this column in the middle of a huge thunderstorm that possibly marks the end of our smashing Indian Summer in the Morra Valley (I know, sorry). Big storms in these mountains sometimes strike a substation and knock out our mains supply for half a day, but thankfully not this time - without electric power we have no water, which comes from a well via a fully-immersed demand pump. Lightning surges don't fry all the electrical goods in the house thanks to an efficient Siemens super-fast trip-switch, but years ago, before I had broadband, I lost a Thinkpad by leaving its modem plugged in. Lightning hit phone line, big mess, black scorchmarks all the way from the junction box...

Nowadays I have an OK mobile broadband deal from TIM (Telecom Italia Mobile), €24 per month for unlimited internet (plus a bunch of voice and texts I never use), which I don't have to pay when we're not here. It's fast enough to watch BBC live video streams and listen to Spotify, but sometimes it goes "No Service" for a few hours after a bad thunderstorm, as it has tonight. That gives me a sinking feeling in my stomach. I used to get that feeling at the start of every month, until I realised the €24 must be paid on the nail to keep service (and there's no error message that tells you so, just "No Service"). Now I know and I've set up with my Italian bank to top-up via their website - but if I leave it too late I have to try and do that via dial-up. Sinking feeling again.

Of course I use websites to deal with my UK bank, transfer funds to my Italian bank, pay my income tax and my VAT, book airline tickets and on-line check in. A very significant chunk of my life now depends upon having not merely an internet connection, but a broadband internet connection. And in London just as much as in the Umbro-Cortonese. I suspect I'm not alone in being in this condition of dependency. The massive popularity of tablets has seen lots of people using them in place of PCs, but of course an iPad is not much more than a fancy table mat without a 3G or Wi-Fi connection. But then, the internet isn't going to go away is it? Well, er, hopefully not.

After the torrid year of riots, market crashes, uprisings, earthquakes and tsunamis, and near-debt-defaults we've just had, who can say with the same certainty they had 10 years ago that every service we have now will always be available? The only time I ever detected fear in ex-premier Tony Blair's eyes was on the evening news during the 2000 petrol tanker drivers' strike, when it became clear we were just a day or so from finding empty shelves at the supermarket. In Alistair Darling's recent memoirs he comes clean that in 2008 - when he and Gordon were wrestling with the financial crisis precipitated by the collapse of Lehman Brothers - it was at one point possible that the world's ATMs could all go offline the next morning. Try to imagine it. It's not that all your money has gone away (yet), just that you can't get at it. How long would the queues be outside high-street branches, and how long would you be stuck in one? My bank repeatedly offers me a far better interest rate if I switch to an account that's only accessible via the internet, but much as it pains me I always refuse.

Now let's get really scary and start to talk about the Stuxnet Worm and nuclear power stations, Chinese state-sponsored hackers, WikiLeaks and Anonymous and phishing and phone hacking. Is it possible that we haven't thought through the wisdom of permitting our whole lives to become dependent upon networks that no-one can police, and none but a handful of engineers understand or repair? When a landslide blocked the pass between our house and Castiglion Fiorentino a few years back, some men with a bulldozer from the Commune came to clear it, but at a pinch everyone in the upper valley could have pitched in (they all have tractors). Not so with fixing the internet. I might know quite a lot about TCP/IP, but I know bugger-all about cable splicing or the signalling layers of ATM and Frame Relay, or DSLAMs.

What contributes most to the fragility of our brave new connected world though is lack of buffering. Just-in-time manufacturing and distribution mean that no large stocks are kept of most commodities, everyone assuming that there will always be a constant stream from the continuous production line, always a delivery tomorrow. It's efficient, but if it stops you end up with nothing very fast. Our water is like that: shut off mains power and two seconds later, dry. I could fix that by building a water-tank and having the pump keep it full, via a ballcock valve like a big lavatory cistern. Then I could buy a lot of solar panels and a windmill to keep the pump running (plus my laptop). I could even buy a little diesel generator and run it on sunflower oil. I'm not saying I will, but I won't rule it out quite yet...

GRAND THEFT TECHNO

Dick Pountain/PC Pro/Idealog 205/  14/08/2011

Watching CCTV footage of the London riots shot from a high perspective, it was hard not to be reminded of video games like Grand Theft Auto. I don't want to open up that rusting can of worms about whether or not violent games cause imitation - the most I'll say is that these games perhaps provide an aesthetic for violence that happens anyway. The way participants wear their hoods, the way they leap to kick in windows, even the way they run now appears a little choreographed because we've seen the games. But this rather superficial observation underestimates the influence of digital technologies in the riots. The role of Blackberry messaging and Twitter in mustering rioters at their selected targets has been chewed over by the mainstream press ad nauseam, and David Cameron is now rumbling about suspending such services during troubles. This fits in well with my prediction, back in Idealog 197, that governments are now so nervous about the subversive potential of social media that the temptation to control them is becoming irresistible.

The influence of technology goes deeper still. The two categories of goods most looted during the riots were, unsurprisingly, clothes and electronic devices, and the reason isn't too hard to deduce - brands have become a crucial expression of identity to several generations of kids. Danny Kruger, a youth worker and former adviser to David Cameron, put it like this in the Financial Times: "We have a generation of young people reared on cheap luxuries, especially clothes and technology, but further than ever from the sort of wealth that makes them adults. A career, a home of your own - the things that can be ruined by riots - are out of sight. Reared on a diet of Haribo, who is surprised when they ransack the sweetshop?"

The latest phase of the hi-tech revolution makes this gap feel wider still. Neither PCs nor laptops were ever very widely desired: only nerds could set them up and they were barely usable for the exciting stuff like Facebook or 3D games. Steve Jobs and his trusty designer Jonathan Ive, together with Sony and Nintendo, changed that for ever. Electronic toy fetishism really took off with the iPod (which just about every kid in the UK now possesses) but it reached a new peak over the last year with the iPad, ownership of which has quickly become the badge of middle-class status. These riots weren't about relative poverty, nor unemployment, nor police brutality, nor were they just about grabbing some electronic toys for free. They were an outburst of rage (tinged with disgust) against exclusion from full membership of a world where helping yourself to public goods - as MPs and bankers are seen to do - is rife, and where you are judged by the number and quality of your toys. They demonstrated a complete collapse of respect for others' property.

I've been arguing for years that the digital economy is a threat to the very concept of property. Property is not a relationship between persons and things but rather a relationship between persons *about* things. This thing is mine, you can't take it, but I might give it, sell it or rent it to you. This relationship only persists so long as most people respect it and those who don't are punished by law. The property relationship originally derives from two sources: from the labour you put into getting or making a thing, and from that thing's *exclusivity* (either I have it or you have it, but not both of us). Things like air and seawater that lack such exclusivity have never so far been made into property, and digital goods, for an entirely different reason, fall into this category. Digital goods lack exclusivity because the cost of reproducing them tends toward zero, so both you and I can indeed possess the same game or MP3 tune, and I can give you a copy without losing my own. The artist who originally created that game or tune must call upon the labour aspect of property to protest that you are depriving them of revenue, but to end users copying feels like a victimless crime and, what's more, one for which punishment has proved very difficult indeed.

I find it quite easy to distinguish between digital and real (that is, exclusive) goods, since most digital goods are merely representations of real things. A computer game represents an adventure which in real life might involve you being shot dead. But I wonder whether recent generations of kids brought up with ubiquitous access to the digital world aren't losing this value distinction. I don't believe that violent games automatically inspire violence, but perhaps the whole experience of ripping, torrents and warez, of permanent instant communication with virtual friends, is as much responsible for destroying respect for property as weak parenting is. Those utopians who believe that the net could be the basis of a "gift economy" need to explain precisely how, if all software is going to be free, its authors are going to put real food on real tables in real houses that are really mortgaged. And politicians of all parties are likely to give the police ever more powers to demonstrate that life is not a computer game.

[Dick Pountain is writing a game about a little moustachioed Italian who steals zucchini from his neighbour's garden, called "Grand Theft Orto"]

UNTANGLED WEB?

Dick Pountain/PC Pro/Idealog 204/14/2011

In my early teens I avidly consumed science-fiction short stories (particularly anthologies edited by Groff Conklin), and one that stuck in my mind was "A Subway Named Moebius", written in 1950 by US astronomer A.J. Deutsch. It concerned the Boston subway system, which in some imagined future had been extended and extended until its connectivity exceeded some critical threshold, so that when a new line was opened a train full of passengers disappeared into the fourth dimension where it could be heard circulating but never arrived. The title is an allusion to the Moebius Strip, an object beloved of topologists which is twisted in such a way that it only has a single side.

I've been reminded of this story often in the last few years, as I joined more and more social networks and attempted to optimise the links between them all. My first, and still favourite, is Flickr to which I've been posting photos for five years now. When I created my own website I put a simple link to my Flickr pix on it, but that didn't feel enough. I soon discovered that Google Sites support photogalleries and so placed a feed from my Flickr photostream on a page of my site. Click on one of these pictures and you're whisked across to Flickr.

Then I joined Facebook, and obviously I had to put links to Flickr and to my own site in my profile there. I started my own blog and of course wanted to attract visitors to it, so I started putting its address, along with that of my website, in the signature at the bottom of all my emails. Again that didn't feel like enough, so I scoured the lists of widgets available on Blogger and discovered one that would enable me to put a miniature feed from my blog onto the home page of my website. Visitors could now click on a post and be whisked over to the blog, while blog readers could click a link to go to my website.   

Next along came LibraryThing, a bibliographic site that permits you to list your book collection, share and compare it with other users. You might think this would take months of data entry, but the site is cleverly designed and connected to the librarians' ISBN database, so merely entering "CONRAD DARKNESS" will find all the various editions of Heart of Darkness, and a single click on the right one enters all its details. I put 800+ books up in a couple of free afternoons. It's an interesting site for bookworms, particularly to find out who else owns some little-read tome. I suppose it was inevitable that LibraryThing would do a deal with Facebook, so I could import part of my library list onto my Facebook page too.

I've written a lot here recently about my addiction to Spotify, where I appear to have accumulated 76 public playlists containing over 1000 tracks: several friends are also users and we swap playlists occasionally. But then, you guessed it, Spotify did a deal with Facebook, which made it so easy (just press a button) that I couldn't resist. Now down the right-hand edge of my Spotify window appears a list of all my Facebook friends who are also on Spotify - including esteemed editor Barry Collins - and I can just click one to hear their playlists.

There are now so many different routes to get from each area of online presence to the others that I've completely lost count, and the chains of links often leave me wondering quite where I've ended up. I haven't even mentioned LinkedIn, because it has so far  refrained from integrating with Facebook (though of course my profile there has links to my Flickr, blog and websites). And this is just the connectivity between my own various sites: there's a whole extra level of complexity concerning content, because just about every web page I visit offers buttons to share it with friends or to Facebook or wherever.

It's all starting to feel like "A Social Network Named Moebius" and I half expect that one day I'll click a link and be flipped into the fourth dimension, where everything becomes dimly visible as through frosted glass and no-one can hear me shouting. That's why my interest was piqued by Kevin Partner's Online Business column in this issue, where he mentions a service called about.me. This site simply offers you a single splash page (free of charge at the moment) onto which you can place buttons and links to all your various forms of web content, so a visitor to this single page can click onto any of them. Now I only need add "about.me/dick.pountain" to each email instead of a long list of blogs and sites. Easy-to-use tools let you design a reasonably attractive page and offer help submitting it to the search engines - ideally it should become the first hit for your name in Google. Built-in analytical tools keep track of visits, though whether it's increased my traffic I couldn't say - I use the page myself, in preference to a dozen Firefox shortcuts.

[Dick Pountain regrets the name "about.me" has a slightly embarrassing Californian ring to it - but that's enough about him]

PADDED SELL

Dick Pountain/PC Pro/12/06/2011/Idealog 203 

I'm a child of the personal computer revolution, one who got started in this business back in 1980 without any formal qualifications in computing as such. In fact I'd "used" London University's lone Atlas computer back in the mid 1960s, if by "used" you understand handing a pile of raw scintillation counter tapes to a man in a brown lab coat and receiving the processed results as a wodge of fanfold paper a week later. Everyone was starting out from a position of equal ignorance about these new toys, so it was all a bit like a Wild West land rush.

When Dennis Publishing (or H.Bunch Associates as it was then called) first acquired Personal Computer World magazine, I staked out my claim by writing a column on programmable calculators, which in those days were as personal as you could get, because like today's smartphones they fitted into your shirt-pocket. They were somewhat less powerful though: the Casio FX502 had a stonking 256 *bytes* of memory but I still managed to publish a noughts-and-crosses program for it that played a perfect game.

The Apple II and numerous hobbyist machines from Atari, Dragon, Exidy, Sinclair and others came and went, but eventually the CP/M operating system, soon followed by the IBM PC, enabled personal computers to penetrate the business market. There ensued a couple of decades of grim warfare during which the fleet-footed PC guerilla army gradually drove back the medieval knights-in-armour of the mainframe and minicomputer market, to create today's world of networked business PC computing. And throughout this struggle the basic ideology of the personal computing revolution could be simply expressed as "at least one CPU per user". The days of sharing one hugely-expensive CPU were over and nowadays many of us are running two or more cores each, even on some of the latest phones.

Focussing on the processor was perhaps inevitable because the CPU is a PC's "brain", and we're all besotted by the brainpower at our personal disposal. Nevertheless storage is equally important, perhaps even more so, for the conduct of personal information processing. Throughout the 30 years I've been in this business I've always kept my own data, stored locally on a disk drive that I own. It's a mixed blessing to say the least, and I've lost count of how many of these columns I've devoted to backup strategies, how many hours I've spent messing with backup configurations, how many CDs and DVDs of valuable data are still scattered among my bookshelves and friends' homes. As a result I've never lost any serious amount of data, but the effort has coloured my computing experience a grisly shade of paranoid puce. In fact the whole fraught business of running Windows - image backups, restore disks, reinstalling applications - could be painted in a similar dismal hue. 

In a recent column I confessed that nowadays I entrust my contacts and diary to Google's cloud, and that I'm impressed by the ease of installation and maintenance of Android apps. Messrs Honeyball and Cassidy regularly cover developments in virtualisation, cloud computing and centralised deployment and management that all conspire to reduce the neurotic burden of personal computing. But even with such technological progress it remains both possible and desirable to maintain your own local copy of your own data, and I still practise this by ticking the offline option wherever it's available. It may feel as though Google will be here forever, but you know that *nothing* is forever.

Sharing data between local and cloud storage forces you sharply up against licensing and IP (intellectual property) issues. Do you actually own applications, music and other data you download, even when you've paid for them? Most software EULAs say "no, you don't, you're just renting". The logic of 21st-century capitalism decrees IP to be the most valuable kind of asset (hence all that patent trolling) and the way to maximise profits is to rent your IP rather than sell it - charge by "pay-per-view" for both content and executables. But, despite Microsoft's byzantine licensing experiments, that isn't enforceable so long as people have real local storage because it's hard to grab stuff back from people's hard drives.

Enter Steve Jobs stage left, bearing iPad and wearing forked tail and horns. Following the recent launch of iCloud, iPad owners no longer need to own either a Mac or a Windows PC to sync their music and apps with iTunes. Microsoft is already under great pressure from Apple's phenomenal tablet success, and might just decide to go the same way by allowing Windows phones and tablets to sync directly to the cloud. In that case sales of consumer PCs and laptops are destined to fall, and with them volume hard disk manufacture. The big three disk makers have sidestepped every prediction of their demise for 20 years, but this time it might really be the beginning of the end. Maybe it will take five or ten years, but a world equipped only with flash-memory tablets syncing straight to cloud servers is a world that's ripe for a pay-per-view counter-revolution. Don't say you haven't been warned.

[Dick Pountain can still remember when all his data would fit onto three 5.25" floppy disks]

GO NERDS, GO

Dick Pountain/PC Pro/Idealog 202/11/05/2011

I've been a nerd, and proud of it, more or less since I could speak. I decided I wanted to be a scientist at around 9 years old, and loathed sport at school with a deep passion. (During the 1960s I naively believed the counterculture was an alliance of everyone who hated sport, until all my friends came out as closet football fans in 1966). However my true nerdly status was only properly recognised a week ago when, totally frazzled by wrestling with Windows 7 drivers, for a diversion I clicked on a link to www.nerdtests.com and filled in the questionnaire. It granted me the rank of Uber Cool Nerd King, and no award has ever pleased me more (a Nobel might, but I fear I've left that a bit too late).

So what exactly constitutes nerdhood? To Hollywood a nerd is anyone in thick-rimmed spectacles with a vocabulary of more than 1000 words, some of which have more than three syllables - the opposite of a jock or frat-boy. What a feeble stereotype. In the IT world a nerd is anyone who knows what a .INF file is for and can use a command prompt without their hands shaking, but that's still a bit populist for an Uber Cool Nerd King. "Developers" who know at least four programming languages might be admitted to the lower echelons, but true nerd aristocracy belongs only to those with a deep knowledge and love of programming language *design*. If you've ever arm-wrestled someone to solve a dispute over late versus early binding, you might just be a candidate.

In the late 1980s and early '90s I steeped myself in programming language design. I could write in 14 different languages, some of them leading edge like Oberon, Occam, Prolog and POP-11. I wrote books on object-orientation and coded up my own recursive-descent parsers. I truly believed we were on the verge of making programming as easy and fun as building Lego models, if only we could combine Windows' graphical abilities with grown-up languages that supported lists, closures, concurrency, garbage collection and so on. That never happened because along came the Web and the Dot Com boom, and between them they dumbed everything down. HTML was a step backward into the dark ages so far as program modularity and security were concerned, but it was simple and democratic and opened up the closed, esoteric world of the programmer to everybody else. If you'd had to code all websites in C++ then the Web would be about one millionth the size it is today. I could only applaud this democratic revolution and renounce my nerdish elitism, perhaps for ever. Progress did continue in an anaemic sort of way with Java, C#, plus interpreted scripting languages like Python and Ruby that modernised the expressive power of Lisp (though neither ever acquired a tolerable graphics interface).

But over the last few years something wonderful has happened, a rebirth of true nerdhood sparked by genuine practical need. Web programming was originally concerned with page layout (HTML, Javascript) and serving (CGI, Perl, ASP,.NET, ColdFusion), then evolved toward interactivity (Flash, Ajax) and dynamic content from databases (MySQL, PostgreSQL, Drupal, Joomla). This evolution spanned some fifteen-odd years, the same years that new giant corporations like Google, eBay and Amazon began to face data processing problems never encountered before because of their sheer scale. Where once the supercomputer symbolised the bleeding edge of computing - thousands of parallel processors on a super-fast bus racing for the Teraflop trophy - nowadays the problem has become to deploy hundreds of thousands of processors, not close-coupled but scattered around the planet, to retrieve terabytes of distributed data fast enough to satisfy millions of interactive customers. A new class of programming languages is needed, designed to control mind-bogglingly huge networks of distributed processors in efficient and fault-tolerant fashion, and that need has spawned some profoundly nerdish research.

Languages like Erlang and Scala have resurrected the declarative programming style pioneered by Lisp and Haskell, which sidesteps many programming errors by deprecating variable assignment. Google has been working on the Go language to control its own huge processor farms. Designed by father-of-Unix Ken Thompson, Rob Pike and Java-machine specialist Robert Griesemer, Go strips away the bloat that's accumulated around object-oriented languages: it employs strong static typing and compiles to native code, so is extremely efficient for system programming. Go has garbage collection for security against memory leaks but its star feature is concurrency via self-synchronising "channels", derived (like Occam's) from Tony Hoare's seminal work on CSP (Communicating Sequential Processes). Channels are pipes that pass data between lightweight concurrent processes called "goroutines", and because they're first-class objects you can send channels through channels, enabling huge network programs to reconfigure their topology on the fly - for example to route around a failed processor or to balance a sudden load spike. Google has declared Go open-source and recently released a new version of its App Engine that offers a Go runtime in addition to supporting Java and Python. I had sworn that Ruby would be my final conquest, but an itching under my crown signals the approach of a nerd attack...
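
Here's a small illustrative Go program (not Google's code, obviously) showing channels as first-class values: each goroutine hands its own input channel to the dispatcher through another channel, so the set of live workers can change while the program runs.

    package main

    import (
        "fmt"
        "sync"
    )

    // worker creates its own input channel, sends that channel through the
    // register channel, then serves jobs from it in a goroutine.
    func worker(id int, register chan<- chan string, wg *sync.WaitGroup) {
        in := make(chan string)
        register <- in // a channel sent through a channel
        go func() {
            defer wg.Done()
            for job := range in {
                fmt.Printf("worker %d handled %s\n", id, job)
            }
        }()
    }

    func main() {
        register := make(chan chan string, 3)
        var wg sync.WaitGroup

        // Spin up three lightweight goroutines and collect their channels.
        var inputs []chan string
        for id := 1; id <= 3; id++ {
            wg.Add(1)
            worker(id, register, &wg)
            inputs = append(inputs, <-register)
        }

        // Round-robin some work across whichever workers registered;
        // dropping or adding a channel here reconfigures the "topology".
        jobs := []string{"query-a", "query-b", "query-c", "query-d"}
        for i, job := range jobs {
            inputs[i%len(inputs)] <- job
        }
        for _, in := range inputs {
            close(in)
        }
        wg.Wait()
    }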

[Dick Pountain decrees that all readers must wear white gloves and shades while handling his column, and must depart walking backwards upon finishing it.]

FREE TO BROWSE

Dick Pountain/PC Pro/Idealog 201     15/04/2011

Last month I finally succumbed and took out a subscription to Spotify Premium, and the steps that led me there are rather illuminating about this increasingly popular online music business model. In his RWC Web Applications column this issue, Kevin Partner experiments with free versus paid-for web services, and I believe my experience with Spotify, when contrasted with Apple's iTunes model, complements Kevin's insights rather well.

I've been using Spotify for two and a half years now, almost from its 2008 launch, but for the first two I used only the free, ad-supported version and stoutly resisted their paid-for service. I soon became hooked on the ability to sample different kinds of unfamiliar music that I would never have thought of buying, and so when I discovered that the free service wasn't available in Italy I struggled for a while using UK proxy servers but eventually caved in and signed up for the £5 per month Unlimited service, which *is* available in Italy and has no adverts. Then a couple of months ago I discovered that only the £10 per month Premium service is available for mobile phones, and I was sufficiently curious to try Spotify on my Android mobile that I coughed up (I could always cancel if I didn't like it). But the increase in utility was so enormous that suddenly it seemed cheap at the price.

It may just be that I'm the ideal Spotify customer and that this process of luring by degrees won't work on younger users, users who have the iPod habit, or users with particularly fixed and narrow tastes in music. But I'm not so sure. Kevin's conclusion is that any payment at all is an enormous obstacle to new custom, and that offering free trials is more cost-effective than even the smallest compulsory subscription. You then need to devise a premium offering that adds sufficient utility to persuade people to pay up, which means that it has to be truly excellent, not just a bit of added fluff. I first encountered, and was captured by, a similar model with the New York Review of Books, whose print edition I've subscribed to for many years. As I spent more and more time working online it became more and more useful to me to be able to search that magazine's archive and download previous articles, especially when abroad where my paper copy wasn't available. The NYRB wisely charges a modest $20 annual premium for such full archive access, which I gladly pay and feel I get value from.

The contrast with Apple's philosophy of tying closed hardware firmly to tightly-regulated marketplaces is stark. The price of individual tracks from iTunes is sufficiently modest not to deter buyers, but buy them you must - no free roaming around the store. This model is a phenomenal commercial success, to the point where almost the whole print publishing (and soon film) industry is grovelling to Steve Jobs to save them from digital apocalypse. The problem for me with this model is that it's brand dependent: you need to know what you want, whereupon Apple will supply it in unbeatably slick and transparent fashion. My problem is that I don't consume music (or books) that way.

I spend a lot of time reading and walking and in neither case do I want a laptop or tablet in my hand. I've never really learned to love the iPod though I have used MP3 players. I possess a large, eclectic record collection on vinyl and CD which ranges from opera and classical through jazz, blues, soul, old R&B to country and bluegrass. I've ripped some of my CDs but could never be bothered to digitize all that vinyl, nor pay thousands of pounds to those services that do it for you. My tastes are highly volatile and unpredictable so even a few thousand tracks on an MP3 player aren't enough, and access is too clumsy.

Whole evenings on Spotify are devoted to hearing every conceivable version of "These Foolish Things", or comparing half a dozen performances of the Goldbergs, or exploring some wholly new genre. Last week I went to a modern guitar recital at the Wigmore Hall and afterwards spent days exploring composers like Brouwer, Bogdanovic and Duarte. I'd never do that if I had to explicitly pay even 50p to download each track (even ones I might reject after 10 seconds). The freedom to browse is worth £120 per year to me because my Android phone, plugged into the hi-fi or earphones, has become my sole music source, replacing both CDs and downloads.

Not all record companies and managements have yet signed up with Spotify - notable exceptions being Bob Dylan and The Beatles - but I have what I want of them in my collection (and if the service succeeds in the USA I believe they'll come knocking). Such free, ad-supported services that lead you toward value-added premium services will eventually prove more effective at extracting payment than Apple's walled-garden approach, for books, film and TV as well as music. Encouraging people to explore and broaden their tastes rather than reinforcing brand loyalties also spreads the proceeds to artists outside of the Top 10, which could be why some larger companies are resisting...

JUST HOW SMART?

Dick Pountain/15 March 2011 14:05/Idealog 200

The PR boost that IBM gleaned from winning the US quiz show Jeopardy, against two expert human opponents, couldn't have come at a better time. We in the PC business barely register the company's existence since it stopped making PCs and flogged off its laptop business to Lenovo a few years ago, while the public knows it only from those completely incomprehensible black-and-blue TV adverts. But just how smart is Watson, the massively parallel Power 7-based supercomputer that won this famous victory?

It's probably smart enough to pass a restricted version of the Turing Test. Slightly reconfigure the Jeopardy scenario so a human proxy delivers Watson's answers and it's unlikely that anyone would tell the difference. Certainly Jeopardy is a very constrained linguistic environment compared to free-form conversation, but the most impressive aspect of Watson's performance was its natural language skill. Jeopardy involves guessing what the question was when supplied with the answer, which may contain puns, jokes and other forms of word play that tax average human beings (otherwise Jeopardy would be easy). For example a question "what clothing a young girl might wear on an operatic ship" has the answer "a pinafore", the connection - which Watson found - being Gilbert and Sullivan's opera H.M.S. Pinafore.

Now I'm a hardened sceptic about "strong" AI claims concerning the reasoning power of computers. It's quite clear that Watson doesn't "understand" either that question or that answer the way that we do, but its performance impressed in several respects. Firstly its natural language processing (NLP) powers go way beyond any previous demonstration: it wasn't merely parsing questions into nouns, verbs and so on and their grammatical relationships, but also extracting *semantic* relationships, which it used as entry points into vast trees of related objects (called ontologies in NLP jargon) to create numerous diverging paths to explore in looking for answers. Secondly it determined its confidence in the various combinations generated by such exploration using probabilistic algorithms. And thirdly it did all this within the three seconds allowed in Jeopardy.

Watson retrieves data from a vast unstructured text database that contains huge quantities of general knowledge info as well as details of all previous Jeopardy games to provide clues to the sort of word-games it employs. 90 rack-mounted IBM servers, running 2,880 Power 7 cores, present over 15 Terabytes of text equivalent to 200,000,000 pages (all stored locally for fairness, since the human contestants weren't allowed Google) and all accessible at 500GB/sec. This retrieval process is managed by two open-source software frameworks. The first, developed by IBM and donated to the Apache project, is UIMA (Unstructured Information Management Architecture), which sets up multiple tasks called annotators to analyse pieces of text, create assertions about them and assign probabilities to these assertions.

The second is Hadoop, which Ian Wrigley has covered recently in our Real World Open Source column. This massively parallel distributed processing framework - inspired by Google's MapReduce and employed by the likes of Yahoo and Amazon - is used to place the annotators onto Watson's 2,880 processors in an optimal way so that work on each line of inquiry happens close to its relevant data. In effect the UIMA/Hadoop combination tags just those parts of this vast knowledge base that might be relevant to a particular query, on the fly. This may not be the way our brains work (we know rather little about low-level memory mechanisms) but it's quite like the way that we work on our computers via Google: search for a couple of keywords, click further links in the top listed documents to build an ad hoc path through the vast ocean of data.
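
Purely to illustrate the shape of the annotator idea - this bears no resemblance to the real UIMA API - here's a toy sketch in Go in which independent annotators each assert a confidence about a candidate answer and the scores are combined.

    package main

    import (
        "fmt"
        "strings"
    )

    // An assertion records a piece of evidence and a confidence score.
    type assertion struct {
        evidence   string
        confidence float64
    }

    type annotator func(clue, candidate string) assertion

    // Rewards candidates that share words with the clue.
    func overlapAnnotator(clue, candidate string) assertion {
        score := 0.0
        for _, w := range strings.Fields(strings.ToLower(candidate)) {
            if strings.Contains(strings.ToLower(clue), w) {
                score += 0.2
            }
        }
        return assertion{"word overlap with clue", score}
    }

    // Rewards candidates that look like proper names (leading capital).
    func properNameAnnotator(clue, candidate string) assertion {
        if candidate != "" && candidate[0] >= 'A' && candidate[0] <= 'Z' {
            return assertion{"looks like a proper name", 0.3}
        }
        return assertion{"no leading capital", 0.0}
    }

    // score runs every annotator over a candidate and sums the confidences.
    func score(clue, candidate string, annotators []annotator) float64 {
        total := 0.0
        for _, a := range annotators {
            total += a(clue, candidate).confidence
        }
        return total
    }

    func main() {
        clue := "clothing a young girl might wear on an operatic ship"
        anns := []annotator{overlapAnnotator, properNameAnnotator}
        for _, cand := range []string{"a pinafore", "H.M.S. Pinafore"} {
            fmt.Printf("%-18s confidence %.2f\n", cand, score(clue, cand, anns))
        }
    }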

Web optimists describe this as using Google as an extension of our brains, while web pessimists like Nicholas Carr see it as an insidious process of intellectual decay. In his book "The Shallows: How the Internet is Changing the Way We Think, Read and Remember", Carr suggests that the phenomenon called neuroplasticity allows excessive net surfing to remodel our brain structure, reducing our attention span and destroying our capacity for deep reading. On the other hand some people think this is a good thing. David Brooks in the New York Times said that "I had thought that the magic of the information age was that it allows us to know more, but then I realised that the magic is... that it allows us to know less". You don't need to remember it because you can always Google it.

It's unlikely such disputes can be resolved by experimental evidence because everything we do may remodel our brain: cabbies who've taken "the knowledge" have enlarged hippocampuses while pianists have enlarged areas of cortex devoted to the fingers. Is a paddle in the pool of Google more or less useful/pleasurable to you than a deep-dive into Heidegger's "Being and Time"? Jim Holt, reviewing Carr's book in the London Review of Books, came to a more nuanced conclusion. The web isn't making us less intelligent, nor is it making us less happy, but it might make us less creative. That would be because creativity arises through the sorts of illogical and accidental connection (short circuits if you like) that just don't happen in the stepwise semantic chains of either a Google search or a Watson lookup. In fact, the sort of imaginative leaps we expect from a Holmes rather than a Watson...

WRITE ON

Dick Pountain/17 February 2011 14:53/Idealog 199

Around a year ago in issue 186 I pronounced that the iPad's touch interface was the future of personal computing, which was greeted with a certain scepticism by colleagues who thought perhaps I'd become an Apple fanboi or else had a mild stroke, the symptoms being very similar. Events since have made me surer than ever: the touch interface feels right in exactly the same way the windows/icons/mouse/pull-down interface felt right the first time I used it. The number of people who got iPads this Christmas was further evidence, and most convincing for me was a good friend who hates computers (but likes iPods and email) who bought himself one for Christmas. When I visited a few weeks ago he had it on a perspex stand with one of those slim aluminium keyboards underneath, and told me that he'd junked his hated laptop completely: the iPad now did everything he needed (apart from a raging addiction to Angry Birds). Evidence is piling up that 2011 will be the year of the tablet, and even Microsoft will be only one year late to the party this time around, with a Windows 8 tablet by 2012 according to PC Pro's news desk. HP's recent announcement of its WebOS tablet perhaps offers real competition for Apple, in a way that cheaply thrown-together Android tablets can't do until some future version of that OS arrives.

I don't have an iPad myself, partly because I won't pay the sort of money Apple is asking, partly because I've never been an Apple person, partly because I hate iTunes. Also I *don't* hate laptops, I have a gorgeous Sony Vaio that weighs no more than an iPad, and I'm in no hurry. I do now use an Android phone and am using that to familiarise myself with the quirks of touch-based computer interaction, and so far I love it except for one aspect, and that is entering text.

I often just find myself staring paralysed inside some application when I need to enter a word but nothing is visible except a truly tiny slot and the keyboard hasn't popped up yet (yes Wikipedia, that means you). My phone's screen is smaller than an iPhone's and typing at any speed on its on-screen keyboard is hard, even with vibrating "haptic feedback" on. My Palm Treo had Blackberry-style hard keys which makes the contrast greater still. I've eventually settled on CooTek's Touchpal soft keyboard which puts two letters on each enlarged keytop (sideswipe your thumb to get the second) and has a powerful predictive text capability.

I appreciate that the larger screen of a tablet makes using an on-screen keyboard less finicky, but there's another mental factor at work because on generations of Palms I was an enthusiastic and expert user of Graffiti handwriting recognition. I'm totally used to being able to roughly scribble a letter with my fingertip onto the screen in the Contacts application and have it go straight to someone's name, invaluable on dark nights, in the rain, when you've lost your specs and so on. The reason I got so good at Graffiti was no thanks to Palm - which completely screwed up with Graffiti 2 - but thanks to TealPoint Software, whose wonderful TealScript app enabled me to use the whole screen area inside any application, to get caps and numbers by shifting over to the right, and to customise the strokes for particular glyphs I found difficult.

Now handwriting recognition just isn't feasible on Android or iPhones as their screens are too small and the capacitive technology they employ lacks positional precision. However it ought to be possible, using a fingertip rather than a stylus, on the larger screen of a tablet, and indeed there's already a handwriting application for the iPad called WritePad, mostly aimed at children, which lets you scribble with a finger over a wide screen area. It seems to me that by combining such a recognition engine with two other existing software techniques, handwriting could become a major input method for tablets. The first technique is predictive text, as employed in TouchPal, which gradually analyses your personal vocabulary into a user dictionary and so improves its guesses. The second is the "tag cloud", a graphical UI trick familiar from social networking sites like Flickr, del.icio.us and Technorati, in which a collection of user tags gets displayed in a random clump in font sizes proportional to their frequency of use.

I can imagine a tablet interface in which you scribble with your finger onto the screen, leaving a semi-transparent trace for feedback, and the predictive text engine generates a tag cloud - also semi-transparent - of the most likely candidates, in which both the size and the proximity to your current finger position of each candidate word is proportional to its probability. This cloud would need to be animated and  swirl gently as your finger moved, which should be possible with next generation mobile CPUs. When you see the right word in the cloud, a quick tap inserts it into the growing text stream. From what I've seen of the development tools for iPad and Android, I'm well past writing such a beast myself, so I bequeath the idea to some bright spark out there.
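
The predictive half of that scheme is easy enough to sketch. Here's a toy version in Go with an invented user dictionary, ranking candidate words by frequency of use - the cloud would then render each word at a size (and proximity) proportional to its weight.

    package main

    import (
        "fmt"
        "sort"
        "strings"
    )

    // A user dictionary mapping words to how often they've been used;
    // the frequencies here are made up for illustration.
    var userDict = map[string]int{
        "the": 420, "there": 80, "then": 95, "these": 60,
        "therefore": 12, "thermal": 4,
    }

    type candidate struct {
        word   string
        weight int
    }

    // predict returns all dictionary words matching the letters recognised
    // so far, ranked by frequency - the raw material for the tag cloud.
    func predict(prefix string) []candidate {
        var out []candidate
        for w, freq := range userDict {
            if strings.HasPrefix(w, prefix) {
                out = append(out, candidate{w, freq})
            }
        }
        sort.Slice(out, func(i, j int) bool { return out[i].weight > out[j].weight })
        return out
    }

    func main() {
        for _, c := range predict("the") {
            fmt.Printf("%-10s weight %d\n", c.word, c.weight)
        }
    }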

[Dick Pountain is impressed that a Palm Pilot could decipher his handwriting, given that most humans can't.]

TEXT TWEAKER

Dick Pountain/16 January 2011 12:54/Idealog 198

A couple of years back you'd still hear arguments about whether or not electronic readers could ever take over from print-on-paper. That already feels like a long time ago. I found myself in a mid-market hotel just before Christmas, and when I came down for breakfast in the morning at *every* table was someone (or a whole family) reading news from an iPad, except for the one that had a Kindle. I had to make do with my Android phone and felt a bit out of place.

I've written here before about my Sony Reader, but it hasn't made the grade and is now gathering dust. Its page turning is just too slow and it's too much of a fag to download content to it, but the final straw was the way it handles different ebook formats, unpredictably and far from gracefully. The problem is simply that I'm not in the market for commercial ebooks and never buy novels from Amazon or publishers' websites. What novels I read, I still read on paper (possibly decades or centuries old) and the rest of the time I read non-fiction that's rarely if ever available as an ebook. Commercial books properly formatted in ePub look fine on the Sony - cover, contents and navigation - but I rarely read them. Perhaps a third of my reading is nowadays done on screen, but almost always laptop or phone and off a web page: the Guardian website, Open Democracy, Arts & Letters, various blogs, and white papers from numerous tech sites.

I have however collected an extensive library of classic texts and reference works that I use a lot, stored on my laptop to be always available off-line, and it's there the Sony really fell down. I get most of these books from the Internet Archive where they're typically available in several formats: PDF and PDF facsimile (scanned page images), ePub, Kindle, Daisy, plain text and DjVu (an online reading format). However the Internet Archive is a non-profit organisation that relies on voluntary, mostly student, labour to scan works in, so inevitably most documents are raw OCRed output that hasn't been cleaned up manually. Really old books set in lovely letterpress typefaces like Garamond and Bodoni are the saddest, because OCR sees certain characters as numerals so the texts are peppered with errors like "ne7er" and "a8solute". Many such books also contain a lot of page furniture - repeating book and chapter titles in headers or footers for example - that scanning leaves embedded throughout the text, extremely irritating if you consult them often. 

One solution is to download a facsimile version, but that's glacially slow to read on the Sony Reader, taking ten seconds to turn each page and looking crap in black-and-white: on laptop or iPad in colour it's a fine way to read (it even preserves pencilled margin notes) but it isn't searchable which defeats half the purpose, so I always have to download a text-based PDF or plain text version too. Unfortunately the Sony Reader displays PDFs unpredictably: it only has three text sizes and if you're unlucky none of them will look right, either being too huge or too tiny.

I started cleaning up certain books myself, downloading a plain text version and using Microsoft Word (of all things), which actually has powerful regular expression and replacement expression facilities, though well hidden and with lamentably poor Help. I soon learned how to quickly bulk-remove all page numbers and titles, auto-locate and reformat subheads, and even cull improbable digits-in-the-middle words like "ne7er". However outputting the cleaned-up result as PDFs proved a lottery on the Sony as regards text sizing, contents pages and preserving embedded bookmarks (you need one per chapter for navigation purposes). For many books I found that an RTF version actually looks and works better than a PDF.
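
For what it's worth, the same sort of clean-up sketched with regular expressions in Go rather than Word's wildcards (the running-head pattern and sample text are invented placeholders): strip lone page numbers and repeated chapter titles, and flag words with a stray digit stuck inside them.

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // Patterns for the clean-up: lines that are nothing but a page number,
    // a repeated running head (the title is a made-up placeholder), and
    // words with a digit in the middle (OCR's "ne7er", "a8solute").
    var (
        pageNumber  = regexp.MustCompile(`(?m)^\s*\d{1,4}\s*$`)
        runningHead = regexp.MustCompile(`(?m)^CHAPTER THE FIRST\s*$`)
        digitInWord = regexp.MustCompile(`[A-Za-z]+[0-9][A-Za-z]+`)
    )

    // clean removes the page furniture but leaves suspect words in place
    // for a human to check.
    func clean(text string) string {
        text = pageNumber.ReplaceAllString(text, "")
        text = runningHead.ReplaceAllString(text, "")
        return text
    }

    func main() {
        raw := "CHAPTER THE FIRST\n42\nIt is ne7er an a8solute truth.\n"
        cleaned := clean(raw)
        fmt.Println(strings.TrimSpace(cleaned))
        fmt.Println("suspect words:", digitInWord.FindAllString(cleaned, -1))
    }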

Someone tipped me off to try Calibre (http://calibre-ebook.com/), a free, open-source ebook library manager that converts between different ebook formats, and in particular can output in Sony's own LRF file format, which proved more reliable than PDF. It was already too late for me though. Calibre works well but is quite techie to use and, like Sony's proprietary Reader software, it maintains its own book database, so it's yet another filing system to deal with. Eventually I just couldn't be bothered. I've checked Google Books' offer of a million free-to-download public domain titles, only to discover that they are of course the same hastily-scanned copies I already have from archive.org (which stands to reason: once some public-spirited volunteer has scanned an obscurity like Santayana's "Egotism in German Philosophy", no-one else is ever going to do it).

My own gut feeling is that, Kindle notwithstanding, none of the current ebook formats will be the eventual winner, and that plain old HTML, in its HTML5 incarnation, will become the way we all read stuff on our tablets in a couple of years' time. Perhaps PDF too, if Adobe puts its house in order in time. And we'll need to recruit a whole second generation of volunteer labour to clean up all those documents scanned by the first generation once the Google book project gets into its full stride: the bookworms' equivalent of toiling in the cane fields...

PENTACLE OF CONFLICT

Dick Pountain/16 December 2010 13:39/Idealog 197b

Last month I confessed that I've abandoned the Palm platform, after 14 years of devotion, for an Android phone and Google online storage of my personal information. One important side-effect of this move, almost invisible to me, is a huge leap in my consumption of IP bandwidth. I use the phone at home, on Wi-Fi, as a constant reference source while I'm away from my PC reading books, and my time connected to BT Broadband must have at least doubled, though that doesn't cost me a penny more under a flat-rate tariff. And I'm far from alone in this altered consumption pattern: a report by network specialist Arieso recently analysed the data consumption of latest-generation smartphones and found their users staying connected for longer, and downloading twice as much data as users of earlier models.

Android users were hungriest, with iPhone 4 users and others close behind (and though the study didn't even include iPad users, you just know those stay connected pretty well all the time). It's partly the nature of the content - faster CPUs make movie and TV viewing practical - and partly that smarter devices soon stop you even noticing whether you're looking at local or networked content. The long-term implications for the net, both wired and 3G, are starting to become apparent, and they're rather alarming. It's not that we'll actually run out of bandwidth, so much as that powerful political and industrial forces are being stirred up to grouch about its unfair distribution.

The same week as that Arieso report, the Web '10 conference in Paris heard European telecom companies demanding a levy on vendors of bandwidth-guzzling hardware and services like Google, Yahoo!, Facebook and Apple. These firms currently make mega-profits without contributing anything to the massive infrastructure upgrades needed to support the demand they create. Content providers at the conference responded "sure, as soon as you telcos start sharing your subscription revenues with us". It's shaping up to be an historic conflict of interest between giant industries, on a par with cattle versus sheep farmers or the pro and anti-Corn Law lobbies.

But of course there are more parties involved than just telcos versus web vendors. Us users, for a start. Then there are the world's governments, and the content-providing media industries. In today's earnest debates about Whither The Webbed-Up Society, no two journalists seem to agree how many parties need to be considered, so I'll put in my own bid, which is five. My five players are Users, Web Vendors, Governments, ISPs and Telcos, each of whose interests conflict with every other's, which connects them in a "pentacle of conflict" so complex it defies easy prediction. The distinction is basically this: users own or create content and consume bandwidth; web vendors own storage (think Amazon servers and warehouses, Google datacenters) and consume bandwidth; telcos own the wired and wireless fabrics and the bandwidth; governments own neither content nor pipes but set the laws and taxes the rest must operate under; and poor old ISPs are the middle-men, brokering deals between the other four. Note that I lump content providers, even huge ones like Murdoch's News International, in among the users because they own no infrastructure and merely consume bandwidth. And they're already girded for war, for example in the various trademark lawsuits against Google's AdWords.
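Join all five players pairwise and you get ten distinct lines of conflict, which is why the pentacle defies easy prediction; a throwaway Ruby line (my own illustration, nothing more) makes the count concrete:

    # Five players, and every pairwise conflict between them: 5 choose 2 = 10 edges.
    players = ["Users", "Web Vendors", "Governments", "ISPs", "Telcos"]
    players.combination(2).each { |a, b| puts "#{a} vs #{b}" }
    puts players.combination(2).count   # prints 10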

What will actually happen, as always in politics, depends on how these players team up against each other, and that's where it starts to look ominous. At exactly the same time as these arguments are surfacing, the Wikileaks affair has horrified all the world's governments and almost certainly tipped them over into seriously considering regulating the internet. Now it's one of the great cliches of net journalism that the 'net can't be regulated - it's self-organising, it re-routes around obstacles etcetera, etcetera - but the fact is that governments can do more or less anything, up to and including dropping a hydrogen bomb on you (except where the Rule of Law has failed, where they can do nothing). For example they can impose taxes that completely alter the viability of business models, or stringent licensing conditions, especially on vulnerable middle-men like ISPs.

Before Wikileaks the US government saw the free Web as one more choice fruit in its basket of "goodies of democracy", to be flaunted in the face of authoritarian regimes like China. After Wikileaks, my bet is that there are plenty of folk in the US government who'd like to find out more about how China keeps the lid on. The EU is more concerned about monopolistic business practices and has a track record of wielding swingeing fines and taxes to adjust business models to its own moral perspective. All these factors point towards rapidly increasing pressure for effective regulation of the net over the next few years, and an end to the favourable conditions we presently enjoy where you can get most content for free if you know where to look, and can get free or non-volume-related net access too. The coming trade war could very well see telcos side with governments (they were best buddies for almost a century) against users and web vendors, extracting more money from both through some sort of two-tier Web that offers lots of bandwidth to good payers but a mere trickle to free riders. And ISPs are likely to get it in the neck from both sides, God help 'em. 



ELECTRIC SHEEP

Dick Pountain/15 December 2010 11:45/Idealog 197

Regular readers will know that it's normal policy for successive columns to skip from subject to subject like a meth-head cricket with ADHD (that is, without any visible continuity) but last month's column, about abandoning the Palm platform, represents such a major life change that I feel obliged to follow it up immediately. It's 14 years to the day since I first mentioned Palm (actually then the US Robotics Pilot) in this column, and my address book, notes and appointments have persisted inside Palm products ever since. My leaving Palm for an Android phone plus Google cloud storage certainly struck a chord with other be-Palmed readers.

Richard, who is following the same path, emailed to tell me he needs to transfer all his old Palm archived appointments to Google, but Palm Desktop won't export them in any useful format. Within 24 hours another reader, Mike, had pointed us to a solution at
http://hepunx.rl.ac.uk/~adye/software/palm/palm2ical/, an app that exports them in iCal format. Mike also told me several other interesting things, including:

1) A jail-broken iPod Touch can run Palm apps via a third-party emulator called StyleTap. I'm glad I didn't know this, as it might have kept me stuck in my groove.
2) While the Orange San Francisco phone I bought has a lovely AMOLED screen, the ones on sale now are rumoured to have reverted to TFT, which makes them somewhat less of a bargain.
3) Rooting the San Francisco to a non-Orange ROM is not something I want to get involved in just yet.

So how am I coping with Android? Actually I like it much better than expected. I use my phone mostly at home via Wi-Fi, which makes web browsing and downloading apps from the Market fast, easy and cheap. I'm pretty impressed by the stability and multi-tasking of Android 2.1. Most dying apps do so gracefully via a "bye-bye" dialogue and you can always get into Task Manager to kill them, unlike the Palm Treo which was forcing me to pull its battery and reboot about once a week towards the end. I haven't really explored third-party task managers that automatically shut down unused background apps yet, and still do a manual Kill All from time to time.

The quality of Android apps is extremely variable, since they're not vetted the way Apple's App Store submissions are, but since they take only seconds to download and install I just try 'em and chuck 'em till I find one I like. And there usually is one, eventually. My main requirements, beyond phone calls and web browsing, are for contacts, calendar, note-taking, document reading, photo viewing and music playing. Google's own apps for mail, contacts and calendar sync perfectly with their online counterparts without any fuss (I just love clicking the location for an appointment and being whisked straight into a Google map).

It took me quite a few rejections to find a plain text editor I can live with, Txtpad Lite, and the same for PDF viewers. I've actually ended up with three of the latter because no single one does everything I want. Adobe's own is feeble, lacking both a search function and bookmarks: I keep it only for reference. I paid for Documents To Go, part of which is PDF To Go, which has both, but I also use a free one called ezPDF Reader whose UI is easier to use one-thumbed, and which remembers your place in a document when you switch away (unlike PDF To Go, which maddeningly returns to the cover page). That's essential for reading a manual while programming, though ezPDF is slower on pictures.

Did I just say programming? That's right, there's an excellent Ruby interpreter for Android called Ruboto which ran all my text-based Ruby apps unchanged, to my ecstatic amazement. It has a rudimentary integrated editor but I can write longer scripts in Txtpad and load them at the IRB prompt. Now I just need an Android-based graphical Ruby API, equivalent to Shoes under Windows. 
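For anyone curious, the workflow really is that bare - something like the following at Ruboto's IRB prompt, where the file and method names are just made-up examples:

    # Pull in a script written elsewhere (in Txtpad, say), then call whatever it defines.
    # Both 'wordcount.rb' and count_words are invented for the sake of the example.
    load 'wordcount.rb'
    count_words('notes.txt')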

The built-in music player is plenty good enough for me, not merely sucking up all the MP3s from my laptop but automatically organising them far better, complete with track names I didn't even realise were in there (you can tell I'm not of the iPod generation). Ditto for photos and videos where the built-in apps suffice: I mostly use Flickr and a real camera anyway.

That just leaves reference data. I couldn't transfer dictionaries from my Palm and so had to pay for some new ones. The Oxford English cost me £12 and is excellent, with a better UI than the Palm version. I plumped for SlovoEd's language dictionaries since TrueTerm, which I've used for years, doesn't appear to have made the move to Android. I paid $10 for SlovoEd's full Italian, and live with its free versions for French and German. Thanks to Orange's five-page home screen layout I can have these as icons all on one screen, for rapid one-thumb access. I'm training myself to like the optional CoolTek T+ keyboard layout with two characters per key, which you select via a sidewise thumb slide - it's fast once you get the hang of it. All in all my Android experience so far has been deeply pleasurable, and I do indeed dream of Electric Sheep, being chased by back-flipping Toucans...

BOWLED A GOOGLY

Dick Pountain/17 November 2010 14:37/Idealog 196

November was a hectic month for me, with lots of social engagements, so it was particularly galling when Palm Desktop 6.2 abruptly lost all my appointments. It was in fact one of those "last straw" moments. I'd been having trouble with Palm Desktop for some time, ever since I upgraded to Windows Vista. It soon started losing large chunks of my contacts database - at lengthy but random intervals, perhaps twice a year - but I stupidly persisted in working around it because the data still existed on my Treo phone, so I'd just run a Hotsync set to "Handheld overwrites Desktop" and restore them. I searched the forums and found that others were having problems too, but version 6.2, the Vista version, is the last from Palm, so bug fixes and future support are non-existent.

This bug of course negates the entire point of Hotsync, since I always enter new addresses via my laptop, not my phone, and had to start deliberately syncing after each new entry to make sure the Treo had all the data. And having my phone hold my master database is a terrible idea anyway. But when those appointments disappeared they weren't on the Treo either, and that brought me sharply to my senses: as the young folks might say, Palm is soooo over for me...

The problem is, what to replace it with. I'd already taken a first step away from Palm by adopting Googlemail as my email client (I never used the email facility in Palm Desktop). Actually I use Gmail as the *interface* to my mail rather than as a client: it aggregates mails sent to two older email addresses, plus mail to my own domain which comes via a BT mailbox, so I always have a second copy. Around a year ago I exported all 939 of my Palm contacts to my Gmail Contacts, a painless enough process once you choose vCard format, but disastrous if you try comma or tab delimited files (I tried both, of course). But there's no automatic sync between Gmail and Palm Desktop.

Losing those appointments pushed me further, to investigate Google Calendar, and I discovered that I like it. What tipped it for me is that it automatically parses locations contained in appointments and gives me links straight into Google Maps. So I had the basis of a new life-organising paradigm here, if I junked my Treo for an Android phone that can sync with Gmail, Contacts and Calendar. Another alternative would be to junk the Treo for a Windows Phone 7 handset, or any other smartphone that can sync with MS Outlook, which most of them can. The problem there is that I'm allergic to Outlook, which is the reason I went Palm in the first place. I find Outlook incomprehensible and confused to a maddening degree, and what's more incomprehensible still to me is that so many people can tolerate it. So, Android it is then.

This brings me to a large and highly-topical philosophical question, which is: can you trust all your personal data to "the cloud"? After searching my conscience in depth, my answer is a muted "yes". It all depends upon your estimate of comparative risks, and mine is that the chances of my laptop or phone being lost, stolen or destroyed in a house fire somewhat exceed those of Google going bust, or turning evil and charging £100 a week for storage. The truth is that these Palm Desktop hiccups have rather soured phone/PC synchronisation for me. I will of course be downloading my Google Contacts periodically and backing them up locally just in case, for which purpose I'll probably write a script and schedule it. As for the privacy issues that obsess some people, it's only my banking and financial details I worry about. On the Palm I kept these behind a separate password, but Gmail doesn't have any such facility, so I'll have to investigate other ways to store them.
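When I do write that script it will probably amount to little more than the following sketch, which assumes I've already arranged for the contacts to arrive as a vCard file (the actual fetch from Google, by whatever export route, is the bit I've left out):

    # Sketch of a periodic local backup: copy the latest contacts export into a
    # dated file so that older snapshots are never overwritten. The source path
    # is an assumption - wherever the exported .vcf lands on my machine.
    require 'fileutils'

    src      = File.join(Dir.home, 'Downloads', 'contacts.vcf')
    dest_dir = File.join(Dir.home, 'backups', 'contacts')
    FileUtils.mkdir_p(dest_dir)

    stamp = Time.now.strftime('%Y-%m-%d')
    FileUtils.cp(src, File.join(dest_dir, "contacts-#{stamp}.vcf"))
    puts "Backed up contacts to #{dest_dir}"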

I've taken the first step by purchasing an Orange San Francisco smartphone, as recommended by Paul in this month's Mobile & Wireless column. It's a cute little device and I don't even mind the Orange crap it's smothered in so much as some reviewers appeared to. But then I remain an extraordinarily low-volume mobile user: in the UK I barely make mobile voice calls at all, and in Italy where I use it more I use a Telecom Italia PAYG account. What's more I don't suffer from OSAS (Obsessive Smartphone Aesthetics Syndrome) which is epidemic at the moment, making otherwise sane people grizzle that they "hate the chrome strip down the side", or that the "pinch-and-twiddle gesture has the wrong feel". I'm only barely reconciled to using mobiles in any case, and they fall way below guitars, pottery or even cushions in my aesthetic concern list.

I will however soon get around to unlocking the San Fran, bunging in a T-Mobile SIM and then rooting, reaming and rogering it until it does things the way I want, which is basically to sync up with Gmail, Google Contacts and Google Calendar. I suspect that endeavour will provide me with hours of fun/heartbreak between now and Christmas.

THE FUTURE IS ORGANIC

Dick Pountain/11 October 2010 15:08/Idealog 195

If you use a modern touch-screen smartphone or watch a newish flat-screen telly then you've probably marvelled at the sharp and bright display quality, which is thanks to Active Matrix Organic Light Emitting Diode (AMOLED) technology. You might not, however, have thought further about the implications of this technology for the future of IT, the clue to which lies in that "O" for Organic.

AMOLED displays contain a layer of organic LEDs (one per pixel) fabricated from light-emitting organic polymers like poly(p-phenylene vinylene) or doped poly(n-vinylcarbazole), printed onto a plastic sheet which is then sandwiched with another thin film containing the transistor switches that turn each pixel on and off. At the moment the only materials suitable for making this substrate layer are still inorganic, silicon-based ones like polycrystalline silicon or amorphous silicon - so AMOLED is still a hybrid technology in this sense. It's likely that soon we'll see displays in which amorphous silicon transistors are created using cold vapour deposition methods so that the whole display can become flexible, but the ideal would be if the transistors themselves could be fabricated in some organic, plastic technology. That would revolutionise the electronics business in ways we can barely imagine yet.

I've written in this magazine before about how the computing revolution was basically driven forward by the benign scaling properties of the Complementary Metal-Oxide-Semiconductor (CMOS) fabrication process, which is what gave rise to Moore's Law. Not every chip in your phone or PC is made in CMOS, but almost all of them are made from *some* combination of silicon, silicon dioxide and metal layers and for that reason they're all very hard and brittle. Indeed they're so fragile that they have to be encased in plastic, with metal pins sticking out to make contact with their circuits. It's these packages that determine the shape and size of what we think of as an electronic device. A motherboard consists of metal tracks on a plastic substrate, connecting together the pins of chip packages like so many railway lines connecting cities. In a mobile phone this board will be thin and bendy to some extent, but you can't bend the chip packages.

If we had a wholly organic system for making electronic circuits, then intelligence could be distributed throughout what at present is merely the protective casing of a device. Just about anything that can be fabricated from plastic could be made smart, and that means just about anything. So what are the prospects for a wholly organic electronics? Well, this year's Nobel Prize for Physics reveals one very promising avenue: Andre Geim and Konstantin Novoselov, two Russian expatriate scientists working at Manchester University, shared the prize for their work on graphenes, which might be the key to whole new ways of making electronic circuits.

You may remember from school chemistry that graphite, the form of carbon used in pencil leads and lubricating greases, is slippery because it's made up of piles of one-atom-thick sheets whose carbon atoms are connected hexagonally like molecular chicken wire. These sheets are phenomenally strong in themselves, but very weakly connected to their neighbours, so they can slip over each other like a pack of cards. A graphene is a single one of these sheets, and Geim and Novoselov pioneered methods for extracting and manipulating such sheets. (Every time you write with a lead pencil, you leave a smear of graphenes across the paper.)

The electronic properties of graphenes are as interesting as their mechanical ones: in their plane they conduct electricity and heat better than silver, and they display semiconduction of a potentially useful kind. Their conduction properties can be modified by electrical and magnetic fields perpendicular to their plane, and Field Effect Transistors (FETs), albeit rather inefficient ones, have already been made from them. You can oxidise graphenes and dope them with other elements to further modify their properties. They're closely related to fullerenes and carbon nanotubes, whose mechanical properties have been under intense scrutiny for years, and it's not too far-fetched to imagine layers of doped graphene deposited on plastic with individual components connected by nanotube conductors.

It's recently been demonstrated that you can inject spin-polarised electrons into graphene lattices, opening up the possibility of "spintronic" memory devices based on reading electron spins rather than charges. Graphenes also have optical properties that make them potentially useful in displays and in photonic circuits. And because graphenes conduct very poorly indeed perpendicular to their plane, when appropriately layered they form the thinnest capacitor imaginable, so it's equally possible to imagine a hypercapacitor formed within the actual plastic case of a gadget replacing a battery: it might only power the device for a few minutes but would take only seconds to recharge from some wireless inductive source. In fact I might just rewrite that famous scene from the movie "The Graduate":

Mr. McGuire: "I want to say one word to you. Just one word".
Benjamin: "Yes, sir".
Mr. McGuire: "Are you listening?"
Benjamin: "Yes, I am".
Mr. McGuire: "Graphenes".

In the movie the word was of course "Plastics" but that was back in 1967. The two words may one day be seen as marking the frontier between technological epochs.

STRONG ARM TACTICS

Dick Pountain/14 September 2010 10:14/Idealog 194

In the interests of transparency I begin this column with full disclosure: in 1992 I accepted a bottle of 21-year-old Springbank single-malt whisky from Robin Saxby (now Sir Robin), the CEO of ARM. It was an innocent enough gift. I'd just written a feature for Byte about the ARM610 processor, which Apple was considering building into the Newton, and I ended by saying that if this tiny UK chip maker could clinch a deal with the US giant I'd celebrate with a dram of Scotland's finest, which to my genuine surprise turned up with a ribbon around it.

ARM is a very British success story, in that 99% of the population have never heard of it and it doesn't actually manufacture anything, yet 15 *billion* chips based on its designs are fitted into mobile telephones, cash machines, medical instruments, toys and so on, throughout the world. And the firm may present more of a challenge to Intel's market dominance than AMD does.

The original Acorn RISC Machine (ARM) launched in 1986 to power Acorn's Archimedes PC, using a minimalist architecture designed by Steve Furber, who is now ICL Professor of Computing at Manchester. Pretty soon the ARM team came to two important conclusions: no British firm could hope to compete with Intel in manufacturing chips, but their tiny, simplified processor core could be shrunk further, and more easily, than even the smallest of the US RISC designs. Accordingly they decided to sell only intellectual property - processor designs - to other chip makers, in return for licence fees and royalties.

So small was the ARM CPU core that even by the 1990s it was possible to put more than one onto a single silicon die, and ARM diversified the design into various dedicated cores for memory management, disk control, signal processing, communications and so on, so that by the late '90s it was possible to build the electronics for a whole device onto a single die that consumed far less power than competing chips. They sold the rights to these core designs to huge manufacturers like Samsung, Qualcomm and even to Intel, who added extra logic of their own and made the actual chips. Chips containing ARM cores, and thus running ARM binary code, now dominate the mobile phone marketplace. The Snapdragon processor fitted in many Android touchscreen smartphones? Qualcomm, with an ARM core. The iPhone? Powered by a Samsung chip with an ARM core. And many, many more ARM cores live inside embedded controllers that manage everything from cars to cash machines.

However dominance in the mobile and embedded markets alone is never likely to threaten Intel, which has the huge desktop, laptop and server markets almost to itself, yielding only a small slice to AMD since Sun quit the server scene. ARM's next moves could be about to change that, because the superb shrinkability of that core puts it in harmony with two of today's leading concerns in the server market: energy saving and virtualisation.

As the volume of data stored in the world's search engine indexes and databases continues to mushroom, datacentre power consumption becomes a critical concern: scare stories continually surface about the internet consuming most of the world's electricity supply by the year 20xx. Now the ARM may appear to suffer a tremendous disadvantage in the server market because it remains a 32-bit architecture, and is thus limited to addressing no more than 4Gbytes of memory directly. What's more, revamping it completely into a full 64-bit design might well spoil its excellent scalability and power-consumption advantages. Processor designers have solved such dilemmas for years by applying virtual memory: a memory management unit translates each memory access into a larger physical address space and fools the CPU into seeing only one 4Gbyte chunk at a time (Intel has done it since the Pentium Pro). That has disadvantages both in operating system complexity and in power consumption.
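To put rough numbers on that limit (my own back-of-envelope sums, in Ruby because it makes a handy calculator): a 32-bit address reaches 4Gbytes, while a 40-bit physical address - the sort of extension talked about below - reaches a full terabyte.

    # Back-of-envelope address-space arithmetic, for illustration only.
    GB = 2**30
    TB = 2**40
    puts 2**32 / GB   # => 4  -- gigabytes reachable with 32-bit addresses
    puts 2**40 / TB   # => 1  -- terabytes reachable with a 40-bit physical address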

However the server world nowadays is less and less interested in operating systems per se and more and more interested in hypervisors. A typical web server is going to be virtualised, with each processor core running several virtual machines, each machine serving one request with fairly modest memory requirements. It's only applications like video editing that actually demand terabytes of contiguous memory nowadays. Not too much surprise then when in August this year ARM's architecture program manager David Brash revealed that the ARMv7-A (aka "Eagle" and "Cortex") features hardware extensions to address a terabyte or more of memory in two stages, and also virtualisation extensions that create a new privilege level for suitably designed hypervisors running over the ARM core. In September ARM revealed that its next-but-one Cortex product will feature 16 cores on the same die, running at 2.5GHz, and will be aimed at the cloud server market.

Assume that there's also a communication controller core in the pipeline with equivalent capabilities to scale I/O bandwidth, and you'd have a solution to those problems of parallel processing I've been writing about here recently, which might use an order of magnitude less power than current architectures and give Intel a real battle at last. If that comes to pass I'll celebrate with a glass of 1949 La Tache and a 300gm tin of beluga caviar (worth a try, worked last time...)

[BIO: "Dick Pountain started out on Burgundy, but soon hit the harder stuff"]

GOTHIC HI-TECH

Dick Pountain/19 August 2010 10:54/Idealog 193

I'd fully planned for this column to be an uber-techie prophecy about spintronics research, since a team at Ohio Uni has just created an organic-semiconductor-based memory cell that manipulates electron spins to store data. However while researching it online I fatally stumbled across a speech delivered by cyberpunk sci-fi author Bruce Sterling to a futurology conference called Reboot 11 in Copenhagen last year (http://video.reboot.dk/video/486788/bruce-sterling-reboot-11). Sterling's bleakly dystopian view of our prospects for the next 10 years is hugely entertaining, delivered in his deadpan, geek/Eeyore style, but it's so eminently plausible that it completely unmanned me for the task of burbling optimistically about the joy-giving potential of bendy plastic computers...

Perhaps unusually for someone in this tech-nerdy profession I'm not really a sci-fi fan, so I've never actually read the novels of Bruce Sterling, nor those of his co-punk William Gibson. Actually I did enjoy sci-fi in my teens (particularly the classic anthologies edited by Groff Conklin), but during the hippy sixties I read all the sci-fi classics until I sickened myself of the whole genre. After that only J.G. Ballard and Bill Burroughs' "Nova Express" were sufficiently pointed to penetrate my horny carapace of ennui. But Sterling's Reboot speech got to me through his intense realism, coupled to a laconic refusal to over-dramatise.

Sterling starts with some deceptively mild observations about Fiat's new Cinquecento (a car I think is really cute). He asked a Fiat designer whether, having revived this 1957 model to huge commercial success, they planned to follow chronologically with succeeding models, but the designer said of course not - they would just watch how people customise the new 500 and develop new models based on that. They don't intend to recapitulate history, merely to plunder it for exploitable images. 

Sterling's vision of the near future is not one of apocalypse but of a steady and deeply disruptive decline as energy and food prices rocket, economic inequality grows ever more grotesque, weather becomes more hostile and numerous low-intensity wars are precipitated by drought, famine and mass migration. But rather than a return to the Middle Ages (or even the Stone Age) he sees this as all taking place within the context of a continually-expanding electronic technology sector and online consumer culture. As he pithily describes our prospects "...we're moving into a situation with Generation-Xers in power, in a Depression, where people are afraid of the sky", "the future is an old paradigm", "it's neither progress nor conservatism because there's nothing left to conserve and no direction in which to progress".

As a professional futurologist Sterling is of course honour-bound to invent some sexy neologisms to spice up all this gloom, and he delivers magnificently through concepts like "Dark Euphoria", "Gothic Hi-Tech" and "Favela Chic". His first illustration of Gothic Hi-Tech is harsh: "You're Steve Jobs, you've built the iPhone, which is a brilliant technical innovation, but you've also had to sneak off to Tennessee to get a liver transplant". He puts the boot into Nicolas Sarkozy with the same languid malice, as a politician who is "brilliant, poly-ethnic, but with no ideology or alternative...", someone who has "sucked all the air out of the political room..." so that "if you debate him you make him stronger, if you ignore him he simply steals your clothes". Favela Chic, named after the notorious slums of Brazil, is the condition of owning nothing but still keeping up a cool public facade. For Sterling, Facebook is a kind of virtual favela: millions of tiny rooms that advertise their occupants' egos through encrustation with gaudy detritus (like the shells of hermit crabs).

It's in the later part of his address that Sterling gets really serious, as he attacks those ascetics in the Green movement who counsel a return to pre-industrial, agrarian societies. Though he never uses the term, he's enough of a Keynesian to understand that we're obliged to keep making stuff and buying stuff in order to provide jobs and livings for the huge population of this planet. He would just like us to insist on far better-designed stuff, and to learn to live with far less of it. His final remarks proffer a Zen-like prospectus for examining all the stuff in your life and dividing it into four categories: beautiful objects; emotionally-important objects; useful objects and tools; and all the rest. Having thus evaluated them all, give most of them away (after digitising their images if you insist, for future reference).

Bruce Sterling and I come at things from different generations, different politics and vastly different cultural references (apart from a shared fascination with computer technology), but what grabbed me in his speech was a clear awareness that in the recent banking crisis we've missed, fumbled and dropped a once-in-a-century opportunity to reform our out-of-control economic system in a more sensible direction, and that a consequence will be the sort of squalid decline he so cruelly sketches here. We lost that chance not only through the devious and tenacious rapacity of the banking classes and the credit-addled myopia of the voting masses, but through the enfeebled quality of our politicians (not excluding the once-promising Obama). Sterling witheringly describes these as "people who position themselves in the narrative rather than building any permanent infrastructure", that is, "they're cheerleaders, they're not leaders".

PAPER CHASE

Dick Pountain/14 July 2010 13:52/Idealog 192

"I'll go to Morra to post that letter"/"Why not post it today?" We never tire of that dumb joke. Morra is a tiny village in the valley in the Umbro-Cortonese mountains where I live for half the year, and its equally tiny post office has stone walls, a terracotta roof and just the one postmaster who does everything and lives next to the shop. Whatever reason you have for visiting Morra post-office, you will not get out in less than half-an-hour. Rural Italians use the post-office as a bank far more than for communication, so the queue is mostly people depositing or withdrawing money, or paying bills. And every transaction takes at least ten minutes, thanks to the wonders of Information Technology.

Morra post-office is not short of digital equipment - in fact it's crammed with the stuff. In one corner sits a large cabinet full of blinkenlights signifying multiple ADSL lines, while the post-master has a large flat-screen monitor at his right shoulder, but he actually faces The Beast. This apparatus is the key to the whole enterprise (perhaps the key to the whole Italian economy). It's as high as his head, as wide as a photocopier, and has a wide mouth beneath a drum-shaped top, all in regulation beige plastic. It's probably made by Olivetti, Siemens-Nixdorf or some similar corporate IT behemoth from a barely-remembered generation.

What The Beast does is suck in documents, scan them, OCR them and then print them out again. The post-master spends much of his time neurotically stroking documents on The Beast's lower lip to make sure they're not too creased for it to eat. Whenever you pay a bill - for dustbin collection, car tax or whatever - you get sent in the post a pair of coupons which you take to the post office, sign in the appropriate places, and he feeds them to The Beast. Sometimes these forms are more complicated, so The Beast regurgitates them several times for you to sign in various places and swallows them again. This whole magnificent edifice forms an interface between a 19th-century paper-based bureaucracy and a modern computer-based communications network, whose sole purpose is to transfer pieces of paper, rather than digital information, from one place to another. Unlike the internet it keeps people employed, which is a good thing - a Morra post-office website wouldn't have the same atmosphere - but it's at least three times slower than a wholly paper-based system would have been, even one that used quill pens and inkwells.

Now I've written plenty about the paperless office debacle in this column over the years, and have no wish to revisit that topic - suffice it to state Pountain's Law, which is that whatever advances are made in communication bandwidth and computational power, three-quarters of the human population will combine together in a conspiracy to fritter them away entirely, the net result being a very slow *diminution* in efficiency. I belong to that other quarter (perhaps it's only one-hundredth, though I'd like to believe not) who are very happy indeed to have the more tedious aspects of our lives eased by digital tricks.

The reason I can live in rural Italy five months of the year is that I do all my editing work via email, pay my VAT and income tax via www.hmrc.gov.uk, do all my banking online, and even sign business documents in PDF form using Adobe Acrobat (which almost everyone except Barclays bank now accepts). However it's all still very far from being seamless. There are passwords and PINs to remember, and particularly horrendous ones where government is concerned. When registering for VAT online they insisted on sending me a one-off, scratch-card code by post to my doormat in London, which a neighbour had to collect and read to me over the phone...

I wrote here in approving terms about the iPad a couple of months ago, and although I still haven't had my hands on one, I remain convinced that its user interface points in the direction we must go. Here's how I imagine things might work in future: I receive an email telling me my motorbike tax is due, with a link to a page of a government website. I go there and drag the icon for my bank onto the appropriate place on the page; it pops up a box asking me to authorise the payment; I click the button and the payment is made. Both the necessary passwords are stored locally on my machine, and neither institution - transport department or bank - gets to see the other's. They are temporarily connected via my machine, for the duration of this one transaction, which requires just two clicks from me.
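Purely to show the shape of the flow I'm imagining - every URL, token and field name below is invented - the glue on my machine needn't amount to much more than this:

    # Imaginary two-click payment flow, sketched in Ruby. All endpoints, tokens
    # and field names are made up; the point is only that both credentials stay
    # on my machine and each institution sees a single request from it.
    require 'net/http'
    require 'json'
    require 'uri'

    local_secrets = {
      'transport' => 'stored-locally-token-1',   # invented
      'bank'      => 'stored-locally-token-2'    # invented
    }

    # Click one: fetch the tax demand from the (imaginary) government endpoint.
    demand = JSON.parse(Net::HTTP.get(
      URI("https://tax.example.gov/demand/bike?token=#{local_secrets['transport']}")))

    # Click two: authorise my (imaginary) bank to pay exactly that demand.
    Net::HTTP.post(URI('https://bank.example.com/pay'),
                   { amount: demand['amount'], payee: demand['payee'],
                     token: local_secrets['bank'] }.to_json,
                   'Content-Type' => 'application/json')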

There's nothing technical to stop such a system from being implemented right now: all the obstacles are to do with existing laws, existing working practices, existing mental attitudes, paranoia and the sheer weight of those bloody-minded three quarters of the population who are determined that it will not be made to work. I'm not unaware of the problem of how to redeploy those employed in the bureaucracy who're no longer needed, but that's a political problem not a technical one, and I think I've made my opinions in that domain abundantly clear without repeating them here (cue sound effect of noisy expectoration).

INFORMOTION THEORY

Dick Pountain/11 June 2010 11:02/Idealog 191

Claude Shannon founded our Information Technology industry when he published his classic paper "A Mathematical Theory of Communication" in 1948. It led directly to the error-correction codes that make possible hard disks, DVDs, network protocols and more. To make his great breakthrough Shannon arrived at two crucial abstractions: he abstracted away from the physical fabric that carries messages, thinking of them as pure streams of abstract bits whose values distinguish one message from another, and he deliberately excluded questions about the *meaning* of messages. These abstractions permitted him to measure information as though it were a substance, so we now talk about the bandwidth of a connection in Mbits/sec. Living with these necessary abstractions for so long tempts us to believe that information actually is a substance, spawning a recent spate of books that claim the universe is made out of information - the latest of these is Vlatko Vedral's "Decoding Reality: The Universe as Quantum Information" (OUP 2010). Thing is, I don't believe them.
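To see what measuring information "as a substance" amounts to in practice, here are a few lines of Ruby - my own toy illustration, not Shannon's notation - that work out the average information per character of a message, in bits, under his measure:

    # Shannon's average information per symbol: H = -sum(p * log2(p)) over the
    # relative frequencies p of each character in the message.
    def bits_per_char(text)
      counts = Hash.new(0)
      text.each_char { |c| counts[c] += 1 }
      n = text.length.to_f
      counts.values.reduce(0.0) do |sum, count|
        prob = count / n
        sum - prob * Math.log2(prob)
      end
    end

    puts bits_per_char("the quick brown fox jumps over the lazy dog")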

To me, claiming the universe is made of information puts things exactly the wrong way round: information is not a substance, it's actually the location in space and time of the one substance that exists, matter. That might call for a little more explanation. Shannon wrote about messages conveyed by electrons down copper wires, but electrons are part of matter and, since Einstein, so are photons or any other kind of energy. It's their patterns of presence and absence in space, in time, or in both that support messages. But you may object that "matter" is nowadays a far-from-obvious concept. Once we thought it was atoms, then protons, neutrons and electrons, then quarks, next maybe strings? This makes information equally ambiguous: the amount of it needed to describe a system depends on the scale you're looking at. You're currently reading patches of black ink that your eyes and brain automatically isolate from the white paper background of this page and interpret as words; under a microscope you'd see each letter is a collection of printers' dots; a forensic chemist could identify the different compounds in the ink, and a physicist the atoms composing those compounds. So how much information is contained in a word? It depends on the scale at which you're sampling it, and why.

Shannon's Bell Labs colleague Nyquist left us a deep understanding of sampling: you can extract just part of the information needed to describe a system and obtain a less precise but smaller copy, just as a CD recording does with sound waves and your eye or digital camera does to the torrent of light pouring in from the outside world. In fact all living things exist by sampling information, to discover what's going on both inside and outside their own skins or cell walls. You could almost use this as a definition of life - a chunk of matter that starts to sample information from the rest of the universe. This is a rather different claim from saying that the universe is made of information.
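Sampling in this sense is almost embarrassingly simple to demonstrate - a sketch with a made-up signal, showing the kind of smaller, less precise copy I mean:

    # Sample a signal by keeping every Nth reading: the copy is smaller and less
    # precise, but still recognisably the same wave. The signal here is invented.
    full = (0...1000).map { |i| Math.sin(2 * Math::PI * i / 100.0) }
    sampled = full.each_slice(10).map(&:first)

    puts full.length      # => 1000 readings
    puts sampled.length   # => 100 readings - a tenth of the original information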

Living things sample their surroundings to avoid danger, find food and reproduce, and each has its own repertoire of chemically-driven actions to achieve these ends. These actions, triggered when certain kinds of message are detected, can be called "emotions". What we normally call emotions - happiness, sadness, anger and so on - are very complex and evolved examples, but for a one-celled creature they might be as simple as swimming towards or away from some particular environmental chemical. Recent advances in neuroscience reveal how these emotional subsystems work in higher animals. Even intentions depend upon emotions, because you can't even twitch a finger unless you "want to", which requires a little squirt of dopamine in your brain. Advanced nervous systems like our own also possess a memory function that stores certain experiences which may be useful in deciding future actions, and it seems likely that what we actually store are processed information samples - sights, sounds, smells - tagged with a marker of the particular emotional state they triggered when first gathered.

So, we should be able to extend Information Theory by reintroducing some notion of meaning, and I propose to call the result "Informotion Theory". A message's meaning for an animal is the emotion/action combination that it triggers, which might be anything from leaping out of a window to recalling a childhood memory, sparking a certain train of thought or changing a mood. Perhaps so much of 20th-century linguistic philosophy feels sterile and circular ("snow is white" is true if and only if snow is white) because it hasn't caught up with such developments. Logic and reason are still needed, but any message received by a real person causes memories to be retrieved in deciphering it, which bring with them emotional states that can't be avoided - the meaning of the message doesn't reside wholly in the symbol stream but partly in the memory of the recipient. It's only because we share so many basic experiences (like learning the same language as children) that we can communicate at all. If you'd like to know more, I've put "Sampling Reality", an abridged first volume of my book on this topic, on my website at http://www.dickpountain.co.uk/home/sampling-reality. It's not too big and it's not too clever, do give it a try...

[biog: "Dick Pountain is still smarting at his rejection slip from Mills and Boon".]

SOCIAL UNEASE

Dick Pountain/07 Sep 2023 10:58/Idealog 350

Ten years ago this column might have listed a handful of online apps that assist my everyday...