Readers probably haven’t noticed, but The Japan Times has a new computer system. It’s a lot like our old one, although it is speedier and it integrates a whole host of functions in one terminal; no longer do we have to leave our desks to accomplish different tasks.
To be honest, I’m probably a bad judge of the merits of the new system. I attended the training classes (they were pretty simple), but apart from learning a few new keys and commands, I haven’t even scratched the surface of the new possibilities it offers. There are several reasons for that: time, need, inclination. Those all sound good to me, but there is a slight flaw in my thinking: If I haven’t explored the possibilities, how do I know the new system won’t help me in my work?
My situation isn’t unique. Lots of people — and not just technophobes — hesitate to buddy up with new technology. Busy people often think they don’t have time to spare for training and experimenting, forgetting that time invested now can yield productivity gains later. Executives are the strangest case, because the logic can’t be beyond them: they spend big bucks training junior execs in the hope of a future payoff, yet somehow consider themselves immune to that same logic. (Must be the thin air waaaay up there on the executive ladder that inhibits clear thought.)
There are libraries of information devoted to the ways companies integrate new technologies into their operations. Not surprisingly, the consensus view seems to be that it is a pretty hit-and-miss procedure, more miss than hit. Says M. Lynne Markus, a professor of management and information science at Claremont Graduate University, “a fair body of research suggests that few organizations get full value from their IT investments, either because people have not learned how to use it well or because managers have not learned how to manage its benefits.”
Examples aren’t hard to find. I know people who use a word processor every day but still haven’t figured out that they don’t have to hit return at the end of each line as they did with a typewriter. The vast majority of cellular-phone users have no idea of the options they have at their fingertips. (No wonder some of the best-selling books in Japan are on how to modify your keitai.) There are a whole bunch of icons on my desktop and I don’t have the slightest idea why they are there or what they are for. And Netizens: How many of you have explored all the buttons on your browser? It isn’t asking much. We’re talking about an hour, tops, to see just what that piece of machinery will do. (For comparison: How long does it take you to find out the top speed of a new car?)
In recent years, economists have argued over “the productivity paradox”: Why is it that billions of dollars’ worth of investment in IT has yielded no discernible increase in productivity? Yes, despite all the new technologies, worker productivity has risen little over the last few decades.
One explanation is that technical factors are to blame. Our statistical tools can’t measure the shifts triggered by IT. In particular, changes in quality escape our crude assessments. For example, how do we account for the fact that today’s 200,000 yen computer is nothing like the 200,000 yen computer of two years ago? Both the modem and the CPU are faster, the hard drive bigger; but both are still 200,000 yen computers.
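The arithmetic behind that argument can be made concrete. Here is a toy illustration, with hypothetical numbers (not drawn from any official price index): if the sticker price stays at 200,000 yen while performance doubles, the price per unit of performance is cut in half — an effective price drop that a raw price statistic never records.

```python
# Toy illustration of quality-adjusted pricing (hypothetical numbers).
# A flat sticker price can hide a steep fall in the effective price
# when the machine itself keeps improving.

def quality_adjusted_price(price_yen, performance_index):
    """Price per unit of performance, relative to a base index of 1.0."""
    return price_yen / performance_index

# Two years ago: 200,000 yen, baseline performance.
old = quality_adjusted_price(200_000, 1.0)

# Today: same 200,000 yen, but twice the speed, bigger drive, faster modem.
new = quality_adjusted_price(200_000, 2.0)

print(old)  # 200000.0
print(new)  # 100000.0 -> a 50% effective price drop the raw figures miss
```

Statistical agencies attack this with "hedonic" adjustments along the same lines, but the broader point stands: crude measures miss the shift.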
Another explanation is that we tend to use new technologies as we did their predecessors. Typists still hit that damn return key. It takes a generation or two for the effects of new technologies to percolate through the system and change behavior. The classic example that students of technology use to make this point is the way that automobiles completely reshaped American society. No one could foresee how newfound mobility would transform a country.
The failure to adapt to new technology is plainly evident in the way that many businesses approach the Internet. A home page provides an “online presence,” but for many, the process ends there. Aside from a few success stories, most companies haven’t figured out that the Net is interactive: You need an e-mail address on that page, and you have to answer that e-mail when it arrives.
I worry about passivity of a different kind. These technologies are fundamentally different from previous ones and their potential impact is far greater and far more harmful than that of earlier technologies.
These are information technologies. They are designed to process, calculate and use information. They create symbols that have meaning within their own world. They have their own referents.
The danger is that we will abdicate thinking and imagining to the machine. We are lazy enough as it is — especially when it comes to using new tools. If those tools are willing to bear the creative burden as well — telling us how to use them and claiming for themselves their place in our lives — will we opt out of the process altogether? Impossible? Factor in this prediction: By 2015 the computational power of microprocessors will exceed the power of the human brain.
Automation is seductive. One of my colleagues has created a macro that corrects many of the basic errors that editors are supposed to catch. In theory, we are now free to concentrate on ideas, headlines — important stuff. Instead, some folks will just turn their brains down another degree. Call them desk potatoes. A lot of it has to do with an individual’s disposition, but the threat is real nonetheless.
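For the curious, such a macro needn’t be elaborate. This is a minimal sketch of the idea — not my colleague’s actual macro, whose rules I don’t know — just a pass that mechanically fixes a few of the errors editors are supposed to catch:

```python
import re

# A minimal copyediting pass: each rule is a (pattern, replacement) pair
# applied in order. Illustrative rules only; a real macro would have many more.
RULES = [
    (re.compile(r"  +"), " "),          # collapse runs of spaces
    (re.compile(r"\bteh\b"), "the"),    # a common typo
    (re.compile(r" ,"), ","),           # no space before a comma
]

def copyedit(text):
    for pattern, replacement in RULES:
        text = pattern.sub(replacement, text)
    return text

print(copyedit("The editor  missed teh double space , again."))
# -> The editor missed the double space, again.
```

The seduction is exactly that it works: once the machine catches the mechanical stuff reliably, it is tempting to stop looking for it yourself.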
Richard Thieme, a commentator on new technologies, doesn’t go that far. In a recent issue of his “Islands in the Clickstream” e-letter, he argued that “it is easy to lose ourselves in the act of building simulations that our brains think are real and forget that intentionality animates the network like a ghost in the machine. … The symbols we think we use as tools disappear, the nested levels built of those symbols collapse and we see in that moment our responsibility for what we are building instead of pretending we’re merely technicians or just along for the ride.”
He’s convinced we will eventually wake up. I’m not so sure.