Unitasking

With an iPhone in one pocket and an iPad in another, my shoulder bag allows me to work pretty much anywhere. I work in the car (Bluetooth headphones when I’m driving; iPad when my wife drives). I grab a free half-hour and knock items off my OmniFocus master task management system, which syncs, completely and reliably, the lists on my iPhone, iPad, and iMac. I read a lot of books and articles. I suppose my life runs toward busier-than-average. After a vacation week without email, with the iPad used only to homebrew detours around nasty interstate backups, I’ve given some thought to the clear benefit of unitasking.

My auto-correct just underlined the word “unitasking” because the computer did not recognize it. I checked with Merriam-Webster; my search for “unitasking” returned “the word you’ve entered is not in the dictionary.” When I checked “multitasking,” the first sense read: “the concurrent performance of several jobs by a computer” and the second read, “the performance of multiple tasks at one time.”

Google Search: “multitask” returned these and many other images.

Of course, computers and humans multitask all the time. Computers run multiple operations at high speeds, some simultaneously, some sequentially. As I write, I run my fingers along the keys and press buttons at surprisingly high speeds; I read what I am writing on the screen; and I think ahead to the next few words, and further out, to the next setup for the next idea; I am aware of grammar, clarity, flow and word choices; and I make corrections and adjustments along the way. As I write, I am also aware of the total word count, keeping my statements brief because I am writing for the internet (so far so good–my current word count is 279).

To ease the writing process, I listen to music. For six dollars, I recently purchased a 6-LP box of Bach Cantatas on the excellent Archiv label, and I am becoming familiar with them by playing the music in the background as I write. I do not find this distracting, but the moment the telephone rings when I’m in mid-thought, I become instantly grouchy.

I wonder why.

When I write, I focus on the writing, but writing is not a continuous process as, for example, hiking a mountain might be. I write for a few seconds, perhaps cast a phrase, then pause, listen to the soprano or the horns for a moment, then write a few sentences in a burst of energy, then pause again to collect thoughts. I am thinking sequentially, not requiring my brain or body to do several things at one time. So far this morning, the scheme seems to be working (441 words so far).

Do I multitask?

You betcha, but not when I write because writing requires so much of my focused attention.

Just before vacation, I attended a meeting with two dozen other people, all seated for discussion around a large rectangular table in a Chicago airport hotel conference room. Most of the participants were CEOs or people with similar responsibilities. From time to time, half of the people in the room were sufficiently engaged in the discussion to lift their eyes from their iPads (few used computers; most used iPads). Most of the time, just a few people were deeply engaged. Part of the problem: the meeting was similar to the meeting held last summer in the same location discussing the same topics. If there’s low engagement, then the active brain fills in with activities promising higher degrees of engagement, and the iPad provides an irresistible alternative to real life. Of course, if more of the people looked up from their iPads and engaged in the conversation not-quite-happening in the room where they were seated, that conversation might well become worthy of more of everyone’s attention. This raises the stakes for those who plan such meetings–if the meeting does not offer sufficient nourishment, minds drift.

Once again, that makes me wonder about the value of multitasking. Each of us spent a good $1,000 to meet in Chicago, to discuss matters of importance in the company of one another. During the few times when the whole group was fully engaged, the engagement was not the result of multitasking. Instead, there was a single topic presented, and a single, well-managed discussion. In short, we engaged in a unitasking activity.

One final thought before we all drift back into our bloated to-do lists. On vacation, I like to read a good story, well-crafted fiction by an author who really knows how to write. This time, I tried Jeffrey Lent’s A Peculiar Grace. Reading in the car, in the hotel room late into the night, whenever a good half hour’s free time presented itself, I found myself thinking about the characters, the story, the setting, the overall feeling of the book, even when I was not reading it. This was accomplished, of course, by the unitasking that a good book demands and deserves.

So why isn’t unitasking in the dictionary? And what do we need to do in order to bring this word into common use, and this way of thinking into common practice?

The Brilliant Douglas Engelbart

Douglas Engelbart passed away recently. His name may be unfamiliar. His work is not.

Engelbart was an engineer who invented, among other things, your computer’s mouse, and, by extension, his work made the trackpad possible as well. In his conception, the mouse was a box with several buttons on top and the ability to move what he called an on-screen “tracking point.” In 1968, this idea was radically new. I encourage you to watch Mr. Engelbart in action by screening the video, now widely known in the hardware and software community as “The Mother of All Demos” because of all that he presents. Among the innovations: a video projection system, hyperlinking, WYSIWYG (what you see is what you get–the basis of word processing and more), teleconferencing and more. He’s clearly having a wonderful time with this demo, very proud of what has been accomplished, keen on the possibilities for a future that we all now accept as routine.

Douglas Engelbart in “The Mother of All Demos,” as this hour-plus presentation has come to be known.

You’re actually able to point at the information you’re trying to retrieve, and then move it

Intrigued? Here’s a look at the input station. On the right is the mouse; at the center is the keyboard; and on the left is an interesting five-switch input device that allows quick typing by holding down each of the five keys in various combinations to enter characters without using the keyboard (some of these ideas were later revised for current trackpad use).

A very early version of a computer mouse as explained by its inventor, Douglas Engelbart.
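
Why five switches are enough: pressed in combination, they yield 2^5 - 1 = 31 distinct chords, more than an alphabet’s worth, without ever reaching for the full keyboard. Here is a minimal sketch in Python of how such a chord decoder might work; the chord-to-character mapping below is purely illustrative and is not Engelbart’s actual assignment.

    from itertools import combinations

    SWITCHES = range(5)   # five binary keys, pressed alone or in combination

    def all_chords():
        """Enumerate every non-empty combination of the five switches: 31 in all."""
        for size in range(1, 6):
            yield from combinations(SWITCHES, size)

    # Hypothetical mapping: letters a-z for the first 26 chords, placeholders afterward.
    # This is not Engelbart's actual chord assignment -- purely an illustration.
    CHORD_TO_CHAR = {
        chord: (chr(ord("a") + i) if i < 26 else "<sym%d>" % (i - 26))
        for i, chord in enumerate(all_chords())
    }

    def decode(pressed):
        """Translate the set of switches currently held down into a character."""
        return CHORD_TO_CHAR.get(tuple(sorted(pressed)), "?")

    print(len(CHORD_TO_CHAR))        # 31 possible chords
    print(decode({0}))               # one switch alone -> 'a' under this toy mapping
    print(decode({0, 1, 2, 3, 4}))   # all five switches held down -> '<sym4>'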

Still unsure about whether this video is worth your time? Think of it as a TED Talk, circa 1968.

Hungry for more? Watch this video on the Doug Engelbart Institute website. Here, he speaks about collective learning and the need for a central knowledge repository. The video was recorded in 1998, shortly after the internet first became popular. His vision recalls the era when we all dreamed about what the internet might someday be.

The complexity and urgency of the problems faced by us earth-bound humans are increasing much faster than are our aggregate capabilities for understanding and coping with them. This is a very serious problem; and there are strategic actions we can take, collectively. – Doug Engelbart

Outsourcing the Human Brain

(Copyright 2006 by Zelphics [Apple Bushel])

Before we start outsourcing, let’s prepare an inventory and analysis with this concept in mind:

“Our intelligence has enabled us to overcome the restrictions of our biological heritage and to change ourselves in the process. We are the only species that does this.”

And, this one:

“We are capable of hierarchical thinking, of understanding a structure composed of diverse elements arranged in a pattern, representing that arrangement with a symbol, and then using that symbol as an element in an even more elaborate configuration.”

Simple though it may sound, we can think in terms of not just one apple but, say, a bushel filled with 130 medium-sized apples, enough to fill about 15 apple pies.

We call this vast array of recursively linked ideas knowledge. Only Homo sapiens has a knowledge base that itself evolves, grows exponentially, and is passed from one generation to another.

Remember Watson, the computer whose total Jeopardy! score more than doubled the scores of its two expert competitors? He (she, it?) “will read medical literature (essentially all medical journals and leading medical blogs) to become a master diagnostician and medical consultant.” Is Watson smart, or simply capable of storing and accessing vast stores of data? Well, that depends upon what you mean by the word “smart.” You see, “the mathematical techniques that have evolved in the field of artificial intelligence (such as those used in Watson and Siri, the iPhone assistant) are mathematically very similar to the methods that biology evolved in the form of the neocortex.” (From Science Daily: “[The neocortex is part of the brain and] is involved in higher functions such as sensory perception, generation of motor commands, spatial reasoning, conscious thought, and, in humans, language.”)

Genius author Ray Kurzweil has spent a lifetime studying the human brain, and, in particular, the ways in which the brain processes information. You know his work: it is the basis of the speech recognition we now take for granted in Siri, telephone response systems, Dragon, and other systems. No, it’s not perfect. Human speech and language perception are deeply complicated affairs. In his latest book, How to Create a Mind: The Secret of Human Thought Revealed, Kurzweil first deconstructs the operation of the human brain, then considers the processing and storage resources required to replicate at least some of those operations with digital devices available today or likely to be available in the future. At first, this seems like wildly ridiculous thinking. A hundred pages later, it’s just an elaborate math exercise built on a surprisingly rational foundation.

Much of Kurzweil’s theory grows from his advanced understanding of pattern recognition, the ways we construct digital processing systems, and the (often similar) ways that the neocortex seems to work (nobody is certain how the brain works, but we are gaining a lot of understanding as a result of various biological and neurological mapping projects). A common grid structure seems to be shared by digital and human brains. A tremendous number of pathways turn on or off, at very fast speeds, in order to enable processing, or thought. There is tremendous redundancy, as evidenced by patients who, after brain damage, are able to relearn but who place the new thinking in different (non-damaged) parts of the neocortex.

Where does all of this fanciful thinking lead? Try this:

“When we augment our own neocortex with a synthetic version, we won’t have to worry about how much additional neocortex can physically fit into our bodies and brains as most of it will be in the cloud, like most of the computing we use today.”

What’s more:

“In order for a digital neocortex to learn a new skill, it will still require many iterations of education, just as a biological neocortex does today, but once a digital neocortex somewhere and at some time learns something, it can share that knowledge with every other digital neocortex without delay. We can each have our own neocortex extenders in the cloud, just as we have our own private stores of personal data today.”

So the obvious question is: how soon is this going to happen?

2023.

Skeptical? Watch Henry Markram’s 2009 TED Talk, “A Brain in a Supercomputer.”

In terms of our understanding, this video is already quite old. Kurzweil: “The spatial resolution of noninvasive scanning of the brain is improving at an exponential rate.” In other words, new forms of MRI and diffusion tractography (which traces the pathways of fiber bundles inside the brain) are among the many new tools that scientists are using to map the brain and to understand how it works. In isolation, that’s simply fascinating. Taken in combination with equally ambitious, long-term growth in computer processing and storage, our increasingly nuanced understanding of brain science makes increasingly human-like computing processes more and more viable. Hence Watson on Jeopardy! or, if you prefer, Google’s driverless cars, which must navigate so many real-time decisions and seem to be accomplishing these tasks with greater precision and safety than their human counterparts.

Is the mind a computer? This is an old argument, and although Kurzweil provides both the history and the science and psychology behind all sides of the argument, nobody is certain. The tricky question is defining consciousness, and, by extension, defining just what is meant by a human mind. After considering these questions through the Turing Test, ideas proposed by Roger Penrose (video below), faith and free will, and identity, Kurzweil returns to the more comfortable domain of logic and mathematics, filling the closing chapter with charts that promise the necessary growth in computing power to support a digital brain that will, during the first half of this century, redefine the ways we (or our digital accessory brains) think about learning, knowledge and understanding.

Closing out, some thoughts from Penrose, then Kurzweil, both on video:

The Multiplier Effect

Quickly now… If you multiply 633 by 11, what’s the answer?

No doubt, you recognize the pattern, and you may recall the mental math process:

633 x 10, plus 633 x 1, or 6,330 plus 633, or 6,963, which is the answer (or, in terms used by math teachers, the “product”).

There is another way to solve the problem, a faster way that assures fewer computational errors, and does not involve any sort of digital or mechanical device. It does, however, involve a simple rule and a different way to write the problem down.

The rule is: “write down the number, add the neighbor.” The asterisk just above each number is there only to help you to focus. If you prefer, think of it as a small arrow.

Here’s how it works:

Multiplying by 11: a worked example.

Try multiplying 942 x 11  and you’ll quickly get the hang of it.

Do it once more, this time with a much larger number: 8,562,320 x 11. It goes quickly, as you’ll see.
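
If you would rather see the rule as an algorithm, here is a minimal sketch in Python. The digit-by-digit representation and the function name are my own; the rule itself is the one above: write down the number, add the neighbor, working right to left and carrying as needed.

    def times_11(n: int) -> int:
        """Trachtenberg's x11 rule: working right to left, each answer digit is
        the digit plus its right-hand neighbor, carrying when the sum exceeds 9."""
        digits = [0] + [int(d) for d in str(n)]   # leading 0 handles the leftmost step
        answer, carry = [], 0
        for i in range(len(digits) - 1, -1, -1):
            neighbor = digits[i + 1] if i + 1 < len(digits) else 0
            total = digits[i] + neighbor + carry
            answer.append(total % 10)
            carry = total // 10
        if carry:
            answer.append(carry)
        return int("".join(str(d) for d in reversed(answer)))

    print(times_11(633))       # 6963
    print(times_11(942))       # 10362
    print(times_11(8562320))   # 94185520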

Multiplying by 12 is just as easy, but the rule changes to: “double the number, add the neighbor.” Here, my explanation includes specific numbers.

Multiplying by 12: a worked example.
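
The same sketch adapts to multiplication by 12 with a one-line change, doubling each digit before adding the neighbor (again, an illustrative sketch rather than Trachtenberg’s own notation):

    def times_12(n: int) -> int:
        """Trachtenberg's x12 rule: double each digit, then add its right-hand
        neighbor, working right to left and carrying as needed."""
        digits = [0] + [int(d) for d in str(n)]
        answer, carry = [], 0
        for i in range(len(digits) - 1, -1, -1):
            neighbor = digits[i + 1] if i + 1 < len(digits) else 0
            total = 2 * digits[i] + neighbor + carry   # the only change from the x11 rule
            answer.append(total % 10)
            carry = total // 10
        if carry:
            answer.append(carry)
        return int("".join(str(d) for d in reversed(answer)))

    print(times_12(942))   # 11304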

In fact, there is a similar rule for multiplication by any number (1-12). And there are rules for quickly adding long, complicated columns of numbers, as there are for division, square roots and more.

These rules were developed by a man facing his own demise in the Nazi camps during the Second World War. Danger was nothing new to him. This is the story and the enduring legacy of Jakow Trachtenberg, who first escaped the wrath of the Communists by fleeing his native Russia, then became a leading academic voice for world peace. His book, Das Friedensministerium (The Ministry of Peace), was read by FDR and other world leaders. His profile was high; capture was inevitable. He made it out of Austria, got caught in Yugoslavia, and was sentenced to death at a concentration camp. To maintain his sanity, Trachtenberg developed a new system for mathematical calculation. Paper was scarce, so he used it mostly for proofs. The rest, he kept in his head.

Madame Trachtenberg stayed nearby, in safety. She bribed officials, pulled strings, and managed to get Jakow moved to Dresden, which was a mess, allowing him to escape. Then, he was caught again, and was moved to Trieste. More bribes and coercion from Madame. He escaped. The couple maneuvered into a more normal existence, beginning at a refugee camp in Switzerland. By 1950, they were running the Mathematical Institute in Zurich, teaching young students a new way to think about numbers. A system without multiplication tables. A system based upon logic. A system that somehow survived.

A system that, against all odds, made it into my elementary classroom. One classroom in the New York City school district. For one year. The parents were certain that the teacher was making a terrible mistake, that the people in my class, myself included, would never be able to do math in the conventional way again. Of course, we learned a lot more than an alternative form of arithmetic.

And now, after decades out of print, in an era when arithmetic hardly matters because of calculators and computers, the original book is back in print. The brilliance of the system remains awesome, and the book is worth reading just to understand how Trachtenberg conceived an entirely fresh approach under the most extraordinary circumstances.
