Unitasking

With an iPhone in one pocket and an iPad in another, my shoulder bag allows me to work pretty much anywhere. I work in the car (Bluetooth headphones when I’m driving; iPad when my wife drives). I grab a free half-hour and knock items off my OmniFocus master task management system, which syncs, completely and reliably, the lists on my iPhone, iPad, and iMac. I read a lot of books and articles. I suppose my life runs toward busier-than-average. After a vacation week without email, with the iPad used only to homebrew detours around nasty interstate backups, I’ve given some thought to the clear benefit of unitasking.

My auto-correct just underlined the word “unitasking” because the computer did not recognize it. I checked with Merriam-Webster; my search for “unitasking” returned “the word you’ve entered is not in the dictionary.” When I checked “multitasking,” the first sense read: “the concurrent performance of several jobs by a computer” and the second read: “the performance of multiple tasks at one time.”

Google Search: “multitask” returned these and many other images.

Of course, computers and humans multitask all the time. Computers run multiple operations at high speeds, some simultaneously, some sequentially. As I write, I run my fingers along the keys and press buttons at surprisingly high speeds; I read what I am writing on the screen; and I think ahead to the next few words, and further out, to the next setup for the next idea; I am aware of grammar, clarity, flow and word choices; and I make corrections and adjustments along the way. As I write, I am also aware of the total word count, keeping my statements brief because I am writing for the internet (so far so good–my current word count is 279).

To ease the writing process, I listen to music. For six dollars, I recently purchased a 6-LP box of Bach Cantatas on the excellent Archiv label, and I am becoming familiar with them by playing the music in the background as I write. I do not find this distracting, but the moment the telephone rings when I’m in mid-thought, I become instantly grouchy.

I wonder why.

When I write, I focus on the writing, but writing is not a continuous process as, for example, hiking a mountain might be. I write for a few seconds, perhaps cast a phrase, then pause, listen to the soprano or the horns for a moment, then write a few sentences in a burst of energy, then pause again to collect thoughts. I am thinking sequentially, not requiring my brain or body to do several things at one time. So far, this morning, the scheme seems to be working (441 words so far).

Do I multitask?

You betcha, but not when I write because writing requires so much of my focused attention.

Just before vacation, I attended a meeting with two dozen other people, all seated for discussion around a large rectangular conference table in a Chicago airport hotel. Most of the participants were CEOs or people with similar responsibilities. From time to time, half of the people in the room were sufficiently engaged in the discussion to lift their eyes from their iPads (few used computers; most used iPads). Most of the time, just a few people were deeply engaged. Part of the problem: the meeting was similar to the meeting held last summer in the same location discussing the same topics. If there’s low engagement, then the active brain fills in with activities promising higher degrees of engagement, and the iPad provides an irresistible alternative to real life. Of course, if more of the people looked up from their iPads and engaged in the conversation not-quite-happening in the room where they were seated, it might well have been worthy of more of everyone’s attention. This raises the stakes for those who plan such meetings–if the meeting does not offer sufficient nourishment, minds drift.

Once again, that makes me wonder about the value of multitasking. Each of us spent a good $1,000 to meet in Chicago, to discuss matters of importance in the company of one another. During the few times when the whole group was fully engaged, the engagement was not the result of multitasking. Instead, there was a single topic presented, and a single, well-managed discussion. In short, we engaged in a unitasking activity.

One final thought before we all drift back into our bloated to-do lists. On vacation, I like to read a good story, well-crafted fiction by an author who really knows how to write. This time, I tried Jeffrey Lent’s A Peculiar Grace. Reading in the car, in the hotel room late into the night, whenever a good half hour’s free time presented itself, I found myself thinking about the characters, the story, the setting, the overall feeling of the book, even when I was not reading it. This was accomplished, of course, by the unitasking that a good book demands and deserves.

So why isn’t unitasking in the dictionary? And what do we need to do in order to bring this word into common use, and this way of thinking into common practice?

The Brilliant Douglas Engelbart

Douglas Engelbart passed away recently. His name may be unfamiliar. His work is not.

Engelbart was an engineer who invented, among other things, your computer’s mouse, and, by extension, his work made the trackpad possible as well. In his conception, the mouse was a box with several buttons on top and the ability to move what he called an on-screen “tracking point.” In 1968, this idea was radically new. I encourage you to watch Mr. Engelbart in action by screening the video now widely known as “The Mother of All Demos” in the hardware and software community because of all that he presents. Among the innovations: a video projection system, hyperlinking, WYSIWYG (what you see is what you get–the basis of word processing and more), teleconferencing and more. He’s clearly having a wonderful time with this demo, very proud of what has been accomplished, keen on the possibilities for a future that we all now accept as routine.

Douglas Engelbart in “The Mother of All Demos,” as this hour-plus presentation has come to be known.

You’re actually able to point at the information you’re trying to retrieve, and then move it

Intrigued? Here’s a look at the input station. On the right is the mouse; at the center is the keyboard; and on the left is an interesting five-switch input device that allows quick typing by pressing the five keys in various combinations to enter characters without using the keyboard (some of these ideas were later revised for current trackpad use).
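
To get a feel for why five switches go such a long way, here is a quick combinatorial sketch in Python; the key labels and the letter assignment are mine, purely for illustration, not Engelbart’s actual chord code.

```python
from itertools import combinations
from string import ascii_lowercase

KEYS = ["k1", "k2", "k3", "k4", "k5"]  # hypothetical labels for the five keys

# Every non-empty combination of the five keys is one possible chord:
# 2**5 - 1 = 31 of them, more than enough to cover the 26 letters.
chords = [combo for size in range(1, len(KEYS) + 1)
          for combo in combinations(KEYS, size)]
print(len(chords))  # 31

# A purely illustrative letter assignment (not Engelbart's actual code).
letter_for_chord = dict(zip(chords, ascii_lowercase))
print(letter_for_chord[("k1",)], letter_for_chord[("k1", "k2")])  # a f
```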

A very early version of a computer mouse as explained by its inventor, Douglas Engelbart.

Still unsure about whether this video is worth your time? Think of it as a TED Talk, circa 1968.

Hungry for more? Watch this video on the Doug Engelbart Institute website. Here, he speaks about collective learning and the need for a central knowledge repository. The video was recorded in 1998, shortly after the internet first became popular. His vision recalls the era when we all dreamed about what the internet might someday be.

The complexity and urgency of the problems faced by us earth-bound humans are increasing much faster than are our aggregate capabilities for understanding and coping with them. This is a very serious problem; and there are strategic actions we can take, collectively. – Doug Engelbart

Welcome to the Connectome

Diffusion spectrum image shows brain wiring in a healthy human adult. The thread-like structures are nerve bundles, each containing hundreds of thousands of nerve fibers.
Source: Van J. Wedeen, M.D., MGH/Harvard U. To learn more about the government’s new connectome project, click on the brain.

You may recall recent coverage of a major White House initiative: mapping the brain. In that statement, there is ambiguity. Do we mean the brain as a body part, or do we mean the brain as the place where the mind resides? Mapping the genome–the sequence of the four types of molecules (nucleotides) that compose your DNA–is so far along that it will soon be possible, for a very reasonable price, to purchase your personal genome pattern.

A connectome is, in the words of the brilliantly clear writer and MIT scientist Sebastian Seung, “the totality of connections between the neurons in [your] nervous system.” Of course, “unlike your genome, which is fixed from the moment of conception, your connectome changes throughout your life. Neurons adjust…their connections (to one another) by strengthening or weakening them. Neurons reconnect by creating and eliminating synapses, and they rewire by growing and retracting branches. Finally, entirely new neurons are created and existing ones are eliminated, through regeneration.”

In other words, the key to who we are is not located in the genome, but instead, in the connections between our brain cells–and those connections are changing all the time. The brain, and, by extension, the mind, is dynamic, constantly evolving based upon both personal need and stimuli.

With his new book, the author proposes a new field of science for the study of the connectome: the ways in which the brain behaves, and the ways in which we might change that behavior. It isn’t every day that I read a book in which the author proposes a new field of scientific endeavor, and, to be honest, it isn’t every day that I read a book about anything that draws me back into reading even when my eyes (and mind) are too tired to continue. “Connectome” is one of those books that is so provocative, so inherently interesting, so well-written, that I’ve now recommended it to a great many people (and now, to you as well).

Seung is at his best when exploring the space between brain and mind, the overlap between how the brain works and how thinking is made possible. For example, he describes how the brain represents the idea of Jennifer Aniston–a job done not by one neuron, but by a group of them, each recognizing a specific aspect of what makes Jennifer Jennifer. Blue eyes. Blonde hair. Angular chin. Add enough details and the descriptors point to one specific person. The neurons put the puzzle together and trigger a response in the brain (and the mind). What’s more, you need not see Jennifer Aniston. You need only think about her and the neurons respond. And the connection between these various neurons is strengthened, ready for the next Jennifer thought. The more you think about Jennifer Aniston, the more you think about Jennifer Aniston.

From here, it’s a reasonable jump to the question of memory. As Seung describes the process, it’s a matter of strong neural connections becoming even stronger through additional associations (Jennifer and Brad Pitt, for example), repetition (in all of those tabloids?), and ordering (memory is aided by placing, for example, the letters of the alphabet in order). No big revelations here–that’s how we all thought it worked–but Seung describes the ways in which scientists can now measure the relative power (the “spike”) of the strongest impulses. Much of this comes down to the image resolution finally available to long-suffering scientists who had the theories but not the tools necessary for confirmation or further exploration.
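
Seung’s measurements are vastly more sophisticated than this, but the basic bookkeeping behind “connections that fire together grow stronger” can be sketched in a few lines of Python; the toy neuron names, starting weights, and learning rate below are my own illustrative choices, not anything taken from the book.

```python
# Toy bookkeeping for "connections that fire together grow stronger."
# The neuron names, starting weights, and learning rate are illustrative only.
learning_rate = 0.1
weights = {("blue_eyes", "jennifer"): 0.2, ("blonde_hair", "jennifer"): 0.2}

def co_activate(pre: str, post: str) -> None:
    """Strengthen the pre -> post connection each time both neurons fire together."""
    weights[(pre, post)] = weights.get((pre, post), 0.0) + learning_rate

# Every repeated "Jennifer thought" (or tabloid sighting) reinforces the same links...
for _ in range(5):
    co_activate("blue_eyes", "jennifer")
    co_activate("blonde_hair", "jennifer")

print(weights)  # both connections are now stronger, so the next recall comes more easily
```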

Next stop: learning. Here, Seung focuses on the random impulses first experienced by the neurons, out of which, through repetition of patterns, something like a bird song emerges. Not quickly, nor easily, but as a result of tens of thousands of attempts (in the case of the male zebra finches he describes in an elaborate example), the song emerges and can then be repeated because the neurons are, in essence, properly aligned. Human learning has its rote components, too, but our need for complexity is greater, and so the connectome and its network of connections are far more sophisticated, and measured in far greater quantities, than those of a zebra finch. In both cases, the concept of a chain of neural responses is the key.

Watch the author deliver his 2010 TED Talk.

From here, the book becomes more appealing, perhaps, to fans of certain science fiction genres. Seung becomes fascinated with the implications of cryonics, or the freezing of a brain for later use. Here, he covers some of the territory familiar from Ray Kurzweil’s “How to Create a Mind” (recently, a topic of an article here). The topic of fascination: once we understand the brain and its electrical patterns, is it possible to save those patterns of impulses in some digital device for subsequent sharing and/or retrieval? I found myself less taken with this theoretical exploration than with the heart and soul of, well, the brain and mind that Seung explains so well. Still, this is what we’re all wondering: at what point does human brain power and computing brain power converge? And when they do, how much control will we (as opposed to, say, Amazon or Google) exert over the future of what we think, what’s important enough to save, and what we hope to accomplish?

Outsourcing the Human Brain

(Copyright 2006 by Zelphics [Apple Bushel])

Before we start outsourcing, let’s prepare an inventory and analysis with this concept in mind:

Our intelligence has enabled us to overcome the restrictions of our biological heritage and to change ourselves in the process. We are the only species that does this.”

And, this one:

We are capable of hierarchical thinking, of understanding a structure composed of diverse elements arranged in a pattern, representing that arrangement with a symbol, and then using that symbol as an element in an even more elaborate configuration.”

Simple though it may sound, this means we can think in terms of not just one apple but, say, a bushel filled with 130 medium-sized apples, enough to fill about 15 apple pies.

We call this vast array of recursively linked ideas knowledge. Only Homo sapiens has a knowledge base that itself evolves, grows exponentially, and is passed from one generation to another.

Remember Watson, the computer whose total Jeopardy! score more than doubled the scores of its two expert competitors? He (she, it?) “will read medical literature (essentially all medical journals and leading medical blogs) to become a master diagnostician and medical consultant.” Is Watson smart, or simply capable of storing and accessing vast stores of data? Well, that depends upon what you mean by the word “smart.” You see, “the mathematical techniques that have evolved in the field of artificial intelligence (such as those used in Watson and Siri, the iPhone assistant) are mathematically very similar to the methods that biology evolved in the form of the neocortex.” (From Science Daily: “[the neocortex is part of the brain and] is involved in higher functions such as sensory perception, generation of motor commands, spatial reasoning, conscious thought, and, in humans, language.”)

Genius author Ray Kurzweil has spent a lifetime studying the human brain, and, in particular, the ways in which the brain processes information. You know his work: it is the basis of the speech recognition we now take for granted in Siri, telephone response systems, Dragon, and other systems. No, it’s not perfect. Human speech and language perception are deeply complicated affairs. In his latest book, How to Create a Mind: The Secret of Human Thought Revealed, Kurzweil first deconstructs the operation of the human brain, then considers the processing and storage resources required to replicate at least some of those operations with digital devices available today or likely to be available in the future. At first, this seems like wildly ridiculous thinking. A hundred pages later, it’s just an elaborate math exercise built on a surprisingly rational foundation.

Much of Kurzweil’s theory grows from his advanced understanding of pattern recognition, the ways we construct digital processing systems, and the (often similar) ways that the neocortex seems to work (nobody is certain how the brain works, but we are gaining a lot of understanding as a result of various biological and neurological mapping projects). A common grid structure seems to be shared by the digital and human brains. A tremendous number of pathways turn on or off, at very fast speeds, in order to enable processing, or thought. There is tremendous redundancy, as evidenced by patients who, after brain damage, are able to relearn but who place the new thinking in different (non-damaged) parts of the neocortex.

Where does all of this fanciful thinking lead? Try this:

When we augment our own neocortex with a synthetic version, we won’t have to worry about how much additional neocortex can physically fit into our bodies and brains as most of it will be in the cloud, like most of the computing we use today.”

What’s more:

In order for a digital neocortex to learn a new skill, it will still require many iterations of education, just as a biological neocortex does today, but once a digital neocortex somewhere and at some time learns something, it can share that knowledge with every other digital neocortex without delay. We can each have our own neocortex extenders in the cloud, just as we have our own private stores of personal data today.”

So the obvious question is: how soon is this going to happen?

2023.

Skeptical? Click the image and watch the 2009 TED Talk by Henry Markram. It’s called “A Brain in a Supercomputer.”

In terms of our understanding, this video is already quite old. Kurzweil: “The spatial resolution of noninvasive scanning of the brain is improving at an exponential rate.” In other words, new forms of MRI and diffusion tractography (which traces the pathways of fiber bundles inside the brain) are among the many new tools that scientists are using to map the brain and to understand how it works. In isolation, that’s simply fascinating. Taken in combination with equally ambitious, long-term growth in computer processing and storage, our increasingly nuanced understanding of brain science makes increasingly human-like computing processes more and more viable. Hence, Watson on Jeopardy!–or, if you prefer, Google’s driverless cars, which must navigate so many real-time decisions and seem to be accomplishing these tasks with greater precision and safety than their human counterparts.

Is the mind a computer? This is an old argument, and although Kurzweil provides both the history and the science / psychology behind all sides of the argument, nobody is certain. The tricky question is defining consciousness, and, by extension, defining just what is meant by a human mind. After considering these questions through the Turing Test, ideas proposed by Roger Penrose (video below), faith and free will, and identity, Kurzweil returns to the more comfortable domain of logic and mathematics, filling the closing chapter with charts that promise the necessary growth in computing power to support a digital brain that will, during the first half of this century, redefine the ways we think (or, our digital accessory brains think) about learning, knowledge and understanding.

Closing out, some thoughts from Penrose, then Kurzweil, both on video:

Big Ideas Simply Explained

Three subjects that I can never seem to understand as completely as I would like:

  • Philosophy
  • Economics
  • Psychology

Whenever I read a book about any of these subjects, I feel like a student, which means I am reading because duty requires me to complete the book. The subjects interest me, but too many of the books I have read on these subjects are dreary, slow-moving, and too dense with ideas for any reasonable person to sort out and retain. Pictures help, but many of the ideas held within these disciplines are difficult to illustrate with anything better than wordy diagrams.

A year or so ago, I noticed a series of three books put together by Dorling Kindersley (DK)’s collaborative teams in the UK and India. They’ve got the formula right, and as a result, I have spent the last year happily browsing, and learning, from:

  • The Philosophy Book: Big Ideas Simply Explained
  • The Economics Book: Big Ideas Simply Explained
  • The Psychology Book: Big Ideas Simply Explained

A month or so ago, the same company released The Politics Book: Big Ideas Simply Explained, and at some point, I’ll get to that one, too. Right now, I’m still working my way through the first three volumes (about 1,000 pages total).

So what’s so special?

First, there is no single author. The collaborative approach focuses on presentation, clarity and consistency. This is less the work of a brilliant psychology teacher, more like a good old-fashioned browse through, say, The World Book Encyclopedia from days of old. The type treatments are bold. There are pull-out quotes. There is color. No single idea runs more than a few pages. Everything is presented in a logical flow. There are boxes filled with biographical details. There is a clear statement of predecessor ideas and influences for each idea, and there is an equally clear statement about those in the future who built upon each idea. There are color pictures and diagrams. It’s tidy, presented for smart adult readers but certainly suitable research material for any school report.

The Philosophy Book is written by four academics and two writers: Will Buckingham is a philosopher and novelist with a special interest in the interplay between philosophy and narrative storytelling. Marcus Weeks is a writer and author. Clive Hill is an academic focused on intellectualism in the modern world. Douglas Burnham is a philosophy professor and prolific writer on the subject. Peter J. King is a doctor of philosophy who lectures at Pembroke College, University of Oxford. John Marenbon is a Fellow of Trinity College, Cambridge, UK, whose expertise is medieval philosophy. Taken as a group, they’ve got their philosophical bases covered (each of the books is put together by a team with similar skills). Marcus Weeks is the connection between all three books.

The bright yellow Philosophy book introduces the whole idea in comfortable language:

Philosophy is…a chance simply to wonder what life and the universe are all about…Philosophy is not so much about coming up with the answers to fundamental questions as it is about the process of trying to find out those answers, using reasoning rather than accepting…conventional views or conventional authority.”

So begins an introductory essay that introduces debate and dialogue, existence and knowledge, logic and language, morality, religion, and systems of thought and beliefs. A red color burst is the bridge into a timeline that begins the conversation in 624 B.C.E. And so, early on, we meet Pythagoras, who should be famous for more than his geometric theorem. In 428 B.C.E.–that’s about 2,500 years ago–Pythagoras developed a remarkable idea, that everything in the universe conforms to mathematical rules and ratios, and determined that this was true both of forms and ideas. Pythagoras was the leader of a religious cult, in which he was the Messiah, and his followers thought of his work as revelations. Here was a man for whom reasoning was the secret of the universe. He wrote, or said:

There is geometry in the humming of the strings, there is music in the spacing of the spheres.”

And:

Reason is immortal. All else is mortal.”

Turn the page and there’s Siddhartha Gautama and Buddhism’s four noble truths, explained in terms that anybody can understand, followed by the Eightfold Path presented in the Dharma Wheel. Siddhartha is covered in four good pages, and then, it’s time for Confucius and his Five Conscious Relationships.

All three of these men–Pythagoras, Siddhartha and Confucius–lived and worked around 500 B.C.E. More or less, they were contemporaries. A century later, philosophy turns to what is later called science, as Democritus and Leucippus come up with the idea of atoms and the emptiness of space. (Seemed very early to me, too!) At about the same time, this from Socrates:

The life which is unexamined is not worth living.”

Jumping ahead to the middle of the book, Britain’s David Hume is considering human nature in the mid-1700s, and, in particular, the ways we cobble together facts:

In our reasonings concerning fact, there are all imaginable degrees of assurance. A wise man therefore proportions his beliefs to the evidence.”

Thinking in the present day, Palestinian philosopher Edward Said criticizes imperialism, Australian Peter Singer advocates for animal rights, and Bulgarian-born French philosopher Julia Kristeva questions the relationship between feminism and power. It’s a large field, and with The Philosophy Book, it’s possible for the average person to navigate with greater confidence than before.

The other two books are equally good.

The Economics Book begins with an article about Thomas Aquinas’s thoughts on prices, markets, and morality, then moves to the provision of public goods with thoughts by David Hume, whose words from the 1700s certainly resonate today:

Where the riches are engrossed by a few, these must contribute very largely to the supplying of the public necessities.”

Hume is among the few whose ideas appear in more than one of these volumes. And–I just noticed–The Philosophy Book tends toward stories about the people behind the ideas, while The Economics Book tends more toward the ideas themselves, with less frequent stories about the people behind them (often because economic ideas are credited to multiple sources, I suppose). We make our way through The Age of Reason (“man is a cold, rational calculator;” “the invisible hand of the market brings order”); on to economic bubbles (beginning with tulip mania in 1640); game theory and John (A Beautiful Mind) Nash; market uncertainty, Asian Tiger economies, the intersection of GDP and women’s issues, inequality and economic growth, and more. Great book, but a bit slower going than Philosophy.

Third in the trilogy is the bright red volume, The Psychology Book. As early as the year 190 in the current era, Galen of Pergamon (in today’s Turkey) is writing about the four temperaments of personality–melancholic, phlegmatic, choleric, and sanguine. Rene Descartes bridges all three topics–Philosophy, Economics and Psychology overlap with one another–with his thinking on the role of the body and the role of the mind as wholly separate entities. We know the name Binet (Alfred Binet) from the world of standardized testing, but the core of his thinking has nothing whatsoever to do with standardized thinking. Instead, he believed that intelligence and ability change over time. In his early testing, Binet intended to capture a helpful snapshot of one specific moment in a person’s development. And so the tour through human (and animal) behavior continues with Pavlov and his dogs, John B. Watson and his use of research to build the fundamentals of advertising, B.F. Skinner’s birds, Solomon Asch’s experiments to uncover the weirdness of social conformity, Stanley Milgram’s creepy experiments in which people inflict pain on others, Jean Piaget on child development, and work on autism by Simon Baron-Cohen (he’s Sacha Baron Cohen’s cousin).

When I was in high school and college, I was exposed to all of this stuff, but only a small amount remained in my mind. Perhaps that was because I was also trying to read the complete works of Shakespeare, a book a week of modern utopian fiction, The Canterbury Tales, and studying geology at the same time. In high school and college, these topics were just more stuff to plough through. No context, no life experience, no connection to most of the material. Now, as an adult, it’s different. Like everyone I know, and everyone you know, I’m still juggling way too much in an average week, but I can now read this material with a real hope of understanding and retaining it. Cover to cover, times three, these books will take you a year or two, but… without a test the next morning, you’ll be surprised how interesting philosophy, psychology and economics turn out to be. Just read them in your spare time, and behold (great word, “behold”) the ways in which humans have put it all together over several millennia. It’s a terrific story!

The Multiplier Effect

Quickly now… If you multiply 633 by 11, what’s the answer?

No doubt, you recognize the pattern, and you may recall the mental math process:

633 x 10, plus 633 x 1, or 6,330 plus 633, or 6,963, which is the answer (or, in terms used by math teachers, the “product”).

There is another way to solve the problem, a faster way that assures fewer computational errors, and does not involve any sort of digital or mechanical device. It does, however, involve a simple rule and a different way to write the problem down.

The rule is: “write down the number, add the neighbor.” The asterisk just above each number is there only to help you to focus. If you prefer, think of it as a small arrow.

Here’s how it works:

[Worked example: multiplying by 11]

Try multiplying 942 x 11 and you’ll quickly get the hang of it.

Do it once more, this time with a much larger number: 8,562,320 x 11. It goes quickly, as you’ll see.
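
If you would rather check the rule than take it on faith, here is a small Python sketch of the “write down the number, add the neighbor” procedure; the function name and the digit-list bookkeeping are mine, but the steps follow the rule described above.

```python
def multiply_by_11(n: int) -> int:
    """'Write down the number, add the neighbor': work right to left,
    adding each digit to the digit on its right, carrying when needed."""
    digits = [0] + [int(d) for d in str(n)]   # leading 0 handles the final step
    out, carry = [], 0
    for i in range(len(digits) - 1, -1, -1):
        neighbor = digits[i + 1] if i + 1 < len(digits) else 0
        total = digits[i] + neighbor + carry
        out.append(total % 10)                # write down the units digit
        carry = total // 10                   # carry anything above 9
    if carry:
        out.append(carry)
    return int("".join(map(str, reversed(out))))

print(multiply_by_11(633))        # 6963
print(multiply_by_11(942))        # 10362
print(multiply_by_11(8_562_320))  # 94185520
```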

Multiplying by 12 is just as easy, but the rule changes to: “double the number, add the neighbor.” Here, my explanation includes specific numbers.

[Worked example: multiplying by 12]
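
The same sketch adapts to the “double the number, add the neighbor” rule with a one-line change; again, the code is only an illustration of the rule.

```python
def multiply_by_12(n: int) -> int:
    """'Double the number, add the neighbor': same bookkeeping as the rule for 11,
    except each digit is doubled before its right-hand neighbor is added."""
    digits = [0] + [int(d) for d in str(n)]
    out, carry = [], 0
    for i in range(len(digits) - 1, -1, -1):
        neighbor = digits[i + 1] if i + 1 < len(digits) else 0
        total = 2 * digits[i] + neighbor + carry
        out.append(total % 10)
        carry = total // 10
    if carry:
        out.append(carry)
    return int("".join(map(str, reversed(out))))

print(multiply_by_12(633))  # 7596
print(multiply_by_12(942))  # 11304
```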

In fact, there is a similar rule for multiplication by any number (1-12). And there are rules for quickly adding long, complicated columns of numbers, as there are for division, square roots and more.

These rules were developed by a man facing his own demise in the Nazi camps during the Second World War. Danger was nothing new to him…this is the story and the enduring legacy of Jakow Trachtenberg, who first fled the wrath of the Communists by escaping his native Russia, then became a leading academic voice for world peace. His book, Das Friedensministerium (The Ministry of Peace), was read by FDR and other world leaders. His profile was high; capture was inevitable. He made it out of Austria, got caught in Yugoslavia, and was sentenced to death at a concentration camp. To maintain his sanity, Trachtenberg developed a new system for mathematical calculation. Paper was scarce, so he used it mostly for proofs. The rest, he kept in his head.

Madame Trachtenberg stayed nearby, in safety. She bribed officials, pulled strings, and managed to get Jakow moved to Dresden, which was a mess, allowing him to escape. Then, he was caught again, and was moved to Trieste. More bribes and coercion from Madame. He escaped. The couple maneuvered into a more normal existence beginning at a refugee camp in Switzerland. By 1950, they were running the Mathematical Institute in Zurich, teaching young students a new way to think about numbers. A system without multiplication tables. A system based upon logic. A system that somehow survived.

A system that, against all odds, made it into my elementary classroom. One classroom in the New York City school district. For one year. The parents were certain that the teacher was making a terrible mistake, that the people in my class, myself included, would never be able to do math in the conventional way again. Of course, we learned a lot more than an alternative form of arithmetic.

And now, after decades out of print, in an era when arithmetic hardly matters because of calculators and computers, the original book is back in print. The brilliance of the system remains awesome, and the book is worth reading just to understand how Trachtenberg conceived an entirely fresh approach under the most extraordinary circumstances.

The Key to Fun and Learning

For many years, scholars have debated the aesthetics of film (or, with greater pretense, “cinema”) and the mass culture associated with television (or, with less pretense, “TV” or “the idiot box”). Videogames make for more interesting study because they combine the sound and images with the 21st century version of interactivity. Stories aren’t watched–they’re played. Characters aren’t observed–they’re enacted by the participant. It’s rich stuff.

So here’s my new hero, Constance Steinkuehler, a University of Wisconsin assistant professor who studies the intersection between videogames, science and cognition. Currently she’s on a leave of absence, working at the White House in the Office of Science and Technology Policy. I first encountered Ms. Steinkuehler while listening to NPR’s Tell Me More in April. Then, I found a video, and I realized how much I/we could learn from her.

So I took all of our plans and I threw them out the window. Structured stuff? Not going to work…If I talk at them, they are not going to listen to me. So, we’re just going to do this weird, radical thing. We’re just going to…play next to them. When an interest comes up, we’ll be like, well, you know, the place to read more about that would be “x”…Once we turned it around to a ‘follow their interests’ kind of a model, everything shifted. And it worked.

She’s talking about how learning works, and she’s using videogames as the basis for that learning among teen boys who were part of her project, chosen because they did not do well in school. She paid attention to the ways in which they preferred to learn, and here’s what happened:

So for example, we had a reader who was in tenth grade who read at the sixth grade level. [He was not] doing well in school. I handed him a fifteenth grade level text (from the game) and he was reading it with absolutely fine comprehension, 94, maybe 96% accuracy…”

Why?

When they choose the text, when they actually care about it, they actually fix their own comprehension problems…”

These quotes are lifted from the video below.

Steinkuehler is not the only academic who is thinking deeply about videogames and learning. This page does a good job of providing an overview of the videogame industry, and includes several videos that will stimulate your thinking about what games mean and why they are important. (The embedded TED talk is quite good because it covers bits about the industry and bits about game design.)

In this field, one original source of light is James Paul Gee, who explains, simply, that every videogame is a set of problems to be solved in order to win. His book, What Video Games Have to Teach Us About Learning and Literacy, is an excellent place to begin thinking seriously about videogames. So, too, is this introductory video:

Carnegie Mellon’s Jesse Schell will take your thinking further. He’s a game designer, an author, and someone who is thinking about games and learning in very exciting new ways. You may have seen Jesse’s TED talk, but you may not have seen his TEDx talk which is, ultimately, about how games (by design) encourage collaboration and shared learning styles, and how well-designed games respect the learner in ways that school often does not.

BTW: Score yourself 100 extra points if you recognized this article’s title, “The Key to Fun and Learning,” as the tagline that appeared on most Milton Bradley board games. Double your score if you recognized the bearded man as Milton himself, a pioneer in games that were fun and also provided a learning experience. Triple your score if you knew that Mr. Bradley started out by making game and puzzle kits for Civil War soldiers to occupy their time in camp (remember, those guys were, mostly, teenagers).

The Mind of Howard Gardner

From his Harvard bio, one of my personal heroes. Few academics have captured my imagination, and affected my thinking, as consistently or as deeply as Howard Gardner.

Harvard Professor Howard Gardner has written more than a dozen books with the word “mind” in the title. Few researchers have spent so much of their professional careers thinking about how our minds work, whether our minds might be better trained, and whether our minds can be put to better use. He’s a brilliant thinker, and I have thoroughly enjoyed reading his evolving work over these past few decades.

Earlier this year, with co-author Emma Laskin, Gardner republished Leading Minds: An Anatomy of Leadership with a new introduction, and that led me to 5 Minds for the Future, a slim book that captures his evolving philosophy in a succinct, deeply meaningful way.

From the start, Gardner’s 5 Minds for the Future is more contemporary, acknowledging the tangentially overlapping work of Daniel Pink, Stephen Colbert (“truthiness”), and the enormous changes brought about by globalization. Gardner is famous for his theories about multiple intelligences (“M.I.” these days), but M.I. is not what this book is about. Instead, Gardner presents his case as a progression from basic to higher-level thinking, and his hope that we will climb the evolutionary ladder as a collective enterprise.

He begins by revisiting one of his favorite themes, the disciplined mind (which provided both title and subject matter for his 1999 book). Here, the goal is mastery, which requires a minimum of a decade’s intense participation, a thorough examination of all relevant ideas and approaches, deep study to understand both the facts and the underlying fundamentals, and interdisciplinary connections. This is serious work, and it must be accomplished despite the sometimes crazy ways that schools think about learning, and the equally crazy ways that the workplace may value or advance those with growing expertise. The disciplined mind does not simply accept what has been written or taught. Instead, the disciplined mind challenges assumptions, and digs deep so that it may apply intelligence when conventional thinking does not produce valuable results. No surprise that Gardner is deeply critical of those who invest less than a decade in any serious endeavor, or those who fake it in other ways.

Next up the ladder is the synthesizing mind, which accomplishes its work by organizing, classifying, and expanding its base of knowledge, borrowing from related (and unrelated) fields. Placing ideas into categories is an important step up the ladder because the process requires both (a) a full understanding of specific disciplines and how they relate to one another, and (b) the means to convey these ideas to others. And so, Gardner views the Bible (a collection of moral stories), Charles Darwin’s theories, Picasso’s Guernica, and Michael Porter’s writings about strategy as related endeavors. At first, this seems to be a stretch. Then again, each of these is a bold combination of ideas based upon a complete understanding of a domain–(a) above–conveyed in a way that connects people to the synthesized ideas (b).

You may know Mihaly Csikszentmihalyi as the author of the excellent book FLOW, but his best work may be a book simply entitled CREATIVITY.

Then, there’s the creating mind. At this stage, the progression begins to make a lot of sense. Novel approaches are not based upon random ideas that may or may not work. Instead, the creating mind grows from deep study of a specific domain in a disciplined manner, followed by various attempts to organize that knowledge in ways that propel an argument forward. At a certain point, the argument has been advanced, and the opportunity for new thinking presents itself. Many creative professionals are required to advance new ideas without the requisite discipline, and so, our society generates lots of ephemeral stuff. In the creative space, Gardner’s thinking has been affected by Mihaly Csikszentmihalyi, who believes:

creativity only occurs when–and only when–an individual or group product is recognized by the relevant field as innovative, and, sooner or later, exerts a genuine, detectible influence on subsequent work in that domain.”

I would argue that the respectful mind ought to precede the disciplined mind as the ladder’s first rung, and Gardner provides ample evidence to support my argument. For one thing, the respectful mind is the only one of Gardner’s five minds that can be nurtured beginning at birth. What’s more, the ability to “understand and work effectively with peers, teachers and staff” would seem to be a prerequisite for any disciplined approach to learning and personal development. The whole chapter is nicely encapsulated by a sentence from renowned preschool teacher Vivian Paley:

You can’t say ‘you can’t play.'”

A decade ago, Gardner, Csikszentmihalyi, and William Damon wrote a book called Good Work, and this effort has expanded into The Good Work Project. Central to this effort is the ethical mind, which carries a meaning well beyond the ethical treatment of others. Here, we begin to touch upon the idea of professional or societal calling, and one’s role within a profession or domain. It begins with doing the best work possible–that is, the work of the highest quality, as well as work of redeeming social value–but it’s not just the work itself, it’s the way that you apply yourself to the job at hand. Here, Gardner covers the diligent newcomer, the mid-life worker who continues to pursue excellence every day, and the older mentor or trustee whose role is to encourage others to build beyond what has already been accomplished.

In less than 200 pages, Gardner accomplishes a great deal. If time permits you to read only two Gardner books, I would start with Frames of Mind, which explains his theory about multiple intelligences, then jump to 5 Minds for the Future. After these two, you’ll probably want more. His book about leadership, mentioned above and discussed below, is certainly worthwhile. And Good Work will fill your head with wonderful ideas and inspiration for all you could do to help make the world a better place.

BTW: If you want to watch Gardner discuss 5 Minds for the Future, you’ll find his 45-minute video here.

As for Leading Minds, it’s an extraordinary book, a collection of analytical biographies written as parts of a whole, a cognitive view of leaders and leadership. He examines leaders by taking apart their fundamental identity story: who they are, how their domain and influence grew, how and why they succeeded, how and why they were unable to accomplish their ultimate goals. This is not a book whose core ideas can be reduced to a few bullet points. Instead, it’s a few hundred pages of reflection on the nature of leadership shown through the examples of Albert Einstein, Mahatma Gandhi, Martin Luther King, Jr., Alfred P. Sloan, Eleanor Roosevelt, and a half dozen other 20th century figures. The significance of some names is fading; it was disappointing to find that this revised edition of a 1995 work did not include anyone who made his or her mark in the 21st century.

21st Century Debate

Although the series has been on the air for over five years, I discovered Intelligence Squared within the past twelve months. Last night, I watched Malcolm Gladwell argue that college football was a bad idea because it involved the bashing of heads, and that, surely, there was some other game these people could play that would not, you know, involve bashing the heads of students (or anybody else, for that matter). On his team: Buzz Bissinger, who wrote Friday Night Lights, the book behind a popular TV series about football. Bissinger (seen in the screenshot below) was strident, fierce and passionate in his well-researched beliefs: (a) colleges and universities should not be in the business of entertaining the masses, and (b) they should not be in the business of providing a farm system for professional football. On the other side, predictably, were two articulate former football players who have moved on to bright careers (presumably, they, too, have been beaten on the head several thousand times, but seemed to be okay with the way things turned out). Both were associated with FOX Sports: Tim Green and Jason Whitlock. In the advanced game of debate, their arguments proved to be less convincing.

Football is not high on my list of things I care about, but the debate was compelling (and, having now watched several episodes, it’s fair to say that some are very passionate and others are not as much fun to watch). The series is called Intelligence Squared. There are two teams and three rounds. First round: each team member presents his or her case in detail. Second round: they mix it up by arguing with one another. Third round: closing arguments. What’s the point? At the start of each show, the audience at NYU’s Skirball Center votes on a straightforward question: “Should college football be banned?” (Yes, the question is black-and-white and there are grey areas, discussed during the debate, but they are not part of the ultimate vote on the simple question.) Panelists answer questions from members of the audience. End of show: now that they have been presented with convincing arguments, the audience votes again. One team wins (Gladwell-Bissinger), the audience applauds, and we’re done for the evening.

The influence of Stanford Professor James Fishkin is evident here. Deliberative Polling also involves a baseline vote, then immersion in fact-based information seasoned by strong opinion, with a re-vote after the information has been received and processed.
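
For what it’s worth, the scoring logic fits in a few lines; the vote shares below are invented, and the winner-by-biggest-swing rule reflects my understanding of how the series decides, so treat this as a sketch rather than the official method.

```python
# Hypothetical vote shares (percent of the audience), before and after a debate.
before = {"for": 30, "against": 45, "undecided": 25}
after  = {"for": 52, "against": 40, "undecided": 8}

# Assumed rule: the side that moves the most audience members its way wins.
gains = {side: after[side] - before[side] for side in ("for", "against")}
winner = max(gains, key=gains.get)
print(winner, gains)  # for {'for': 22, 'against': -5}
```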

A look at the website suggests that this is modern media done properly. Of course, you can watch or listen to the whole debate (or an edited version, audio+video or audio only). You can listen on about 220 NPR radio stations, or watch on some public TV stations. Or, you can watch on fora.tv. For each episode, the site features a comprehensive biography of each of the four debaters, a complete transcript, and a rundown of the key points made by each debater, along with extensive links to relevant research. In short, you can watch an episode, then read a lot more from the debaters and from the thought leaders who influenced the debaters’ opinions. It’s presented in a clean, easily accessible (non-academic) way. You can easily dive right in, learn a lot in a short time (if you wish), or spend a few hours to deeply consider what was said, why it was said, and why the voting audience did or did not change its collective mind.

The topics are provocative (and always simplified so they can be stated as a yes/no question for voting). Some examples:

BTW: If you like this sort of thing, you should spend some time at fora.tv, which features an abundance of intelligent, well-informed, well-researched lectures and discussions. Much of the material is free (advertiser and foundation supported). Fora.tv goes in directions that TED does not. And isn’t it interesting that there are now hundreds of these smart media outlets available on the internet? In their way, they are taking the place of the 20th century dream of public television…with a broad range of ideas presented from every part of the world, and abundant links to related ideas and research. Much of it is free, much of it is provocative, and very little of it is actually seen on television.

A Great Idea for Great Ideas

Once upon a time, OmniGraffle software was provided free with every Apple Mac computer. That’s how I learned about it. Now, I use OmniGraffle on my iPad and the desktop. When it comes to sketching out ideas, and presenting them in a clear and colorful manner, there is no better (or easier-to-use) product.

So, what does OmniGraffle do? Well, it depends upon what you want it to do. Start with a blank sheet, or some on-screen graph paper, or set yourself up for a cloud cluster (also called a mind map), or a whiteboard, or a chalkboard. There are connected notes, so you can use it as a kind of bulletin board. Whatever works for you, you’ll find the basic template in the full Standard or Professional version for use on the Mac (a great many features are available in the iPad version, which may suffice for some users).

Choose the template, then start drawing. Easy enough to begin with a box, color it, shade it, add text, make a copy, the sorts of things that you do in PowerPoint or Keynote all the time. Here, the tools are more varied, more versatile, including a bezier tool to draw shapes as you would in Adobe Illustrator (if you don’t know how to do this, it’s worth asking someone for help, but once you understand how it works, you’ll find yourself using this tool quite often).

So, let’s say that you begin with a free-form drawing, a visual exploration, a sketch to explain an idea to yourself or to others. It begins to make some sense, so you want to change its form, maybe move into a cloud of connected ideas, or a set of related on-screen index cards, or an organizational chart with colors to indicate levels or positions. Easy to do–this software is designed for versatility, and for intuitive thinking. The results can become quite sophisticated–and yet, they are not difficult to pull together, even under the pressure of time.

Automatic layouts save time, and make everything look a lot tidier, a lot clearer. There’s quick and easy access to frequently used tools, like color palettes and the font selector. There’s a user community called Graffletopia that creates “stencils” that can be used to create, for example, a director’s plan for a film, or visualizations for software programmers. Browsing through Graffletopia, the utility of OmniGraffle becomes very clear: this is a visualization tool for working professionals. It’s easy to use, versatile, and, you’ll find, quite popular among certain knowledgeable groups.

OmniGraffle is not a drawing tool, but instead, it is a tool for making (and easily revising) diagrams. I like the language from Omni’s website: “OmniGraffle knows what makes a diagram different from a drawing, and gives you the tools to create amazing diagrams quickly and easily. Lines stay connected to their shapes, unlike with illustration programs, where you would have to redraw your diagram every time you moved something.” As someone who often uses visuals to explain–and has become quite tired of the limitations of, say, Keynote or the level of sophistication required for Adobe Illustrator–OmniGraffle feels just right to me. I find that the interface is intuitive (best if you’re already a Mac user), and that, from time to time, I need to take a moment and figure out how a tool works. That’s good–it’s just a few steps more sophisticated than my current abilities.

Most of the time, I’m sketching a diagram between meetings, capturing the basic idea. And although I can complete a pro-quality diagram on the iPad (and often do), I find myself in need of certain advanced features, such as import/export from/to Visio (a Windows-only product). Most of the time, my diagram is on the simple side: colored boxes with type, perhaps a cloud to indicate an interesting idea. By holding my finger down, then dragging, I can group my clouds and/or boxes. Better yet, a smart selection tool allows quick selection of, for example, just the blue rectangles. I can create Adobe-style layers, then copy them, or turn them on and off. Very handy, quick, and effective. Easily learned, too. In daily use, the iPad version has proven to be extremely useful, in part because it combines some of the best features found in OmniGraffle Professional (such as tables) with a sophisticated automatic diagramming tool, and a freehand tool, too.

To be clear, there are three different OmniGraffle products, each with its own unique set of benefits.

OmniGraffle for iPad costs $49.99 from the App Store–a high-priced product that turns out to be a very good value because it does so much, so easily. OmniGraffle Standard, for Mac, costs $99.99, and OmniGraffle Professional, also for Mac, costs $199.99. Compare their features here. And, happily, you can get a free trial download for either of the Mac products (and any of the many excellent Omni Group products). They do things the right way. It’s impressive.
