Welcome to the Connectome

Diffusion spectrum image shows brain wiring in a healthy human adult. The thread-like structures are nerve bundles, each containing hundreds of thousands of nerve fibers.
Source: Van J. Wedeen, M.D., MGH/Harvard U.

You may recall recent coverage of a major White House initiative: mapping the brain. In that statement, there is ambiguity. Do we mean the brain as a body part, or do we mean the brain as the place where the mind resides? Mapping the genome–the sequence of the four types of molecules (nucleotides) that compose your DNA–is so far along that it will soon be possible, for a very reasonable price, to purchase your personal genome pattern.

A connectome is, in the words of the brilliantly clear writer and MIT scientist Sebastian Seung, “the totality of connections between the neurons in [your] nervous system.” Of course, “unlike your genome, which is fixed from the moment of conception, your connectome changes throughout your life. Neurons adjust…their connections (to one another) by strengthening or weakening them. Neurons reconnect by creating and eliminating synapses, and they rewire by growing and retracting branches. Finally, entirely new neurons are created and existing ones are eliminated, through regeneration.”

In other words, the key to who we are is not located in the genome, but instead, in the connections between our brain cells–and those connections are changing all the time. The brain, and, by extension, the mind, is dynamic, constantly evolving based upon both personal need and stimuli.

With his new book, the author proposes a new field of science for the study of the connectome, the ways in which the brain behaves, and the ways in which we might change that behavior. It isn’t every day that I read a book in which the author proposes a new field of scientific endeavor, and, to be honest, it isn’t every day that I read a book about anything that draws me back into reading even when my eyes (and mind) are too tired to continue. “Connectome” is one of those books that is so provocative, so inherently interesting, so well-written, that I’ve now recommended it to a great many people (and now, to you as well).

Seung is at his best when exploring the space between brain and mind, the overlap between how the brain works and how thinking is made possible. For example, he describes how the brain recognizes the idea of Jennifer Aniston, a job that is done not by one neuron, but by a group of them, each recognizing a specific aspect of what makes Jennifer Jennifer. Blue eyes. Blonde hair. Angular chin. Add enough details and the descriptors point to one specific person. The neurons put the puzzle together and trigger a response in the brain (and the mind). What’s more, you need not see Jennifer Aniston. You need only think about her and the neurons respond. And the connection between these various neurons is strengthened, ready for the next Jennifer thought. The more you think about Jennifer Aniston, the more you think about Jennifer Aniston.
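If it helps to picture that strengthening, here is a toy Python sketch of my own (nothing from Seung’s book; the feature names, learning rate, and numbers are invented): a few feature “neurons” whose pairwise connections grow a little stronger every time they are active together, so the whole assembly becomes easier to trigger the more it is used.

```python
from itertools import combinations

class ToyAssembly:
    def __init__(self, neurons):
        # Symmetric connection strengths between every pair, all starting weak.
        self.weights = {frozenset(pair): 0.0 for pair in combinations(neurons, 2)}

    def think_about(self, active_neurons, rate=0.1):
        """Hebbian-style step: strengthen every connection among co-active neurons."""
        for pair in combinations(sorted(active_neurons), 2):
            self.weights[frozenset(pair)] += rate

    def strength(self, a, b):
        return self.weights[frozenset((a, b))]

features = ["blue eyes", "blonde hair", "angular chin"]
brain = ToyAssembly(features)

# The more the features fire together, the stronger the links between them.
for _ in range(10):
    brain.think_about(features)

print(brain.strength("blue eyes", "blonde hair"))  # ~1.0 after ten "thoughts"
```

The real machinery is enormously more complicated, of course; the sketch only shows the shape of the idea that connections used together get stronger together.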

From here, it’s a reasonable jump to the question of memory. As Seung describes the process, it’s a matter of strong neural connections becoming even stronger through additional associations (Jennifer and Brad Pitt, for example), repetition (in all of those tabloids?), and ordering (memory is aided by placing, for example, the letters of the alphabet in order). No big revelations here–that’s how we all thought it worked–but Seung describes the ways in which scientists can now measure the relative power (the “spike”) of the strongest impulses. Much of this comes down to the image resolution finally available to long-suffering scientists who had the theories but not the tools necessary for confirmation or further exploration.

Next stop: learning. Here, Seung focuses on the random impulses first experienced by the neurons, and then, through repetition and the gradual reinforcement of patterns, a bird song emerges. Not quickly, nor easily, but as a result (in the case of the male zebra finches he describes in an elaborate example) of tens of thousands of attempts, the song emerges and can then be repeated because the neurons are, in essence, properly aligned. Human learning has its rote components, too, but our need for complexity is greater, and so the connectome and its network of connections is far more sophisticated, and measured in far greater quantities, than that of a zebra finch. In both cases, the concept of a chain of neural responses is the key.
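To make the trial-and-error flavor of that process concrete, here is a hypothetical Python sketch (my simplification, not Seung’s model; the “tutor song” is just a list of numbers): an attempted song starts as random notes, one note is varied at a time, and changes are kept only when they do not move the attempt further from the target.

```python
import random

TARGET = [3, 1, 4, 1, 5, 9, 2, 6]   # the "tutor song" as a sequence of notes
NOTES = range(10)

def error(song):
    """How many notes still differ from the tutor song."""
    return sum(a != b for a, b in zip(song, TARGET))

song = [random.choice(NOTES) for _ in TARGET]        # babbling: random notes
attempts = 0
while error(song) > 0:
    attempts += 1
    candidate = song[:]
    candidate[random.randrange(len(song))] = random.choice(NOTES)  # vary one note
    if error(candidate) <= error(song):              # keep changes that don't hurt
        song = candidate

print(song, "learned after", attempts, "attempts")
```

Real birdsong learning is vastly richer, but the loop has the same shape: variation, evaluation, retention.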

Watch the author deliver his 2010 TED Talk.

From here, the book becomes more appealing, perhaps, to fans of certain science fiction genres. Seung becomes fascinated with the implications of cryonics, or the freezing of a brain for later use. Here, he covers some of the territory familiar from Ray Kurzweil’s “How to Create a Mind” (discussed below). The topic of fascination: once we understand the brain and its electrical patterns, is it possible to save those patterns of impulses in some digital device for subsequent sharing and/or retrieval? I found myself less taken with this theoretical exploration than with the heart and soul of, well, the brain and mind that Seung explains so well. Still, this is what we’re all wondering: at what point does human brain power and computing brain power converge? And when they do, how much control will we (as opposed to, say, Amazon or Google) exert over the future of what we think, what’s important enough to save, and what we hope to accomplish?

Outsourcing the Human Brain

(Copyright 2006 by Zelphics [Apple Bushel])

Before we start outsourcing, let’s prepare an inventory and analysis with this concept in mind:

Our intelligence has enabled us to overcome the restrictions of our biological heritage and to change ourselves in the process. We are the only species that does this.”

And, this one:

We are capable of hierarchical thinking, of understanding a structure composed of diverse elements arranged in a pattern, representing that arrangement with a symbol, and then using that symbol as an element in an even more elaborate configuration.”

Simple though it may sound, we may think in terms of not just one apple, but, say, a bushel filled with 130 medium-sized apples, enough to fill about 15 apple pies.

We call this vast array of recursively linked ideas knowledge. Only Homo sapiens has a knowledge base that itself evolves, grows exponentially, and is passed from one generation to another.
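A minimal Python sketch of that hierarchy, using the numbers from the apple example above (the apples-per-pie figure is my assumption, not Kurzweil’s): each level is a symbol built from the level below it, and the higher symbol can be manipulated without re-counting individual apples.

```python
APPLES_PER_BUSHEL = 130      # the bushel from the example above
APPLES_PER_PIE = 8.5         # assumption: a medium pie uses roughly 8-9 apples

def pies_from_bushels(bushels):
    """Climb the hierarchy: bushels -> apples -> pies."""
    return round(bushels * APPLES_PER_BUSHEL / APPLES_PER_PIE)

print(pies_from_bushels(1))   # about 15 pies from one bushel
```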

Remember Watson, the computer whose total Jeopardy! score more than doubled the scores of its two expert competitors? He (she, it?) “will read medical literature (essentially all medical journals and leading medical blogs) to become a master diagnostician and medical consultant.” Is Watson smart, or simply capable of storing and accessing vast stores of data? Well, that depends upon what you mean by the word “smart.” You see, “the mathematical techniques that have evolved in the field of artificial intelligence (such as those used in Watson and Siri, the iPhone assistant) are mathematically very similar to the methods that biology evolved in the form of the neocortex.” (From Science Daily: the neocortex is the part of the brain “involved in higher functions such as sensory perception, generation of motor commands, spatial reasoning, conscious thought, and in humans, language.”)

Genius author Ray Kurzweil has spent a lifetime studying the human brain, and, in particular, the ways in which the brain processes information. You know his work: it is the basis of the speech recognition we now take for granted in Siri, telephone response systems, Dragon, and other systems. No, it’s not perfect. Human speech and language perception are deeply complicated affairs. In his latest book, How to Create a Mind: The Secret of Human Thought Revealed, Kurzweil first deconstructs the operation of the human brain, then considers the processing and storage resources required to replicate at least some of those operations with digital devices available today or likely to be available in the future. At first, this seems like wildly ridiculous thinking. A hundred pages later, it’s just an elaborate math exercise built on a surprisingly rational foundation.

Much of Kurzweil’s theory grows from his advanced understanding of pattern recognition, the ways we construct digital processing systems, and the (often similar) ways that the neocortex seems to work (nobody is certain how the brain works, but we are gaining a lot of understanding as a result of various biological and neurological mapping projects). A common grid structure seems to be shared by the digital and human brains. A tremendous number of pathways turn on or off, at very fast speeds, in order to enable processing, or thought. There is tremendous redundancy, as evidenced by patients who, after brain damage, are able to relearn but who place the new thinking in different (non-damaged) parts of the neocortex.
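Here is my own loose illustration in Python, not Kurzweil’s model (the features and threshold are invented): a small unit that “fires” when enough of its expected sub-features are present, which is one way to picture how a pattern recognizer can tolerate noisy or missing inputs and why redundancy helps.

```python
def recognizer(expected, threshold=0.6):
    """Return a unit that 'fires' when enough of its expected features are active."""
    def fire(active):
        matched = len(expected & active) / len(expected)
        return matched >= threshold
    return fire

# A recognizer for the printed letter "A", built from three sub-features.
sees_A = recognizer({"horizontal bar", "left diagonal", "right diagonal"})

print(sees_A({"left diagonal", "right diagonal"}))   # True: 2 of 3 features is enough
print(sees_A({"horizontal bar"}))                    # False: too little evidence
```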

Where does all of this fanciful thinking lead? Try this:

When we augment our own neocortex with a synthetic version, we won’t have to worry about how much additional neocortex can physically fit into our bodies and brains as most of it will be in the cloud, like most of the computing we use today.”

What’s more:

In order for a digital neocortex to learn a new skill, it will still require many iterations of education, just as a biological neocortex does today, but once a digital neocortex somewhere and at some time learns something, it can share that knowledge with every other digital neocortex without delay. We can each have our own neocortex extenders in the cloud, just as we have our own private stores of personal data today.”

So the obvious question is: how soon is this going to happen?

2023.

Skeptical? Watch the 2009 TED Talk by Henry Markram, “A Brain in a Supercomputer.”

In terms of our understanding, this video is already quite old. Kurzweil: “The spatial resolution of noninvasive scanning of the brain is improving at an exponential rate.” In other words, new forms of MRI and diffusion tractography (which traces the pathways of fiber bundles inside the brain) are among the many new tools that scientists are using to map the brain and to understand how it works. In isolation, that’s simply fascinating. Taken in combination with equally ambitious, long-term growth in computer processing and storage, our increasingly nuanced understanding of brain science makes increasingly human-like computing processes more and more viable. Hence, Watson on Jeopardy!, or, if you prefer, Google’s driverless cars, which must navigate through so many real-time decisions and seem to be accomplishing these tasks with greater precision and safety than their human counterparts.

Is the mind a computer? This is an old argument, and although Kurzweil provides both the history and the science and psychology behind all sides of the argument, nobody is certain. The tricky question is defining consciousness, and, by extension, defining just what is meant by a human mind. After considering these questions through the Turing Test, ideas proposed by Roger Penrose (video below), faith and free will, and identity, Kurzweil returns to the more comfortable domain of logic and mathematics, filling the closing chapter with charts that promise the necessary growth in computing power to support a digital brain that will, during the first half of this century, redefine the ways we think (or the ways our digital accessory brains think) about learning, knowledge and understanding.

Closing out, some thoughts from Penrose, then Kurzweil, both on video:

The Mind of Howard Gardner

From his Harvard bio: one of my personal heroes. Few academics have captured my imagination, and affected my thinking, as consistently or as deeply as Howard Gardner.

Harvard Professor Howard Gardner has written more than a dozen books with the word “mind” in the title. Few researchers have spent so much of their professional careers thinking about how our minds work, whether our minds might be better trained, and whether our minds can be put to better use. He’s a brilliant thinker, and I have thoroughly enjoyed reading his evolving work over these past few decades.

Earlier this year, with co-author Emma Laskin, Gardner republished Leading Minds: An Anatomy of Leadership with a new introduction, and that led me to 5 Minds for the Future, a slim book that captures his evolving philosophy in a succinct, deeply meaningful way.

From the start, Gardner’s 5 Minds for the Future is more contemporary, acknowledging the tangentially overlapping work of Daniel Pink, Stephen Colbert (“truthiness”), and the enormous changes brought about by globalization. Gardner is famous for his theories about multiple intelligences (“M.I.” these days), but M.I. is not what this book is about. Instead, Gardner presents his case as a progression from basic to higher-level thinking, and his hope that we will climb the evolutionary ladder as a collective enterprise.

He begins by revisiting one of his favorite themes, the disciplined mind (which provided both title and subject matter for his 1999 book). Here, the goal is mastery, which requires a minimum of a decade’s intense participation, a thorough examination of all relevant ideas and approaches, deep study to understand both the facts and the underlying fundamentals, and interdisciplinary connections. This is serious work, and it must be accomplished despite the sometimes crazy ways that schools think about learning, and the equally crazy ways that the workplace may value or advance those with growing expertise. The disciplined mind does not simply accept what has been written or taught. Instead, the disciplined mind challenges assumptions, and digs deep so that it may apply intelligence when conventional thinking does not produce valuable results. No surprise that Gardner is deeply critical of those who invest less than a decade in any serious endeavor, or those who fake it in other ways.

Next up the ladder is the synthesizing mind, which accomplishes its work by organizing, classifying, and expanding its base of knowledge by borrowing from related (and unrelated) fields. Placing ideas into categories is an important step up the ladder because the process requires both (a) a full understanding of specific disciplines and how they relate to one another, and (b) the means to convey these ideas to others. And so, Gardner views the Bible (a collection of moral stories), Charles Darwin’s theories, Picasso’s Guernica, and Michael Porter’s writings about strategy as related endeavors. At first, this seems to be a stretch. Then again, each of these is a bold combination of ideas based upon a complete understanding of a domain–(a) above–conveyed in a way that connects people to the synthesized ideas (b).

You may know Mihaly Csikszentmihalyi as the author of the excellent book FLOW, but his best work may be a book simply entitled CREATIVITY.

Then, there’s the creating mind. At this stage, the progression begins to make a lot of sense. Novel approaches are not based upon random ideas that may or may not work. Instead, the creating mind grows from deep study of a specific domain in a disciplined manner, followed by various attempts to organize that knowledge in ways that propel an argument forward. At a certain point, the argument has been advanced, and the opportunity for new thinking presents itself. Many creative professionals are required to advance new ideas without the requisite discipline, and so, our society generates lots of ephemeral stuff. In the creative space, Gardner’s thinking has been affected by Mihaly Csikszentmihalyi, who believes:

creativity only occurs when–and only when–an individual or group product is recognized by the relevant field as innovative, and, sooner or later, exerts a genuine, detectible influence on subsequent work in that domain.”

I would argue that the respectful mind ought to precede the disciplined mind as the ladder’s first rung, and Gardner provides ample evidence to support my argument. For one thing, the respectful mind is the only one of Gardner’s five minds that can be nurtured beginning at birth. What’s more, the ability to “understand and work effectively with peers, teachers and staff” would seem to be a prerequisite for any disciplined approach to learning and personal development. The whole chapter is nicely encapsulated by a sentence from renowned preschool teacher Vivian Paley:

You can’t say ‘you can’t play.'”

A decade ago, Gardner, Csikszentmihalyi, and William Damon wrote a book called Good Work, and that effort has expanded into The Good Work Project. Central to this effort is the ethical mind, which carries a meaning well beyond the ethical treatment of others. Here, we begin to touch upon the idea of professional or societal calling, and one’s role within a profession or domain. It begins with doing the best work possible–that is, the work of the highest quality, as well as work of redeeming social value–but it’s not just the work itself, it’s the way that you apply yourself to the job at hand. Here, Gardner covers the diligent newcomer, the mid-life worker who continues to pursue excellence every day, and the older mentor or trustee whose role is to encourage others to build beyond what has already been accomplished.

In less than 200 pages, Gardner accomplishes a great deal. If you have time for only two Gardner books, I would start with Frames of Mind, which explains his theory of multiple intelligences, then jump to 5 Minds for the Future. After these two, you’ll probably want more. His book about leadership, mentioned above and discussed below, is certainly worthwhile. And Good Work will fill your head with wonderful ideas and inspiration for all you could do to help make the world a better place.

BTW: If you want to watch Gardner discuss 5 Minds for the Future, you’ll find his 45-minute video here.

As for Leading Minds, it’s an extraordinary book, a collection of analytical biographies written as parts of a whole, a cognitive view of leaders and leadership. He examines leaders by taking apart their fundamental identity stories: who they are, how their domain and influence grew, how and why they succeeded, how and why they were unable to accomplish their ultimate goals. This is not a book whose core ideas can be reduced to a few bullet points. Instead, it’s a few hundred pages of reflection on the nature of leadership shown through the examples of Albert Einstein, Mahatma Gandhi, Martin Luther King, Jr., Alfred P. Sloan, Eleanor Roosevelt, and a half dozen other 20th-century figures. The significance of some names is fading; it was disappointing to find that this revised edition of a 1995 work did not include anyone who made his or her mark in the 21st century.
