The Warmth of Isabel Wilkerson

Beginning around 1915, six million people left their native land hoping for a better life. Nearly all of them were Americans, but they were poor, without prospects. For the next half century, they left the South, many for northern cities where they knew a relative or felt they could find work, some for the West, where they hoped Jim Crow would not be a factor in their lives. They left in faith, and without much information. Three of them were fortunate because their stories were told, in considerable detail, by a compassionate, literate, well-informed journalist named Isabel Wilkerson. Her work, which she completed in 2010, involved thirteen years of her life and over a thousand interviews. The book is a solid ten-hour read (it’s over 500 pages), and you won’t want to miss a single story about her chosen few, the Americans whose stories she tells so well. They are: Ida Mae Brandon Gladney, Robert Joseph Pershing Foster and George Swanson Starling. Ida Mae starts out in Van Vleet, Mississippi in 1928, and is still living when the book ends. Leaving Monroe, Louisiana far behind, Robert survives a punishing trip to the California of his dreams, and becomes a wealthy doctor in Los Angeles with a soft spot for people in need. George is a bit of a troublemaker in his native Florida, and ends up working on a New York-Florida train while living a new life in Harlem. (I use their first names because of the kinship that the author kindled in me; I feel as though I knew them from the neighborhood.)

Ida Mae, with flowers in her hair, sharecroppers’ daughter, living in Chicago in the 1930s

Wilkerson takes care to paint a full picture of these people, their lives back down South, their struggles in making the decision to leave, the tough times they endured during their period of relocation, family and friends who weave in and out of their lives. The sense of never quite being at home is a constant companion; so is the sense that they don’t completely belong where they ended up. They resolve these conflicts in their own minds, sometimes rationalizing, sometimes considering just how fortunate their lives became, sometimes trying to untangle the equally tangled thoughts and behaviors of others.

Young Doctor Robert Foster in the years before he made enough money to do anything he pleased.

George was known as “schoolboy” because he was among the few citrus workers in his area who had attended any college at all. His father talked him out of continuing, and George spent the rest of his life wondering what might have been.

Wilkerson also scores scholarly points by resolving not to accept common knowledge. Her responsibility to Ida Mae, Robert and George is powerful, and she insists on providing commentary and context to keep the reader on track and clear about what actually happened, and why it matters.

Intrigued? Watch an excellent hour-plus interview with Ms. Wilkerson on the award-winning public affairs series that survived the old New Jersey Network and now resides at Rutgers University. Find it here.

In a Word, “Curious”

What’s the secret of life? Of course, the answer is in a book with a single-word title, Curious? The back cover has nine words, 58 characters: “Embrace uncertainty. Attract love and abundance. Master your life.”

All of this makes me want to write an answer book called “Seriously?” but the author, a clinical psychologist and professor at George Mason University, deserves more than the Twitter-obsessed publisher allows. His name is Todd Kashdan, and although I suspect curiosity may not be, as the subtitle promises, a way to “Discover the Missing Ingredient to a Fulfilling Life” (shouldn’t that “to” be “in” or “for”?), there’s too much good stuff in this book for me to pass it by.

Mostly, like every creative person, I’m curious about curiosity. I seem to have it in larger doses than most people, and I think I like that about myself. My friends tend to be curious, too, and they tend to value this in themselves. In fact, I enjoyed a long telephone conversation with a friend not six months ago on this very subject—he was analyzing generational differences in the workplace, and thought our generation pursued curiosity with greater energy than the current one.

Of course, Dr. Kashdan touches on school as curiosity-killer (“Do it now, ask questions later. Stay away from strangers. Avoid controversial topics and hot-button issues”), but I think he’s better when he’s positive, and consistent with the thinking of the positive psychology movement in academia, where he plays a part. When describing how and why “Curiosity is about recognizing and reaping the rewards of the uncertain, the unknown and the new…,” he explains that there is a “simple story line for how curiosity is the engine of growth.”

By being curious, we explore.

By exploring, we discover.

When this is satisfying, we are more likely to repeat it.

By repeating it, we develop competence and mastery.

By developing competence and mastery, our knowledge and skills grow.

As our knowledge and skills grow, we stretch who we are and what our life is about.

So “curiosity begets more curiosity.” Fair enough. But that’s just the starting place. When he offers curiosity as the opposite of certainty, and broadens the argument to society’s need for closure, specific answers, one way of looking at the world, his arguments become insights:

Curiosity creates possibilities; the need for certainty narrows them.

Curiosity creates energy; the need for certainty depletes.

Curiosity results in exploration; the need for certainty creates closure.

Curiosity creates movement; the need for certainty is about replaying events.

Curiosity creates relationships; the need for certainty creates defensiveness.

Curiosity is about discovery; the need for certainty is about being right.

At first, I didn’t think much of this list, but the more I worked on a new project about knowledge and understanding, the more I realized the value of Dr. Kashdan’s insights.

Photo of the author, Todd Kashdan, by Adam Auel

It’s easy to see how this material can be brought into a wider domain: curiosity results in personal fulfillment, happiness, a healthy mental outlook, a purpose to life, and so on. He encourages openness in the style of so many self-help books, and here’s where my fascination begins to wane, mostly because I’ve read it all before: “When walking outside the house, I will gently guide my attention so I can be intrigued by every bodily movement and whatever sights, sounds and smells are within my range.” I don’t understand why anybody who is taking a walk would fill their ears with music, but that’s because I enjoy listening to the natural world. Does experience open my mind to every possibility? Not sure. I think I’m listening to birdsong, looking at autumn leaves and winter branches, and taking whiffs of honeysuckle when it’s in season. That’s enough for me.

If you find self-help books useful, you might add this one to your library. There are chapters about “The Rewards of Relationships” and “The Anxious Mind and the Curious Spirit,” and, almost inevitably, “Discovering Meaning and Purpose in Life.”

I think curiosity is powerful on its own terms: as an antidote to the routine, a door that opens to creative and divergent thought, as a pathway to learning lots of things. Secret of life? Maybe. I’ll leave that one up to you.

Goodbye, Columbus

Juan Ponce de León discovered “America” but Columbus gets the credit!

(Hello, Ponce de León. What a story you have to tell! Those who are impatient may scroll down about two-thirds of the way to the part I’ve marked in red, white (grey, really) and blue.)

It’s an odd story, one that brings tomatoes to Italy, and eventually celebrates a favorite son for something he didn’t do.

You know that the Vikings first showed up in what is now North America. That happened about a thousand years ago. Some Vikings stayed for a while, started families and settlements. The first child of European descent born on these shores was probably named “Snorri,” a name I’ve always liked.

For several thousand years before the Vikings visited, there were natives in North America and South America. They probably arrived, well, by taking the l-o-n-g way around, on foot and on animal, working their way up from Africa, then through Asia, and across the land bridge into what is now Alaska. Perhaps they arrived in other ways, but that seems less likely because boats were small and unsophisticated, and oceans were large and dangerous to navigate.

During the 1400s, Europeans were becoming rich by trading goods found in Asia. Mostly, these goods traveled on the Silk Roads, a series of trade routes that were subject to piracy, tribal feuds, and every kind of evil deed. There were all sorts of theories about the best way to travel not by land, but by sea. Nobody was particularly frightened about falling off the earth; the idea that the world was round, and that circumnavigation was possible, was accepted long before Columbus showed up. (The flat-earth scare is one of the earliest urban legends, utter nonsense promoted in fanciful children’s books for a time.)

Columbus was an entrepreneur in search of capital for his new enterprise. He put together half the necessary funds, and found the rest by sweet-talking King Ferdinand and Queen Isabella of Spain. They promised him a cut of the riches, and a ridiculous title, Admiral of the Ocean Sea. And they agreed to provide three ships. All for the glory of Spain, and the gold that everyone believed he would find. Make no mistake: it was all about the gold.

He took a wrong turn.

He was heading for what he believed was Japan, or, at least, Asia. Instead, he found an island in what is now the Caribbean Sea. (Certainly, Columbus Day should not be celebrated as a milestone in navigation history.)

Remember: Columbus was an entrepreneur. Perhaps it is that spirit that we should celebrate on Columbus Day. Certainly, there are very good reasons not to celebrate him at all, unless, of course, you share a very dark view of America and what it represents to the world.

Columbus kept a diary. Here, he writes about the native people, the Taino or Arawak people who greeted his crew with curiosity and apparent kindness.

“They are very simple and honest and exceedingly liberal with all they have, none of them refusing anything he may possess if he is asked for it. They exhibit great love toward all others in preference to themselves.”

You’ll recall the Niña, the Pinta and the Santa María–the three ships provided by Spain for the first voyage. The Pinta’s captain defied Columbus’ orders, and abandoned the fleet. The Santa María was destroyed on a reef. Columbus high-tailed it back to Spain on the Niña, grabbing a bit of gold, kidnapping some natives. A second voyage was authorized, this time with the specific intention of becoming rich with gold. The Taino people were instructed, in no uncertain terms, to FIND THE GOLD.

Dressed in Taino garb and makeup, two contemporary Dominican girls demonstrate that these were real people with families and traditions. Each year, we celebrate an American hero who killed most of the Taino people.

Gold was not to be found. Columbus treated the Taino severely. He cut off their hands. (Happy Columbus Day!)

Third voyage. This time, a priest named Bartolomé de las Casas joined, and kept a diary. It’s filled with documentation, generally considered reliable, about Columbus’ treatment of the natives: forced labor, brutality, horrific violence against children, babies murdered by swinging them against trees or feeding them to dogs. From the priest’s diary:

The Spaniards “thought nothing of knifing Indians by tens and twenties and of cutting slices off them to test the sharpness of their blades,” wrote Las Casas. “My eyes have seen these acts so foreign to human nature, and now I tremble as I write.”

We celebrate Columbus Day because it was the beginning of the new world. In a twisted way, this is apt: the United States is the nation that was settled, mostly, by killing the natives who lived in this land. Those who believe that there is a greater reason for the celebration, an uplifting of humankind, the initiation of an era of discovery, should probably consider where Mr. Columbus went, and did not go. No account brings Columbus into what is now the U.S.A. He traveled to several Caribbean islands, notably Hispaniola (now Haiti and the Dominican Republic).

Who discovered “America”? That’s a very challenging question. Let’s rephrase it: “Who discovered the United States of America?” would rule out Canada, Mexico and the Caribbean islands.

The earliest answer would seem to be the people who crossed Beringia, the land bridge into Alaska, around 16,000 BCE (Before the Common Era). Focusing only on the lower 48, there’s evidence dating back to about 13,000 BCE, known as the Clovis sites.

The Vikings showed up, but probably not in what became the U.S.A. Sadly, our early attempts to invade, annex, or build a new country with friends nearby all failed, so Canada became a separate nation. After that, several hundred years go by without much interest in, or capability for, exploration, pretty much until Columbus and his kind.

Juan Ponce de León traveled with Columbus on his second voyage. He was a volunteer, a gentleman from a noble family. There were 200 such gentlemen.

For your reference, here’s a map showing Hispaniola (currently occupied by Haiti and Dominican Republic), Puerto Rico, and nearby Florida.

Columbus and his entourage apparently visited Borinquen, which we now call Puerto Rico. (In fact, when Puerto Rico finally becomes a U.S. state, the Columbus legend will come true: in that case, he would have been the explorer who discovered what became the United States of America. [For those who wish to make a case that Puerto Rico is a territory of the U.S., so technically this is true today, I ask why, if Puerto Rico plays such an important role in American history, it has not been invited to join the club.])

In any case, as a result of his military leadership (de León was involved in a notable native massacre), he became Governor of the Spanish territory. Natives told him of a land to the northwest, a land that could be reached by “crossing many rivers.” He told the King, but remained as Governor until he lost out in a tussle with–who else–the son of Christopher Columbus, who was legally enforcing his father’s rights. Eventually, the King stopped the political nastiness, and after de León returned to Spain, he outfitted three ships and headed for some unexplored lands. He found what is now Florida on April 2, 1513.

Every year, we celebrate Columbus Day in the USA. Many of our Spanish-speaking neighbors in the western hemisphere celebrate Día de la Raza instead; it is, in many places, a celebration of the race, not Columbus the explorer.

Somehow, on April 2, 2013–exactly 500 years after the first European explorer set foot on what is now a U.S. state, the first moment when Europeans visited the part of the New World that became our nation–we did nothing.

Welcome to the Connectome

Diffusion spectrum image shows brain wiring in a healthy human adult. The thread-like structures are nerve bundles, each containing hundreds of thousands of nerve fibers. Source: Van J. Wedeen, M.D., MGH/Harvard U. To learn more about the government’s new connectome project, click on the brain.

You may recall recent coverage of a major White House initiative: mapping the brain. In that statement, there is ambiguity. Do we mean the brain as a body part, or do we mean the brain as the place where the mind resides? Mapping the genome–the sequence of the four types of molecules (nucleotides) that compose your DNA–is so far along that it will soon be possible, for a very reasonable price, to purchase your personal genome pattern.

A connectome is, in the words of the brilliantly clear writer and MIT scientist Sebastian Seung, “the totality of connections between the neurons in [your] nervous system.” Of course, “unlike your genome, which is fixed from the moment of conception, your connectome changes throughout your life. Neurons adjust…their connections (to one another) by strengthening or weakening them. Neurons reconnect by creating and eliminating synapses, and they rewire by growing and retracting branches. Finally, entirely new neurons are created and existing ones are eliminated, through regeneration.”

In other words, the key to who we are is not located in the genome, but instead, in the connections between our brain cells–and those connections are changing all the time. The brain, and, by extension, the mind, is dynamic, constantly evolving based upon both personal need and stimuli.

With his new book, the author proposes a new field of science for the study of the connectome, the ways in which the brain behaves, and the ways in which we might change how it behaves. It isn’t every day that I read a book in which the author proposes a new field of scientific endeavor, and, to be honest, it isn’t every day that I read a book about anything that draws me back into reading even when my eyes (and mind) are too tired to continue. “Connectome” is one of those books that is so provocative, so inherently interesting, so well-written, that I’ve now recommended it to a great many people (and now, to you as well).

Seung is at his best when exploring the space between brain and mind, the overlap between how the brain works and how thinking is made possible. For example, he describes how the brain represents the idea of Jennifer Aniston, a job that is done not by one neuron, but by a group of them, each recognizing a specific aspect of what makes Jennifer Jennifer. Blue eyes. Blonde hair. Angular chin. Add enough details and the descriptors point to one specific person. The neurons put the puzzle together and trigger a response in the brain (and the mind). What’s more, you need not see Jennifer Aniston. You need only think about her and the neurons respond. And the connection between these various neurons is strengthened, ready for the next Jennifer thought. The more you think about Jennifer Aniston, the more you think about Jennifer Aniston.
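That mechanism is easy to see in miniature. Here is a toy sketch (my own illustration, not Seung’s actual model, and vastly simpler than real neurons): a concept cell fires only when enough of its feature-detecting neurons are active, and each co-activation strengthens the connections, so the next recognition comes easier.

```python
# Toy "concept cell": fires when enough feature neurons are active,
# and strengthens connections whenever features fire together.

class ConceptCell:
    def __init__(self, features, threshold=0.6):
        # connection strength from each feature neuron to this concept
        self.weights = {f: 1.0 for f in features}
        self.threshold = threshold  # fraction of total weight needed to fire

    def fires(self, observed):
        # sum the weights of the features that are currently active
        active = sum(w for f, w in self.weights.items() if f in observed)
        return active / sum(self.weights.values()) >= self.threshold

    def reinforce(self, observed, rate=0.1):
        # Hebbian-style strengthening: features that fire together grow stronger
        for f in observed:
            if f in self.weights:
                self.weights[f] += rate

jennifer = ConceptCell({"blue eyes", "blonde hair", "angular chin"})
print(jennifer.fires({"blue eyes", "blonde hair"}))  # enough details point to her
print(jennifer.fires({"blue eyes"}))                 # too few cues to fire

# Merely "thinking about" her strengthens the links for next time.
jennifer.reinforce({"blue eyes", "blonde hair"})
```

The last line is the punchline of the paragraph above: after each co-activation, the same partial cues clear the firing threshold by a wider margin.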

From here, it’s a reasonable jump to the question of memory. As Seung describes the process, it’s a matter of strong neural connections becoming even stronger through additional associations (Jennifer and Brad Pitt, for example), repetition (in all of those tabloids?), and ordering (memory is aided by placing, for example, the letters of the alphabet in order). No big revelations here–that’s how we all thought it worked–but Seung describes the ways in which scientists can now measure the relative power (the “spike”) of the strongest impulses. Much of this comes down to the image resolution finally available to long-suffering scientists who had the theories but not the tools necessary for confirmation or further exploration.

Next stop: learning. Here, Seung focuses on the random impulses first experienced by the neurons; then, through repetition of patterns, a bird song emerges. Not quickly, nor easily, but as a result (in the case of the male zebra finches he describes in an elaborate example) of tens of thousands of attempts, the song emerges and can then be repeated because the neurons are, in essence, properly aligned. Human learning has its rote components, too, but our need for complexity is greater, and so the connectome and its network of connections is far more sophisticated, and measured in far greater quantities, than those of a zebra finch. In both cases, the concept of a chain of neural responses is the key.

Watch the author deliver his 2010 TED Talk.

From here, the book becomes more appealing, perhaps, to fans of certain science fiction genres. Seung becomes fascinated with the implications of cryonics, or the freezing of a brain for later use. Here, he covers some of the territory familiar from Ray Kurzweil’s “How to Create a Mind” (recently, a topic of an article here). The topic of fascination: once we understand the brain and its electrical patterns, is it possible to save those patterns of impulses in some digital device for subsequent sharing and/or retrieval? I found myself less taken with this theoretical exploration than with the heart and soul of, well, the brain and mind that Seung explains so well. Still, this is what we’re all wondering: at what point does human brain power and computing brain power converge? And when they do, how much control will we (as opposed to, say, Amazon or Google) exert over the future of what we think, what’s important enough to save, and what we hope to accomplish?

Encouraging Schools to Join the 21st Century

Conventional public schools are “arranged to make things easy for the teacher who wishes quick and tangible results.” Furthermore, “the ordinary school impress[es] the little one into a narrow area, into a melancholy silence, into a forced attitude of mind and body.” No doubt, you’ve had a thought similar to this one: “if we teach today’s students as we taught yesterday’s, we rob them of tomorrow.”

There’s a reason for the old school language. The words were published in 1915 by educator John Dewey. A century later, the situation has begun to change, mostly, according to Brookings Institution vice president Darrell M. West, as a result of the digital revolution. Mr. West advances this theory by offering an ample range of examples in his new book, Digital Schools.

Quite reasonably, he begins by considering various attempts at school reform, education reform, open learning, shared learning, and so on. Forward-thinking educators fill their office shelves with books praising the merits of each new wave of reform, and praise the likes of the Institute of Play, but few initiatives have taken hold with the broad and deep impact that is beginning to define a digital education.

Blogs, wikis, social media, and other popular formats are obvious, if difficult to manage, innovations more familiar in student homes than in most classrooms, but the ways in which they democratize information–removing control from the curriculum-bound classroom and teacher and allowing students to freely explore–present a gigantic shift in control.

Similarly, videogames and augmented reality, whether in an intentionally educational context or simply as a different experience requiring critical thinking skills in imaginary domains, are commonplace at home, less so in class, and, increasingly, the stuff of military education, MIT and other advanced academic explorations, and, here and there, the charge of a grant-funded program at a special high school. More is on the way.

Evaluation, assessment, measurement–all baked into the traditional way we think about school–become far more efficient in digital form and offer many additional capabilities. No doubt, traditional thinkers will advance incremental innovation by mapping these new tools onto existing curriculum, perhaps a step in the right direction, however limited and short-sighted those steps may be. The big step–too large for most contemporary U.S. classrooms–is toward personalized learning and personalized assessment, but that would shift the role of the teacher in ways that some union leaders find uncomfortable.

The power behind West’s view is, of course, the velocity of change in the long-promising arena of distance learning. During the past ten years, the percentage of college students who have taken at least one distance learning course has tripled, passing 30 percent in 2011. Numbers are not available, but I suspect we’ve now passed the 50 percent mark. The book does not address the stunning growth of, for example, Coursera. Kevin Werbach, a Wharton faculty member, taught over 85,000 students in his first Coursera course (on gamification)–students from all over the world. Indeed, the current run rate is 1.4 million new Coursera sign-ups per month.

Mimi Ito is one of the more influential thinkers about modern education and its future. Click to read her bio.

The author quotes education researcher Mimi Ito:

There is increasingly a culture gap between the modes of delivery… between how people learn and what is taught. [In addition to] the perception that classrooms are boring… students [now] ask, ‘Why should I memorize everything if I can just go online?’ … Schools aren’t preparing kids for life.”

Is this a ground-breaking book? No, but it is a useful compendium of the digital changes that are beginning to take root in classrooms across America. Yes, we’re behind the times. In many ways, students are far ahead of the institutions funded to teach them. The book serves notice: no longer are digital means experimental. Computer labs are being replaced by mobile devices. Students are taking courses from the best available teachers online, and not only in college. Many students are enrolled nowhere; they are simply taking courses because they want to learn or need to learn for professional reasons. Without formal enrollment, institutions begin to lose their way. The structure is beginning to erode. Just beginning. And it can be fixed, changed, transformed, amended, and otherwise modernized. And so, the helpful author provides an extensive list of printed links for interested parties to follow.

Just out of curiosity, I called up Darrell M. West’s web page–it’s part of the Brookings Institution’s site–and, as I expected, he is a man of considerable intellect and accomplishment. And so, I hoped I would find the above-cited links as a web resource. I looked for Education under his extensive list of topics of interest but it wasn’t there. (Uh-oh?) I did find a section on his page called “Resources,” but the only available resource on that page was a 10MB photograph of Mr. West. I couldn’t find the links anywhere. Perhaps this can be changed so that all readers, educators and interested parties can make good use of his forward-thinking work.

Sorry–one more item–I just found a recent paper by Dr. West, and I thought you might find both the accompanying article and the link useful.

Here’s a look at 42-year-old John Dewey in 1902. To learn more about him, click on the picture and read the Wikipedia article.

U.S. Education by the Numbers

Today, more students are enrolled in school than ever before, and the trend is accelerating. In fact, all of the population numbers in this article have increased by about ten percent in the past ten years; in the next decade, the growth will be even faster. For the moment, let’s focus on the U.S., and, in time, in future articles, the view will expand. Note that much of this information comes from the National Center for Education Statistics.

Before we dig deeply, I suppose it’s interesting to note that there are about 99,000 public schools (including just over 5,000 charter schools), plus more than 33,000 private schools.

This year, there are slightly fewer than 50 million public school students, including about 15 million high school students in public school. Add another 5 million students in private school, including over 1 million in private high schools.

Each year, about 4 million students start high school. (Actually, the number is about 3.7-4.0 million). Remember that number: it’s the basis for some arithmetic below.

There are many ways to calculate high school graduation rates, and the Federal government has been improving the reliability, accuracy and precision of these metrics. Distribution is uneven: students in some ethnic groups, who live in some states or cities or districts, may fare better or worse (as poorly as 1 in 2 graduating, for example).

In our simple (and, perhaps, simplistic) calculation, it would be fair to assume that about 4 million students start high school and about 3 million finish high school.

About 2 in 3 males, and about 3 in 4 females, enroll in college.

Each year, just under 2 million students earn bachelor’s degrees, plus just short of a million who earn associate’s degrees. And although not everybody earns a bachelor’s degree in four years or an associate’s degree in two years, on average, the vast majority of people who graduate high school–that is, about 3 in 4 of the people who started ninth grade–earn a college degree.

What’s more, nearly one million advanced degrees (master’s, doctorate) are awarded every year. It’s fair to assume more than a half million people earn these degrees each year–or about 1 in 6 of the people who graduated high school.

Taking this into the workplace, in 2010, nearly 3 out of 4 college graduates were employed, in comparison with just over 1 in 2 people with only a high school diploma. On average, those college graduates also earned more money: over $45,000 for the college graduates compared with just under $30,000 for high school graduates without a college degree.

All of this sounds terrific, but I wonder whether the numbers are correct.

Last spring, The Atlantic published an article that placed just over 40 percent of 18-24 year olds in college, and offered a graduation rate (within a generous six years) of just 56 percent. If I understand this story correctly: roughly 20 percent of 24-year-olds earn a college degree. The Atlantic story was inspired by a report prepared by Reuters.

So why does the National Center for Education Statistics report 1.8 million bachelor’s degrees per year? (I may not be comparing [teacher's] apples to apples, but this discrepancy seems to be quite large.)

The purpose of this article is not to challenge these sources, but instead, to try to get a fix on the actual numbers, and the state of U.S. education today. Why? If 3 in 4 people are indeed graduating high school, then we’re working on the right problem, especially if the vast majority of high school grads finish college and earn a good wage. However, if only 3 in 4 high school grads are attending college, and only 1 in 2 of them are actually finishing college, then only about 1 in 4 Americans are college graduates.

Gee, those numbers seem wrong to me–the number of college graduates is probably over fifty percent–so why don’t the numbers add up?
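For what it’s worth, the cohort arithmetic above is easy to run as a quick sanity check. This is only a sketch built from the rough rates quoted in this article (3 in 4 graduating high school, the 2-in-3/3-in-4 enrollment split, and The Atlantic’s 56 percent six-year completion figure), not official statistics:

```python
# Back-of-the-envelope cohort arithmetic using the rough figures
# quoted in this article (assumptions, not official statistics).

start_high_school = 4_000_000            # students entering ninth grade each year
hs_grad_rate = 3 / 4                     # "about 4 million start ... about 3 million finish"
college_enroll_rate = (2/3 + 3/4) / 2    # crude average of the male/female enrollment rates
college_completion_rate = 0.56           # The Atlantic's six-year graduation figure

hs_grads = start_high_school * hs_grad_rate
college_entrants = hs_grads * college_enroll_rate
college_grads = college_entrants * college_completion_rate

print(f"High school graduates: {hs_grads:,.0f}")
print(f"College entrants:      {college_entrants:,.0f}")
print(f"College graduates:     {college_grads:,.0f}")
print(f"Share of original cohort with a degree: {college_grads / start_high_school:.0%}")
```

Under these assumptions, a cohort of 4 million ninth graders yields roughly 1.2 million bachelor’s degrees, about 30 percent of the cohort, which is indeed hard to square with the 1.8 million bachelor’s degrees per year that the National Center for Education Statistics reports.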

(Help?)

Outsourcing the Human Brain

(Copyright 2006 by Zelphics [Apple Bushel])

Before we start outsourcing, let’s prepare an inventory and analysis with this concept in mind:

“Our intelligence has enabled us to overcome the restrictions of our biological heritage and to change ourselves in the process. We are the only species that does this.”

And, this one:

"We are capable of hierarchical thinking, of understanding a structure composed of diverse elements arranged in a pattern, representing that arrangement with a symbol, and then using that symbol as an element in an even more elaborate configuration."

Simple though it may sound, we think in terms of not just one apple, but, say, a bushel filled with 130 medium-sized apples, enough to fill about 15 apple pies.

We call this vast array of recursively linked ideas knowledge. Only Homo sapiens has a knowledge base that itself evolves, grows exponentially, and is passed from one generation to another.

Remember Watson, the computer whose total Jeopardy! score more than doubled the scores of its two expert competitors? He (she, it?) "will read medical literature (essentially all medical journals and leading medical blogs) to become a master diagnostician and medical consultant." Is Watson smart, or simply capable of storing and accessing vast stores of data? Well, that depends upon what you mean by the word "smart." You see, "the mathematical techniques that have evolved in the field of artificial intelligence (such as those used in Watson and Siri, the iPhone assistant) are mathematically very similar to the methods that biology evolved in the form of the neocortex" (from Science Daily: "[the neocortex is part of the brain and] is involved in higher functions such as sensory perception, generation of motor commands, spatial reasoning, conscious thought, and, in humans, language").

Genius author Ray Kurzweil has spent a lifetime studying the human brain, and, in particular, the ways in which the brain processes information. You know his work: it is the basis of the speech recognition we now take for granted in Siri, telephone response systems, Dragon, and other systems. No, it’s not perfect. Human speech and language perception are deeply complicated affairs. In his latest book, How to Create a Mind: The Secret of Human Thought Revealed, Kurzweil first deconstructs the operation of the human brain, then considers the processing and storage resources required to replicate at least some of those operations with digital devices available today or likely to be available in the future. At first, this seems like wildly ridiculous thinking. A hundred pages later, it’s just an elaborate math exercise built on a surprisingly rational foundation.

Much of Kurzweil’s theory grows from his advanced understanding of pattern recognition, the ways we construct digital processing systems, and the (often similar) ways that the neocortex seems to work (nobody is certain how the brain works, but we are gaining a lot of understanding as a result of various biological and neurological mapping projects). A common grid structure seems to be shared by digital and human brains. A tremendous number of pathways turn on or off, at very fast speeds, in order to enable processing, or thought. There is tremendous redundancy, as evidenced by patients who, after brain damage, are able to relearn but who place the new thinking in different (non-damaged) parts of the neocortex.
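Kurzweil describes the neocortex as a hierarchy of pattern recognizers, each one treating the output of the level below as a symbol. As a toy illustration only (this is not Kurzweil's actual model; the patterns and names here are invented for the example), lower-level symbols can compose into higher-level ones like this:

```python
# Toy sketch of hierarchical pattern recognition: each level maps a
# sequence of lower-level symbols to one higher-level symbol, so letters
# become words, and words become concepts.

LEVELS = [
    # level 1: character sequences -> word symbols
    {("a", "p", "p", "l", "e"): "APPLE", ("p", "i", "e"): "PIE"},
    # level 2: word sequences -> concept symbols
    {("APPLE", "PIE"): "DESSERT"},
]

def recognize(sequence, levels):
    """Repeatedly replace known sub-sequences with their higher-level symbol."""
    for patterns in levels:
        out, i = [], 0
        while i < len(sequence):
            for pattern, symbol in patterns.items():
                if tuple(sequence[i:i + len(pattern)]) == pattern:
                    out.append(symbol)
                    i += len(pattern)
                    break
            else:
                # no pattern matched here; pass the symbol up unchanged
                out.append(sequence[i])
                i += 1
        sequence = out
    return sequence

print(recognize(list("applepie"), LEVELS))  # ['DESSERT']
```

The real neocortex (and Kurzweil's proposed digital version) works probabilistically and learns its patterns, but the recursive symbol-on-symbol structure is the point being illustrated.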

Where does all of this fanciful thinking lead? Try this:

"When we augment our own neocortex with a synthetic version, we won’t have to worry about how much additional neocortex can physically fit into our bodies and brains as most of it will be in the cloud, like most of the computing we use today."

What’s more:

"In order for a digital neocortex to learn a new skill, it will still require many iterations of education, just as a biological neocortex does today, but once a digital neocortex somewhere and at some time learns something, it can share that knowledge with every other digital neocortex without delay. We can each have our own neocortex extenders in the cloud, just as we have our own private stores of personal data today."

So the obvious question is: how soon is this going to happen?

2023.

[Image: TED Talk, “A Brain in a Supercomputer”]

Skeptical? Click the image and watch the 2009 TED Talk by Henry Markram. It’s called “A Brain in a Supercomputer.”

In terms of our understanding, this video is already quite old. Kurzweil: “The spatial resolution of noninvasive scanning of the brain is improving at an exponential rate.” In other words, new forms of MRI and diffusion tractography (which traces the pathways of fiber bundles inside the brain) are among the many new tools that scientists are using to map the brain and to understand how it works. In isolation, that’s simply fascinating. Taken in combination with equally ambitious, long-term growth in computer processing and storage, our increasingly nuanced understanding of brain science makes increasingly human-like computing processes more and more viable. Hence Watson on Jeopardy!, or, if you prefer, Google’s driverless cars, which must navigate through so many real-time decisions and seem to be accomplishing these tasks with greater precision and safety than their human counterparts.

Is the mind a computer? This is an old argument, and although Kurzweil provides both the history and the science / psychology behind all sides of the argument, nobody is certain. The tricky question is defining consciousness, and, by extension, defining just what is meant by a human mind. After considering these questions through the Turing Test, ideas proposed by Roger Penrose (video below), faith and free will, and identity, Kurzweil returns to the more comfortable domain of logic and mathematics, filling the closing chapter with charts that promise the necessary growth in computing power to support a digital brain that will, during the first half of this century, redefine the ways we think (or, our digital accessory brains think) about learning, knowledge and understanding.

Closing out, some thoughts from Penrose, then Kurzweil, both on video:

The Multiplier Effect

Quickly now… If you multiply 633 by 11, what’s the answer?

No doubt, you recognize the pattern, and you may recall the mental math process:

633 x 10, plus 633 x 1, or 6,330 plus 633, or 6,963, which is the answer (or, in terms used by math teachers, the “product”).

There is another way to solve the problem, a faster way that assures fewer computational errors, and does not involve any sort of digital or mechanical device. It does, however, involve a simple rule and a different way to write the problem down.

The rule is: “write down the number, add the neighbor.” The asterisk just above each number is there only to help you to focus. If you prefer, think of it as a small arrow.

Here’s how it works:

[Diagram: multiplying by 11 using “write down the number, add the neighbor”]

Try multiplying 942 x 11 and you’ll quickly get the hang of it.

Do it once more, this time with a much larger number: 8,562,320 x 11. It goes quickly, as you’ll see.
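If you would rather see the rule as code, here is a sketch in Python (the carry handling and the function name are my own; the rule itself is Trachtenberg's):

```python
# Trachtenberg "times 11" rule: working right to left, write down each
# digit plus its right-hand neighbor, carrying as needed.

def times_eleven(n: int) -> int:
    # pad with a leading 0 so the leftmost digit also gets a "neighbor" step
    padded = [0] + [int(d) for d in str(n)]
    result, carry = [], 0
    for i in range(len(padded) - 1, -1, -1):
        neighbor = padded[i + 1] if i + 1 < len(padded) else 0
        total = padded[i] + neighbor + carry
        result.append(total % 10)
        carry = total // 10
    if carry:
        result.append(carry)
    return int("".join(str(d) for d in reversed(result)))

print(times_eleven(633))       # 6963
print(times_eleven(8562320))   # 94185520
```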

Multiplying by 12 is just as easy, but the rule changes to: “double the number, add the neighbor.” Here, my explanation includes specific numbers.

[Diagram: multiplying by 12 using “double the number, add the neighbor”]

In fact, there is a similar rule for multiplication by any number from 1 to 12. And there are rules for quickly adding long, complicated columns of numbers, as there are for division, square roots, and more.
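The "double the number, add the neighbor" rule for 12 translates just as directly into code; here is a sketch (the digit plumbing is my own, the rule is Trachtenberg's):

```python
# Trachtenberg "times 12" rule: working right to left, double each digit,
# add its right-hand neighbor, and carry as needed.

def times_twelve(n: int) -> int:
    # pad with a leading 0 so the leftmost digit also gets a "neighbor" step
    padded = [0] + [int(d) for d in str(n)]
    result, carry = [], 0
    for i in range(len(padded) - 1, -1, -1):
        neighbor = padded[i + 1] if i + 1 < len(padded) else 0
        total = 2 * padded[i] + neighbor + carry
        result.append(total % 10)
        carry = total // 10
    if carry:
        result.append(carry)
    return int("".join(str(d) for d in reversed(result)))

print(times_twelve(942))   # 11304
```

Only the per-digit step differs between the two rules, which is what makes the whole system learnable without multiplication tables.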

These rules were developed by a man facing his own demise in the Nazi camps during the Second World War. Danger was nothing new to him…this is the story and the enduring legacy of Jakow Trachtenberg, who first fled the wrath of the Communists in his native Russia, then became a leading academic voice for world peace. His book, Das Friedensministerium (The Ministry of Peace), was read by FDR and other world leaders. His profile was high; capture was inevitable. He made it out of Austria, got caught in Yugoslavia, and was sentenced to death at a concentration camp. To maintain his sanity, Trachtenberg developed a new system for mathematical calculation. Paper was scarce, so he used it mostly for proofs. The rest, he kept in his head.

Madame Trachtenberg stayed nearby, in safety. She bribed officials, pulled strings, and managed to get Jakow moved to Dresden, which was in chaos, allowing him to escape. Then he was caught again, and moved to Trieste. More bribes and coercion from Madame. He escaped. The couple maneuvered into a more normal existence, beginning at a refugee camp in Switzerland. By 1950, they were running the Mathematical Institute in Zurich, teaching young students a new way to think about numbers. A system without multiplication tables. A system based upon logic. A system that somehow survived.

A system that, against all odds, made it into my elementary classroom. One classroom in the New York City school district. For one year. The parents were certain that the teacher was making a terrible mistake, that the students in my class, myself included, would never be able to do math in the conventional way again. Of course, we learned a lot more than an alternative form of arithmetic.

And now, after decades out of print, in an era when arithmetic hardly matters because of calculators and computers, the original book is back in print. The brilliance of the system remains awesome, and the book is worth reading just to understand how Trachtenberg conceived an entirely fresh approach under the most extraordinary circumstances.


The Mind of Howard Gardner

One of my personal heroes, pictured in his Harvard bio. Few academics have captured my imagination, and affected my thinking, as consistently or as deeply as Howard Gardner.

Harvard Professor Howard Gardner has written more than a dozen books with the word “mind” in the title. Few researchers have spent so much of their professional careers thinking about how our minds work, whether our minds might be better trained, and whether our minds can be put to better use. He’s a brilliant thinker, and I have thoroughly enjoyed reading his evolving work over these past few decades.

Earlier this year, with co-author Emma Laskin, Gardner republished Leading Minds: An Anatomy of Leadership with a new introduction, and that led me to 5 Minds for the Future, a slim book that captures his evolving philosophy in a succinct, deeply meaningful way.

From the start, Gardner’s 5 Minds for the Future is more contemporary, acknowledging the tangentially overlapping work of Daniel Pink and Stephen Colbert (“truthiness”), and the enormous changes brought about by globalization. Gardner is famous for his theories about multiple intelligences (“M.I.” these days), but M.I. is not what this book is about. Instead, Gardner presents his case as a progression from basic to higher-level thinking, and his hope that we will climb the evolutionary ladder as a collective enterprise.

He begins by revisiting one of his favorite themes, the disciplined mind (which provided both title and subject matter for his 1999 book). Here, the goal is mastery, which requires a minimum of a decade’s intense participation, a thorough examination of all relevant ideas and approaches, deep study to understand both the facts and the underlying fundamentals, and interdisciplinary connections. This is serious work, and it must be accomplished despite the sometimes crazy ways that schools think about learning, and the equally crazy ways that the workplace may value or advance those with growing expertise. The disciplined mind does not simply accept what has been written or taught. Instead, the disciplined mind challenges assumptions, and digs deep so that it may apply intelligence when conventional thinking does not produce valuable results. No surprise that Gardner is deeply critical of those who invest less than a decade in any serious endeavor, or those who fake it in other ways.

Next up the ladder is the synthesizing mind, which accomplishes its work by organizing, classifying, and expanding its base of knowledge, borrowing from related (and unrelated) fields. Placing ideas into categories is an important step up the ladder because the process requires both (a) a full understanding of specific disciplines and how they relate to one another, and (b) the means to convey these ideas to others. And so, Gardner views the Bible (a collection of moral stories), Charles Darwin’s theories, Picasso’s Guernica, and Michael Porter’s writings about strategy as related endeavors. At first, this seems to be a stretch. Then again, each of these is a bold combination of ideas based upon a complete understanding of a domain–(a) above–conveyed in a way that connects people to the synthesized ideas (b).

You may know Mihaly Csikszentmihalyi as the author of the excellent book FLOW, but his best work may be a book simply entitled CREATIVITY.

Then, there’s the creating mind. At this stage, the progression begins to make a lot of sense. Novel approaches are not based upon random ideas that may or may not work. Instead, the creating mind grows from deep study of a specific domain in a disciplined manner, followed by various attempts to organize that knowledge in ways that propel an argument forward. At a certain point, the argument has been advanced, and the opportunity for new thinking presents itself. Many creative professionals are required to advance new ideas without the requisite discipline, and so, our society generates lots of ephemeral stuff. In the creative space, Gardner’s thinking has been affected by Mihaly Csikszentmihalyi, who believes:

"creativity only occurs when–and only when–an individual or group product is recognized by the relevant field as innovative, and, sooner or later, exerts a genuine, detectible influence on subsequent work in that domain."

I would argue that the respectful mind ought to precede the disciplined mind as the ladder’s first rung, and Gardner provides ample evidence to support my argument. For one thing, the respectful mind is the only one of Gardner’s five minds that can be nurtured beginning at birth. What’s more, the ability to “understand and work effectively with peers, teachers and staff” would seem to be a prerequisite for any disciplined approach to learning and personal development. The whole chapter is nicely encapsulated by a sentence from renowned preschool teacher Vivian Paley:

You can’t say ‘you can’t play.'”

A decade ago, Gardner, Csikszentmihalyi, and William Damon wrote a book called Good Work, and this effort has expanded into The Good Work Project. Central to this effort is the ethical mind, which carries a meaning well beyond the ethical treatment of others. Here, we begin to touch upon the idea of professional or societal calling, and one’s role within a profession or domain. It begins with doing the best work possible–that is, the work of the highest quality, as well as work of redeeming social value–but it’s not just the work itself, it’s the way that you apply yourself to the job at hand. Here, Gardner covers the diligent newcomer, the mid-life worker who continues to pursue excellence every day, the older mentor or trustee whose role is to encourage others to build beyond what has already been accomplished.

In less than 200 pages, Gardner accomplishes a great deal. If time permits you to read only two Gardner books, I would start with Frames of Mind, which explains his theory about multiple intelligences, then jump to 5 Minds for the Future. After these two, you’ll probably want more. His book about leadership, mentioned above and discussed below, is certainly worthwhile. And Good Work will fill your head with wonderful ideas and inspiration for all you could do to help make the world a better place.

BTW: If you want to watch Gardner discuss 5 Minds for the Future, you’ll find his 45-minute video here.

As for Leading Minds, it’s an extraordinary book, a collection of analytical biographies written as parts of a whole, a cognitive view of leaders and leadership. He examines leaders by taking apart their fundamental identity stories: who they are, how their domain and influence grew, how and why they succeeded, how and why they were unable to accomplish their ultimate goals. This is not a book whose core ideas can be reduced to a few bullet points. Instead, it’s a few hundred pages of reflection on the nature of leadership shown through the examples of Albert Einstein, Mahatma Gandhi, Martin Luther King, Jr., Alfred P. Sloan, Eleanor Roosevelt, and a half dozen other 20th century figures. The significance of some names is fading; it was disappointing to find that this revised edition of a 1995 work did not include anyone who made his or her mark in the 21st century.

Infographic: US Education Spending vs. Results

Doing some research, I came upon this colorful infographic that compares educational investment and results in a dozen different countries. No big surprises, but it’s easy to follow. It’s clear that Mexico spends a very small amount per student and achieves only modest results, and it makes sense to see France in the middle of per-capita spending and also in the middle of the results. Clearly, the US and the UK are out of whack–spending is high, but their results are middling. Why the mismatch? And why is the US’s purple circle so much larger than any other circle? Population accounts for only part of the reason why.

U.S. Education versus the World via Master of Arts in Teaching at USC
