What Kickstarter Has Kickstarted

kickstarter

If the graphic appears a bit fuzzy, visit the Fast Company site for the article, scroll down, and click on the yellow-and-black infographic. You can magnify the infographic on their site, but nowhere else.

I missed this Fast Company article when it was published in April. Most useful was the infographic. In it, I learned:

  • Among creators, film and video is the most popular type of Kickstarter project, but this category ranks second among backers, and sixth in the list of successes.
  • The category most likely to be funded: projects related to dance (but there aren’t many of them, and there aren’t many people who back dance). Theater projects come in second, music third, and art fourth. Both music and art are strong in number of projects and also in success rate, ranking fourth and fifth on the list.
  • There are lots of game projects, and lots of people who back game projects, but in terms of project success, odds are not so good. Still, games have generated more revenue than any other category.
  • Video matters. More than four out of five Kickstarter projects are pitched with a video.
  • For the past several years (that is, for as long as Kickstarter has been around), just over 2 in 5 projects have been successfully funded.

There’s much more in the article.

Outsourcing the Human Brain

(Copyright 2006 by Zelphics [Apple Bushel])

Before we start outsourcing, let’s prepare an inventory and analysis with this concept in mind:

Our intelligence has enabled us to overcome the restrictions of our biological heritage and to change ourselves in the process. We are the only species that does this.”

And, this one:

We are capable of hierarchical thinking, of understanding a structure composed of diverse elements arranged in a pattern, representing that arrangement with a symbol, and then using that symbol as an element in an even more elaborate configuration.”

Simple though it may sound, we may think in terms of not just one apple but, say, a bushel filled with 130 medium-sized apples, enough to fill about 15 apple pies.

We call this vast array of recursively linked ideas knowledge. Only Homo sapiens has a knowledge base that itself evolves, grows exponentially, and is passed from one generation to another.

Remember Watson, the computer whose total Jeopardy! score more than doubled the scores of its two expert competitors? He (she, it?) “will read medical literature (essentially all medical journals and leading medical blogs) to become a master diagnostician and medical consultant.” Is Watson smart, or simply capable of storing and accessing vast stores of data? Well, that depends upon what you mean by the word “smart.” You see, “the mathematical techniques that have evolved in the field of artificial intelligence (such as those used in Watson and Siri, the iPhone assistant) are mathematically very similar to the methods that biology evolved in the form of the neocortex.” (From Science Daily: the neocortex is the part of the brain “involved in higher functions such as sensory perception, generation of motor commands, spatial reasoning, conscious thought, and in humans, language.”)

Genius author Ray Kurzweil has spent a lifetime studying the human brain, and, in particular, the ways in which the brain processes information. You know his work: it is the basis of the speech recognition we now take for granted in Siri, telephone response systems, Dragon, and other systems. No, it’s not perfect. Human speech and language perception are deeply complicated affairs. In his latest book, How to Create a Mind: The Secret of Human Thought Revealed, Kurzweil first deconstructs the operation of the human brain, then considers the processing and storage resources required to replicate at least some of those operations with digital devices available today or likely to be available in the future. At first, this seems like wildly ridiculous thinking. A hundred pages later, it’s just an elaborate math exercise built on a surprisingly rational foundation.

Much of Kurzweil’s theory grows from his advanced understanding of pattern recognition, the ways we construct digital processing systems, and the (often similar) ways that the neocortex seems to work (nobody is certain how the brain works, but we are gaining a lot of understanding as a result of various biological and neurological mapping projects). A common grid structure seems to be shared by digital systems and the human brain. A tremendous number of pathways turn on or off, at very fast speeds, in order to enable processing, or thought. There is tremendous redundancy, as evidenced by patients who, after brain damage, are able to relearn but who place the new thinking in different (non-damaged) parts of the neocortex.

Where does all of this fanciful thinking lead? Try this:

When we augment our own neocortex with a synthetic version, we won’t have to worry about how much additional neocortex can physically fit into our bodies and brains as most of it will be in the cloud, like most of the computing we use today.”

What’s more:

In order for a digital neocortex to learn a new skill, it will still require many iterations of education, just as a biological neocortex does today, but once a digital neocortex somewhere and at some time learns something, it can share that knowledge with every other digital neocortex without delay. We can each have our own neocortex extenders in the cloud, just as we have our own private stores of personal data today.”

So the obvious question is: how soon is this going to happen?

2023.

TED-neocortex

Skeptical? Click the image and watch the 2009 TED Talk by Henry Markram. It’s called “A Brain in a Supercomputer.”

In terms of our understanding, this video is already quite old. Kurzweil: “The spatial resolution of noninvasive scanning of the brain is improving at an exponential rate.” In other words, new forms of MRI and diffusion tractography (which traces the pathways of fiber bundles inside the brain) are among the many new tools that scientists are using to map the brain and to understand how it works. In isolation, that’s simply fascinating. Taken in combination with equally ambitious, long-term growth in computer processing and storage, our increasingly nuanced understanding of brain science makes human-like computing processes more and more viable. Hence, Watson on Jeopardy!, or, if you prefer, Google’s driverless cars, which must navigate through so many real-time decisions and seem to be accomplishing these tasks with greater precision and safety than their human counterparts.

Is the mind a computer? This is an old argument, and although Kurzweil provides both the history and the science and psychology behind all sides of the argument, nobody is certain. The tricky question is defining consciousness, and, by extension, defining just what is meant by a human mind. After considering these questions through the Turing Test, ideas proposed by Roger Penrose (video below), faith and free will, and identity, Kurzweil returns to the more comfortable domain of logic and mathematics, filling the closing chapter with charts that promise the necessary growth in computing power to support a digital brain that will, during the first half of this century, redefine the ways we think (or our digital accessory brains think) about learning, knowledge and understanding.

Closing out, some thoughts from Penrose, then Kurzweil, both on video:

Big Ideas Simply Explained

Three subjects that I can never seem to understand as completely as I would like:

  • Philosophy
  • Economics
  • Psychology

Whenever I read a book about any of these subjects, I feel like a student, which means I am reading because duty requires me to complete the book. The subjects interest me, but too many of the books I have read on them are dreary, slow-moving, and too dense with ideas for any reasonable person to sort out and retain. Pictures help, but many of the ideas held within these disciplines are difficult to illustrate with anything better than wordy diagrams.

A year or so ago, I noticed a series of three books put together by Dorling Kindersley (DK)’s collaborative teams in the UK and India. They’ve got the formula right, and as a result, I have spent the last year happily browsing, and learning, from:

  • The Philosophy Book: Big Ideas Simply Explained
  • The Economics Book: Big Ideas Simply Explained
  • The Psychology Book: Big Ideas Simply Explained

A month or so ago, the same company released The Politics Book: Big Ideas Simply Explained, and at some point, I’ll get to that one, too. Right now, I’m still working my way through the first three volumes (about 1,000 pages total).

So what’s so special?

First, there is no single author. The collaborative approach focuses on presentation, clarity and consistency. This is less the work of a brilliant psychology teacher, more like a good old-fashioned browse through, say, The World Book Encyclopedia from days of old. The type treatments are bold. There are pull-out quotes. There is color. No single idea runs more than a few pages. Everything is presented in a logical flow. There are boxes filled with biographical details. There is a clear statement of predecessor ideas and influences for each idea, and there is an equally clear statement about those in the future who built upon each idea. There are color pictures and diagrams. It’s tidy, presented for smart adult readers but certainly suitable research material for any school report.

The Philosophy Book is written by four academics and two writers: Will Buckingham is a philosopher and novelist with a special interest in the interplay between philosophy and narrative storytelling. Marcus Weeks is a writer and author. Clive Hill is an academic focused on intellectualism in the modern world. Douglas Burnham is a philosophy professor and prolific writer on the subject. Peter J. King is a doctor of philosophy who lectures at Pembroke College, University of Oxford. John Marenbon is a Fellow of Trinity College, Cambridge, UK, whose expertise is medieval philosophy. Taken as a group, they’ve got their philosophical bases covered (each of the books is put together by a team with similar skills). Marcus Weeks is the connection between all three books.

The bright yellow Philosophy book introduces the whole idea in comfortable language:

Philosophy is…a chance simply to wonder what life and the universe are all about…Philosophy is not so much about coming up with the answers to fundamental questions as it is about the process of trying to find out those answers, using reasoning rather than accepting…conventional views or conventional authority.”

So begins an introductory essay that introduces debate and dialogue, existence and knowledge, logic and language, morality, religion, and systems of thought and beliefs. A red color burst is the bridge into a timeline that begins the conversation in 624 B.C.E. And so, early on, we meet Pythagoras, who should be famous for more than his geometric theorem. Around 500 B.C.E.–that’s about 2,500 years ago–Pythagoras developed a remarkable idea: that everything in the universe conforms to mathematical rules and ratios, and he determined that this was true both of forms and of ideas. Pythagoras was the leader of a religious cult, in which he was the messiah, and his followers thought of his work as revelations. Here was a man for whom reasoning was the secret of the universe. He wrote, or said:

There is geometry in the humming of the strings, there is music in the spacing of the spheres.”

And:

Reason is immortal. All else is mortal.”

Turn the page and there’s Siddhartha Gautama and Buddhism’s four noble truths, explained in terms that anybody can understand, followed by the Eightfold Path presented in the Dharma Wheel. Siddhartha is covered in four good pages, and then, it’s time for Confucius and his Five Conscious Relationships.

All three of these men–Pythagoras, Siddhartha and Confucius–lived and worked around 500 B.C.E. More or less, they were contemporaries. A century later, philosophy turns to what is later called science, as Democritus and Leucippus arrive at the idea of atoms and the emptiness of space. (Seemed very early to me, too!) At about the same time, this from Socrates:

The life which is unexamined is not worth living.”

Jumping ahead to the middle of the book, Britain’s David Hume is considering human nature in the mid-1700s, and, in particular, the ways we cobble together facts:

In our reasonings concerning fact, there are all imaginable degrees of assurance. A wise man therefore proportions his beliefs to the evidence.”

Thinking in the present day, Palestinian philosopher Edward Said criticizes imperialism, Australian Peter Singer advocates for animal rights, and Bulgarian-born French philosopher Julia Kristeva questions the relationship between feminism and power. It’s a large field, and with The Philosophy Book, it’s possible for the average person to navigate with greater confidence than before.

The other two books are equally good.

The Economics Book begins with an article about Thomas Aquinas’s thoughts on prices, markets, and morality, then takes up the provision of public goods with thoughts from David Hume, whose words from the 1700s certainly resonate today:

Where the riches are engrossed by a few, these must contribute very largely to the supplying of the public necessities.”

Hume is among the few whose ideas appear in more than one of these volumes. And–I just noticed–The Philosophy Book tends toward stories about the people behind the ideas, while The Economics Book tends more toward the ideas themselves, with less frequent stories about the people behind them (often because economic ideas are credited to multiple sources, I suppose). Making our way through The Age of Reason (“man is a cold, rational calculator;” “the invisible hand of the market brings order”); on to economic bubbles (beginning with tulip mania in the 1630s); game theory and John (A Beautiful Mind) Nash; market uncertainty, Asian Tiger economies, the intersection of GDPs and women’s issues, inequality and economic growth, and more. Great book, but a bit slower going than Philosophy.

Third in the trilogy is the bright red volume, The Psychology Book. As early as 190 C.E., Galen of Pergamon (in today’s Turkey) is writing about the four temperaments of personality–melancholic, phlegmatic, choleric, and sanguine. Rene Descartes bridges all three topics–Philosophy, Economics and Psychology overlap with one another–with his thinking on the body and the mind as wholly separate entities. We know the name Binet (Alfred Binet) from the world of standardized testing, but the core of his thinking has nothing whatsoever to do with testing as we know it. Instead, he believed that intelligence and ability change over time. In his early testing, Binet intended to capture a helpful snapshot of one specific moment in a person’s development. And so the tour through human (and animal) behavior continues with Pavlov and his dogs, John B. Watson and his use of research to build the fundamentals of advertising, B.F. Skinner’s birds, Solomon Asch’s experiments to uncover the weirdness of social conformity, Stanley Milgram’s creepy experiments in which people inflict pain on others, Jean Piaget on child development, and work on autism by Simon Baron-Cohen (he’s Sacha Baron Cohen’s cousin).

When I was in high school and college, I was exposed to all of this stuff, but only a small amount remained in my mind. Perhaps that was because I was also trying to read the complete works of Shakespeare, a book a week of modern utopian fiction, and The Canterbury Tales, while studying geology at the same time. In high school and college, these topics were just more stuff to plow through. No context, no life experience, no connection to most of the material. Now, as an adult, it’s different. Like everyone I know, and everyone you know, I’m still juggling way too much in an average week, but I can now read this material with a real hope of understanding and retaining it. Cover to cover, times three, these books will take you a year or two, but… without a test the next morning, you’ll be surprised how interesting philosophy, psychology and economics turn out to be. Just read them in your spare time, and behold (great word, “behold”) the ways in which humans have put it all together over several millennia. It’s a terrific story!

Big Data, Bigger Ideas

face pic human face

Every animate and inanimate object on earth will soon be generating data, including our homes, our cars, and yes, even our bodies”— Anthony D. Williams on the back of a big book entitled The Human Face of Big Data

From the dawn of civilization until 2003, humankind generated five exabytes of data. Now, we produce five exabytes every two days.” — Eric Schmidt, Executive Chairman, Google

The average person today processes more data in a single day than a person in the 1500s did in an entire lifetime.

Big Data is much more than big data. It’s also the ability to extract meaning: to sort through masses of numbers and find the hidden pattern, the unexpected correlation, the surprising connection. That ability is growing at astonishing speed; it won’t be long before Amazon’s ability to dazzle customers by suggesting just the right book will seem as quaint as our ancestors’ amazement at horseless carriages.” — Dan Gardner, from the book’s introduction

Clearly, big data is a massive idea. Let’s see if we can’t break it down, if not by components, then, at least, by illustrations of classes and contexts.

The connection between data collection and pattern recognition is not new. In fact, we know the earliest example, which still exists, in book form, in a small, private Library of Human Imagination in Ridgefield, Connecticut. The book is called Bills of Mortality, and it records the weekly causes of death for London in 1664. This data was used to study the geographic (block-by-block) growth of the plague, and to take measures to prevent its future growth.

Two hundred gigabytes per day may not seem like much data, not in the days when you can buy a terabyte drive from Staples for a hundred bucks or so, but collect that much data day in and day out, for a few years, and the warehouse becomes a busy place. That’s what MIT Media Lab’s Deb Roy did to learn how his newborn son learned language. The work was done at home with eleven cameras and fourteen microphones recording the child’s every move, every sound. The recording part of the project is over–their son is now seven years old–but analysis of “unexpected connections between the routines of everyday life and how one child learned his first words” continues as a research project.

On the other end of the age scale, there’s Magic Carpet, now in prototype. The carpet contains sensors and accelerometers. When installed in the home of, say, a senior, the carpet observes, records, and learns the person’s typical routine, which it uses as a baseline for further analysis. Then, “the system checks constantly for sudden (or gradual) abnormalities. If Mom is moving more slowly than usual, or it’s 11 a.m. and her bedroom door still hasn’t opened, the system sends an alert to a family member or physician.”

Often, big data intersects with some sort of mapping project. Camden, New Jersey’s Doctor Jeffrey Brenner “built a map linking hospital claims to patient addresses. He analyzed patterns of data, and the result took him by complete surprise: just one percent of patients, about 1,000 people, accounted for 30 percent of hospital bills because these patients were showing up in the hospital time after time…a microcosm for what’s going on in the whole country (in) emergency room visits and hospital admissions…” Subsequently, he established the Camden Coalition of Healthcare Providers to help address this “costly dysfunction.” He collected the data, analyzed it, then brought out meaningful change at a local level.

One of the many superb photographs depicting the intersection between human life and technology use. The book was put together by Rick Smolan, an extraordinary photographer, curator and compiler whose past work includes A Day in the Life of America and other books in that series.

Yes, there’s a very scary dark side. Bad people could turn off 60,000 pacemakers via their Internet connections. In the 2008 Mumbai terrorist attack, which killed 172 people and injured 300 more, the attackers coordinated in real time with BlackBerrys, night vision goggles, satellite phones and other devices.

If you control the code, you control the world. There has not been an operating system or a technology that has not been hacked.

Fortunately, the good guys have tools on their side, too. The $40 million Domain Awareness System in Manhattan includes “an array of 3,000 cameras known as ‘The Ring of Steel’” that monitor lower and midtown Manhattan, as well as license plate readers, radiation detectors, relevant 911 calls, arrest records, related crimes, and vast files on characteristics such as tattoos, body marks, teeth, and even limps. “They can also track a suspicious vehicle through time to the many locations where it has been over previous days and weeks.”

Google’s self-driving car is safer than a human-controlled vehicle because the digital car can access and process far more information more quickly than today’s humans.

By 2020, China will complete Compass/Beidou-2. This advanced navigation system will outperform the current (and decades old) GPS system. Greater precision will be used for public safety (emergency response, for example), commercial use (fishing, automotive), and, inevitably, for far more productive war.

Data can mean the difference between life and death when the weather turns ugly. Thousands of lives are saved each year by weather warnings in wealthier countries. Yet thousands of lives are lost in poor ones when monsoons, tornadoes and other storms strike with little public warning, an intensifying threat as the planet warms.

If you’ve ever wondered what Amazon’s true business is, or why it uses the name of a gigantic river, the answer is big data. Ultimately, Amazon intends to become a public utility for computing services. Take a careful look at Amazon Prime and you will see a prototype. The streaming side of PBS and Netflix are among the enterprises enabled by Amazon’s big data operations.

For FedEx, “the information about the package is as important as the package itself.”

Whether it’s eliminating malaria or making art, text messaging for blood donors or tracking asteroids, the future will be defined by the collection, analysis and use of big data. It will shape our individual knowledge about our own bodies, our children’s growth and our parents’ health, our collective tendencies for public good, safety, and bad behavior. It will be embedded in robots and intelligent systems that may, soon, control aspects of life that we once considered wholly human endeavors. It is a change of epic proportions and yet, most of us are unaware of its importance.

The book, The Human Face of Big Data, along with its related website and app, provide a useful gateway into this brave new world.

Only Half of This Is True

Maybe not now. But soon.

Turns out, facts are like radioactive materials, and, for that matter, like anything that’s not going to last forever.

More or less, this half-life principle, developed just over 100 years ago by Ernest Rutherford, applies to facts, or, at least, a great many facts. This persuasive argument is set forth by Samuel Arbesman in a new book called The Half-Life of Facts. I especially like the subtitle: “Why Everything We Know Has an Expiration Date.” Arbesman is a math professor and a network scientist, and, as you would expect, this is a smart book. The book seems more like a musing than a fully worked-out theory, but I suspect that’s because facts are not easy to tame. Herding facts is like herding cats.

Let’s begin with “doubling times”–the amount of time it takes for something (anything) to double in quantity. The number of important discoveries, the number of chemical elements known, the accuracy of scientific instruments–these double every twenty years. The number of engineers in the U.S. doubles every ten years. Using measures fully detailed in the book, the doubling time for knowledge in mathematics is 63 years; in geology, it’s 46 years. In technology, doubling times are quite brief: 10 months for the advance of wireless (measured in bits per second), 20 months for gigabytes per consumer dollar. With sufficient data, it’s possible to visualize the trend and to project the future.
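To make those projections concrete, here is a minimal sketch of the underlying arithmetic, assuming simple exponential growth and decay (the 63-year doubling time for mathematics is from the book; the 45-year half-life below is a made-up number, purely for illustration):

```python
def amount_after(initial, doubling_time, elapsed):
    """Exponential growth: the quantity doubles every `doubling_time` years."""
    return initial * 2 ** (elapsed / doubling_time)

def fraction_still_valid(half_life, elapsed):
    """Exponential decay: half of a body of facts expires every `half_life` years."""
    return 0.5 ** (elapsed / half_life)

# Mathematical knowledge, doubling every 63 years, quadruples in 126 years:
print(amount_after(1.0, 63, 126))    # 4.0

# With a hypothetical 45-year half-life, only half of what you
# learned would still stand 45 years later:
print(fraction_still_valid(45, 45))  # 0.5
```

The same two functions cover both directions: doubling times describe how fast knowledge accumulates, while half-lives describe how fast individual facts expire.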

So that’s part of the story. Of course, it’s one thing to know something, and it’s another to disseminate that information. As the speed of communication began to exceed the speed of transportation (think: telegraph), transfer of knowledge in real time (or pretty close to real time) became the standard. But not all communications media are instantaneous. Take, for example, a science textbook written in 1999. The textbook probably required several years of development, so let’s peg the information in, say, 1997. If that textbook is still around (which seems likely), then the information is 16 years old. If it’s a geology text, the text is probably valid, but if it’s an astronomy text, Pluto is still a planet, and a lot of other discoveries are absent. And there are facts rapidly degrading, some well past their half-life.

Trans-Neptune

Although you can click to make the image bigger, Pluto still won’t be a planet…

And, then, of course, there are errors. Sometimes, we think we’ve got it right, but we don’t. Along with the dissemination of facts, our system of knowledge distribution transfers errors with great efficiency. We see this all the time on the internet: a writer picks up old or never-accurate information, and republishes it (perhaps adding some of his or her own noise along the way). An author who should know better gets lazy and picks up the so-called fact without bothering to double check, or, more tragically, manages to find the same inaccurate information in a second source, and has no reason to dispute its accuracy. Wikipedia’s editors see this phenomenon every day: they correct a finicky fact, and then, it’s uncorrected an hour later!

Precision is also an issue. As we gain technical sophistication, we also benefit from more precise measures. The system previously used for measurement degrades over time–it has its own half-life. Often, errors and misleading information are the result.

The author lists some of his own findings. One that is especially disturbing:

The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true.

And, here’s another that should make you think twice about what you see or hear as news:

The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true.

My favorite word in the book is idiolect. It is used to describe the sphere of human behavior that affects the ways each of us sends and receives information, the ways in which we understand and use vocabulary, grammar, pronunciation, accent, and other aspects of human communication. A fact may begin one way, but cultural overlays may affect the way the message is sent or received. This, too, exerts an impact on accuracy, precision, and, ultimately, the half-life of facts.

Word usage also enters into the picture. He charts the popularity of the (ridiculous) phrase “very fun” and finds a very strong increase beginning in 1980 (the graph begins in 1900, when the term was in use, but was not especially popular).

Time is part of the equation, too. The Long Now Foundation encourages people to think in terms of millennia, not years or centuries. Arbesman wrote a nice essay for WIRED to focus attention not only on big data but on long data as well.

Given all of this, I suspect that the knowledge in the brain of an expert is also subject to the half-life phenomenon. Take Isaac Newton–pretty smart guy in his time–but the year he died, most of England believed that Mary Toft had given birth to sixteen rabbits.

Last week, on CBS Sunday Morning, Louis Michael Seidman, a Georgetown University professor, commented about our strong belief in the power and relevance of the U.S. Constitution (signed 1787, since amended, but not substantially altered):

This is our country. We live in it, and we have a right to the kind of country we want. We would not allow the French or the United Nations to rule us, and neither should we allow people who died over two centuries ago and knew nothing of our country as it exists today.

CBS News Constitution

Chopping Down the Tree of Knowledge

So, during the past few weeks, I’ve been thinking a lot about the visual mapping of ideas.

Scott McCloud suggested that I have a look at the animation being done by Cognitive Media. You’ve probably seen their work. I especially enjoy their lectures, often associated with TED-Ed, and their RSA work.

Back to mapping. I’ve been struggling with the tree of knowledge–and its modern equivalent, the mind maps now found in so many places on the internet and in classrooms. This means of structuring information provides the basis for the often-awkward corporate organization chart, now as often undermined by concepts of matrix reporting (you report to me, but we both also report to a lot of other people, kinda, sorta). I’m experimenting with several mind mapping programs, and one (Curio) is especially promising. That’s coming in a later post.

A few years ago, I worked with some folks from Wharton on a new approach to organizational design in which everybody is responsible to everybody else. I liked the idea because (a) I thought it represented what happens in a modern organization with greater precision, and (b) it represented the kind of productive, modern place I wanted to work. The design was a simple circle with about 100 points–and every point was connected to every other point. The concept: simple. The illustration: ridiculously complicated and difficult to understand.
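The arithmetic explains why the drawing collapsed: when each of n points connects to every other, the diagram needs n(n-1)/2 lines. A quick sketch (the 100-point figure is the circle described above):

```python
def connections(n):
    """Lines needed when each of n points links to every other point."""
    return n * (n - 1) // 2

print(connections(100))  # 4950 lines on a single diagram
```

A concept that fits in one sentence turns into nearly five thousand overlapping lines on paper, which is why the illustration was so difficult to understand.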

So back to Cognitive Media. They’ve produced a nifty cartoon that helped me to understand the inadequacies of the tree-based design, the pluses and minuses of the network design, and the need for a universal design.

Watch it:

Dream of a Nation: Inspiring Ideas for a Better America

DONCover

Happy new year.

We are the ones we have been waiting for.

That sentence, and the ideas below, are parts of a book entitled Dream of a Nation: Inspiring Ideas for a Better America. Here are some of those ideas:

Shift the rules for campaign financing so that most of the money comes from most of the people. Currently, one-third of one percent of the people provide 90% of campaign funds. This drives special interests, and encourages a system based upon lobbyists that was never a good idea. And, while we’re on this track, let’s reduce the ratio of lobbyists to legislators: the current ratio of 23:1 (lobbyists to legislator) is probably too high by half (or more).

Let’s take control of our Federal budget (and, in time, our state budgets, too). In Porto Alegre, Brazil, a “citizen participation” approach to budgeting resulted in a 400 percent increase in school funding, and a dramatic increase in funds for clean water and sewers. Budgeting by citizen participation is a new movement that we want to encourage.

If Americans cut bottled water consumption by 80%, the bottles we use, laid end-to-end, would circle the equator just once a day. Right now, our bottles circle the equator every five hours.
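The two figures in that claim are consistent, and the arithmetic is worth a quick sanity check (my calculation, not the book's): an 80% cut leaves one-fifth of current consumption, so the interval between "laps" of the equator stretches by a factor of five.

```python
# Sanity-check the bottled-water claim: an 80% cut in consumption
# stretches the equator-circling interval by a factor of 5.
current_interval_hours = 5.0        # today, bottles circle the equator every 5 hours
cut_fraction = 0.80                 # proposed reduction in consumption
new_interval_hours = current_interval_hours / (1 - cut_fraction)
# new_interval_hours comes to 25 hours -- roughly once a day, as the text says
```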

If each of us thinks more clearly about what we spend, and where we spend it, then the people living in an average American city (say, 750,000 population) can add over 3,000 new local jobs and shift about $300,000 more into the local economy. How? By spending just 20% more on local, not national, businesses. Go to the local hardware store, the farmers’ market; don’t go to Wal-Mart or Walgreens. In the end, you’ll be richer for it. We all will.

Recognize that the high school dropout crisis is costing the U.S. at least half a trillion dollars each year. Every 26 seconds, a student drops out of school in the U.S. Encourage your legislators to take the time to fully understand the problem and to work with states and school districts to end it. The problem is not just the schools: it’s the support systems that fail to provide enough help for lower-income families. An astonishing one in four American children lives in poverty. We know how to change this: we need to focus on what worked during the LBJ years and the Clinton years, and do more of it. And, along the way, we need to invest about $360 million to fix crumbling school buildings.

This priority pays off in so many ways: GDP, crime reduction, family stability, a smaller prison population, and much more. We should no longer accept the idea that 25% of the world’s prison population resides in U.S. prisons–an outsized number for a nation with just 8 percent of the world’s people. Similarly, we should no longer accept the high price of education and the middling results we achieve with those dollars. Other countries do better because their systems are more sensible. We need to change the way we think about all of this, and we need to make it clear to legislators that this will be their last term if they do not accomplish what we need done.

Let’s get started on two substantial changes in the ways we work with our money. First, let’s start thinking in terms of a V.A.T., as most Western nations do. If the book’s calculations are correct, this should increase our available funds by about 13%. And second, let’s eliminate the 17% (average) payroll tax, reducing hiring costs for employers, as this model is proving to be more effective than our current approach. For more about this, see Get America Working! (not the easiest website for clear presentation of ideas; the book is better).

In Canada, they spend $22 per person per year on noncommercial educational media (we call it public TV, public radio). In England, they spend $80 per person per year. In the U.S., we spend $1.37 per person per year (less than a bottle of water). If we increase funding to a more reasonable level of, say, $75 per person per year (one bottle of water per week), we could get something as good as the BBC for ourselves and our children. Noncommercial matters.
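To see what those per-person figures mean in aggregate, here is a rough back-of-the-envelope calculation. The U.S. population figure (~310 million, circa 2012) is my assumption, not from the text:

```python
# Rough annual totals for the public-media funding levels discussed above.
us_population = 310_000_000            # assumption: approximate 2012 U.S. population

current_total_b = 1.37 * us_population / 1e9   # today's spending, in billions
proposed_total_b = 75.0 * us_population / 1e9  # proposed level, in billions

# current_total_b  comes to about $0.4 billion per year
# proposed_total_b comes to about $23 billion per year -- BBC-scale funding
```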

There’s a much longer discussion about carbon footprints, waste, overconsumption, and the need for cars that average 100 mpg. And another about rethinking just about everything related to the outsized defense budget and its underlying strategies. We haven’t got the health care concept down yet, but moving it into the public goods shopping cart seems to be a step in an appropriate direction.

We should all become familiar with, and promote, the 8 Global Millennium Development Goals that aim to:

  • Eradicate extreme poverty and hunger
  • Achieve universal primary education
  • Promote gender equality and women’s empowerment
  • Reduce child mortality
  • Improve maternal health
  • Combat HIV/AIDS, malaria, tuberculosis, and other diseases
  • Ensure environmental sustainability and better access to water and sanitation
  • Create a global partnership for development

So that’s a start. It’s going to be a busy year. And, I hope, one of our best.

Know What? Why?

New York Times illustration by Viktor Koen

Here’s an article by former Harvard President Lawrence Summers. It was tucked into the Education Life section of the January 22, 2012 issue, so you may have missed it.

http://www.nytimes.com/2012/01/22/education/edlife/the-21st-century-education.html?pagewanted=all

Summers is attempting to change the debate. Some key ideas:

1. Getting information is now the easy part. Twenty-first century education ought to be about processing and contextualizing information.

2. Processing and contextualizing leads to collaboration.

3. New technology allows the best teachers to be connected to every student. Everything else seems to be clutter.

4. Despite best efforts on the interactive side, most learning is passive: watch, listen, learn. Active learning is the future, but we’re just beginning to understand how and why.

5. Learn a language. Travel the world. Be global.

6. Education must shift from information dissemination to analysis.

Although he’s on the elitist side, the ideas make sense. The complete article isn’t long, but it does present ideas worth pondering.

Siri, meet the family

The UK cover is more interesting than the US cover, which is, somewhat appropriately, covered with the repeated words "The Information."

James Gleick nearly won a Pulitzer Prize for a biography of Isaac Newton, and another of Richard Feynman, the colorful physicist who helped create quantum electrodynamics and anticipated nanotechnology. His best-selling book (more than a million copies sold) is the step-by-step, scientist-by-scientist, idea-by-idea story of chaos theory entitled Chaos: Making a New Science.

Gleick’s 2011 book is called The Information. It begins with the European discovery of African talking drums in the 1840s, a percussive idea he eventually connects to Samuel Morse’s dots-and-dashes telegraph code, and then we’re off on a long tale not unlike the best of James Burke’s TV series, Connections. Gleick takes us through the development of letters and alphabets, numbers and mathematics, numerical tables and algorithms, dictionaries and encyclopedias. These stories, and their many tangents, set us up for Charles Babbage, whose boredom with the Cambridge curriculum in mathematics leads to an early, impossible-to-build, 25,000-piece machine, awesome in its analog, mechanical, Victorian design. This, then, leads to the further development of the telegraph, now caught up in a new conception called a “network” that connected much of France, for example.

By the early 20th century, MIT becomes one of several institutions concerned with the training of electrical engineers–then, a new discipline–and with it, machinery to solve second-order differential equations (“rates of change within rates of change: from position to velocity to acceleration”). This, plus the logic associated with relay switches in telegraph networks, provides MIT graduate student Claude Shannon with his thesis idea: connecting electricity with logical interactions in a network. Shannon’s path leads to Bell Labs, where he works on the “transmission of intelligence.” By 1936, a 22-year-old Cambridge graduate named Alan Turing had begun thinking about a machine that could compute.
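The quoted phrase–“rates of change within rates of change”–can be made concrete with a minimal numerical sketch (my illustration, not the differential analyzer itself): start from position samples, difference once to get velocity, difference again to get acceleration.

```python
# "Rates of change within rates of change": position -> velocity -> acceleration.
# Take positions under constant acceleration a = 2 (x = 0.5 * a * t^2),
# then recover velocity and acceleration by finite differences.
dt = 0.001
ts = [i * dt for i in range(1001)]
xs = [0.5 * 2.0 * t**2 for t in ts]                    # position samples
vs = [(x2 - x1) / dt for x1, x2 in zip(xs, xs[1:])]    # first rate of change: velocity
accs = [(v2 - v1) / dt for v1, v2 in zip(vs, vs[1:])]  # second rate of change: acceleration
print(round(accs[500], 6))  # recovers a = 2 (up to rounding)
```

A second-order differential equation relates these three quantities at once, which is exactly what the MIT machinery was built to solve.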

Well, that’s about half the book. Now, things become more complex, harder to follow, dull for all but the most interested reader. The interweaving connects DNA and memes (and, inevitably, memetics, which is the study of memes), cybernetics and randomness, quantification of information, and Jorge Luis Borges’ 1941 conception of an ultimate library with “all books, in all languages, books of apology and prophecy, the gospel and commentary on that gospel, and commentary upon the commentary upon the gospel…”

Eventually, CD-ROMs become obsolete (too much information, too little space), and we create Wikipedia and the whole of the Internet. In the global googleplex, the term “information overload” becomes inadequate. And yet, Gleick promises, it is not the quantity that matters, it is the meaning. After 420 pages of historical text, I’m still wondering what it all means–and whether the purpose is mere conveyance as opposed to deeper meaning or its hopeful result, understanding.
