Big History

After admitting that we cannot yet answer the obvious–and seemingly unanswerable–question about how and why everything began, University of Sydney Professor David Christian begins with the creation of the universe about 13 billion years ago. It’s not every historian who would admit, simply:

“About the beginning, we can say nothing with any certainty except that something happened.”

He continues explaining this madness: “We do not know why or how it appeared. We cannot say whether anything existed before. We cannot even say that there was a ‘before’ or a ‘space’ for anything to exist in; in an argument anticipated by St. Augustine in the fifth century CE, time and space may have been created at the same time as matter and energy.”

After that, the big news is not so much the Big Bang Theory (explained here in detail that is easily understood), but the shift to a neutral electrical charge, enabling the creation of atoms–first simply (mostly hydrogen and helium atoms), then in increasingly complicated ways. I like this quote:

“Hydrogen is a light, odorless gas which, given enough time, changes into people.”

Leaping ahead, the sun and the planets show up around 4.56 billion years ago, and Professor Christian helps us to understand an earth bombarded by small planetesimals (excellent word, new to me) and without much atmosphere.

“The early earth would indeed have seemed like a hellish place to humans.”

As the mix of gases shifted from methane and hydrogen sulfide to carbon dioxide, the early atmosphere would have appeared red–that is, the sky would have seemed to be red, not blue. The blue sky came later, when temperatures dropped below 100 degrees Celsius, allowing oceans to form; those new oceans absorbed much of the CO2.

How about the question of the beginning of life on earth? Again, Christian offers a coherent answer:

“Living organisms are constructed, for the most part, from compounds of hydrogen and carbon. Carbon is critical because of its astonishing flexibility. Add hydrogen, nitrogen, oxygen, phosphorus, and sulfur, and we can account for 99 percent of the dry weight of all organisms. It turns out that when conditions are right and these chemicals are abundant, it is easy to construct simple organic molecules, including amino acids (the building blocks of proteins, the basic structural material of all organisms) and nucleotides (the building blocks of genetic code).”

Of course, it’s one thing to assemble the building blocks and another to assemble those parts into a woolly mammoth, or even an amoeba. Christian admits that this is the tricky part: complexity is the term that still causes contemporary scientists to scratch their heads and wonder. The pieces seem to be there, but the complexity of their union and the spark of life may not be so simple.

Maps of Time

Book: Maps of Time: An Introduction to Big History by David Christian

Sure, multi-cell animals are interesting footnotes, but really, isn’t history all about us? Not exactly, not according to the good professor. Turns out, we are just one of many species, and in the scheme of big history, humans are a kind of, well, a kind of weed. We just keep growing, taking everything over, killing off other species, treating the whole earth as our own private amusement park. Within our lifetimes, there will be 10 billion humans on earth, an astonishing increase given that there were, in 1800, just a billion of us. Every dozen or so years, we add another billion or so.
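The arithmetic behind that weed-like growth is worth a quick check. Here’s a back-of-the-envelope sketch (my own, not the book’s), using the commonly cited UN milestone years for each additional billion:

```python
# Back-of-the-envelope check (my arithmetic, not the book's) of the
# "another billion every dozen or so years" claim, using commonly
# cited UN milestone years for each additional billion people.
milestones = {1804: 1, 1927: 2, 1960: 3, 1974: 4, 1987: 5, 1999: 6, 2011: 7}

years = sorted(milestones)
for a, b in zip(years, years[1:]):
    print(f"{a} -> {b}: billion #{milestones[b]} arrived after {b - a} years")
```

The gaps shrink from well over a century to roughly a dozen years–the weed is accelerating.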

As humans began to migrate into Europe and Asia from their original home in Africa, large animals became extinct because we hunted them down, ate them, used their hides for clothing, used their bones for tools. Giant sloths in the Americas, giant wombats and kangaroos in Australia, mammoths in Siberia. We killed them faster than they could reproduce, and so, they’re gone.

So what makes us so special? Is it really all about thumbs? Sure, thumbs make a difference, but it’s something else entirely. Professor Christian uses the term “collective learning” to describe our “pooling and sharing of knowledge…the types of knowledge that, over time, have given humans their unique power to manipulate the material world. Two factors stand out: the volume and variety of information being pooled, and the efficiency and speed with which information is shared.” Here, he’s not referring to the digital age, but to the era before we developed any meaningful form of writing, drawing, or communicating with anything resembling a modern language.

By now, we’re about halfway through the book. Next comes the domestication of animals–in which humans figure out that an animal killed for its meat is useful only in the short term, but an animal kept alive for its milk is useful in the long term. This concept of domestication applies not only to meat/milk animals, but to others whose wool, or other production, can be used not only to satisfy basic needs, but also for exchange with other humans. In time, it’s the idea of exchange that becomes the driver, resulting first in local trade between tiny settlements, then trade routes as fewer people are tied to subsistence farming or hunting/gathering and more are available (typically, more men are available) for pursuits involving trade, travel, and, in time, the accumulation of wealth.

Along the way, humans attempt to understand how and why their world works. Since the ground, the soil, the earth provides the food we eat, we begin to explain the world in terms of an earth spirit. Similarly, the sky seems to contain the origins, the mystical, the unknowable, and so, this, too, becomes a kind of spirit. In time–and mostly within a period of just a few thousand years, mostly in southwestern Asia–we gather these beliefs in the form of religions.

As we, contemporary, educated humans with every conceivable benefit, attempt to understand our world and its big history (now a common term combining history and science, by the way), Professor Christian readily admits to what he has done. He has wrapped our beliefs, our knowledge, our stories into what he calls “a modern creation myth.”

Don’t Take Your Work Too Seriously

That’s one of the six creative tips offered by Argentine artist Leandro Erlich in a wonderful, small New York Times article. The others are equally good advice, especially for creative professionals.

It’s all about the picture below, provided to The New York Times by Gar Powell-Evans, courtesy of Barbican Art Gallery. Look closely and you’ll understand what it’s all about.

Preparing for Chocolate

So here’s my list:

  • Dick Taylor
  • Amano
  • Divine
  • Moonstruck
  • Lake Champlain
  • Valrhona
  • Theo
  • Vosges
  • Jomart
  • John Kira’s
  • Maison Bouche

Those are the high-end chocolates that will become the basis for a future article about the phenomenal growth of high-end chocolates. My question to you: what else should be on this list? Which high-end chocolate bars have I missed–the ones that you see a little too often at Zabar’s, Fairway, Trader Joe’s, Whole Foods Market, Wegmans, and other foodie emporia?

While you’re munching on that question (which I hope you will answer by adding a comment below), I suppose you’ll want to know that the third edition of the very popular book, The True History of Chocolate, has been published. Written by Sophie D. Coe and Michael D. Coe, it has been in print since the mid-1990s.

According to the authors, “cacao is singularly difficult to grow. With few exceptions, it refuses to bear fruit outside a band 20 degrees north and 20 degrees south of the equator. Nor is it happy within this band of tropics if the altitude is so high as to result in temperatures that fall below 60 degrees Fahrenheit.”

I wish I could report that chocolate offers specific psychological or medical benefits, but the authors, whose research is extensive, discount these theories. Still, “some doctors claim it to be an antidepressant.”

As for the early days of chocolate, much of this history is related to the stories of the Maya and Aztec people, and the authors provide lavish accounts of their cultures, and the role of chocolate within those societies–very nearly 100 pages of information, stories, illustrations, and more.

I’ve always been skeptical of the phrase “Columbus discovered America,” but Columbus was, in fact, the very first European to encounter the cacao bean, which the natives considered quite valuable. Apparently, in 1502, Columbus took a wrong turn, ended up near what we call Guanaja, and took possession of goods, including what Ferdinand Columbus called “almonds”–“They seemed to hold these almonds at a great price,” he wrote, “for when they were brought on board ship together with their goods, I observed that when any of these almonds fell, they all stopped to pick it up as if an eye had fallen.”

As the authors ponder who might have been the very first European to actually taste chocolate, it seems certain that the first encounter came sometime in the first half of the 16th century, and that over the course of the next century, chocolate became very popular in the Spanish court, most likely the result of many interactions with the New World explorations. Gradually, chocolate made its way into the noble houses of Italy and France, and eventually, England, where it was the most popular drink until the new hot beverage, coffee, took its place. Around 1700, both chocolate and coffee were routinely served in the coffee houses so despised by royalty because they were (probably quite rightly) seen as hotbeds of political conversation.

For most of its 28-century existence, chocolate was enjoyed as a hot beverage, and sometimes as a cold one. It’s only recently that chocolate has been offered in its current form, a solid. The modern chocolate industry began in England with a Quaker entrepreneur named Joseph Fry. The Frys became quite rich as the sole supplier of chocolate to Her Majesty’s navy, at the time a formidable force at the core of the British Empire. The rival: another Quaker entrepreneur named John Cadbury, who owned a coffee-and-tea shop in Birmingham. He served a “traditional chocolate drink” at the shop, eventually expanded the operation, and won the patronage of Queen Victoria. Cadbury was an aggressive businessman, and a clever one. In 1868, Cadbury introduced the first “chocolate box,” decorated with “a painting of his young daughter Jessica holding a kitten in her arms.” Cadbury was also responsible for the first candy box specifically made for Valentine’s Day. All of this transpired at the heart of England’s Victorian era. Bear in mind that the Quakers despised alcohol–so chocolate was quite the appropriate substitute.

At about the same time, the Swiss chocolate industry took shape, with Mr. Lindt and, later, Mr. Tobler (think: Toblerone) raising the level of quality (this time, the entrepreneurs were Swiss Calvinists). In the U.S., the chocolate entrepreneur was “pious Pennsylvania Mennonite” Milton Hershey, who concerned himself with production efficiency and mass production (think: Henry Ford, a contemporary).

So here we are today, and I am beginning to prepare for an article about the world’s best chocolate bars. One certain model will be Valrhona, a small French company with just 150 employees that long supplied the restaurant trade, but not consumers, with some of the world’s finest chocolate. Their best? In the 1980s, it was called “Guanaja 1502”–and now, you know why.

Now that you know more than you may have wanted to know about chocolate, please lend a hand and comment on your favorites, especially those high-end bars that no reasonable person would buy or eat in quantity.

Let’s give unreasonable a try.

Valrhona? Just based upon this web advertisement, I’m sold. (And you?)

Literacy in the Era of the Image

“The word literacy finds its roots in the eighteenth-century word literatus, which quite literally means ‘one who knows the letters.’ But it has come to refer to much more than the ability to read an alphabet or other script. We think of literacy today as meaning ‘proficiency’–or, more broadly, the ability to comprehend and to express or articulate.”

That’s just the beginning of an interesting book by Stephen Apkon entitled The Age of the Image: Redefining Literacy in a World of Screens. As the title suggests, and as the introduction by director Martin Scorsese illustrates, there is more to 21st century literacy than comfort with the printed word. Apkon directs the Jacob Burns Film Center and its Media Arts Lab, and so, he spends a fair amount of time thinking about the ways we exchange stories, ideas, and, of course, images.

Trying to understand multimedia literacy by reading a book is, of course, absurd, but Apkon does the best he can within the limitations of the printed word. This adventure is made more complicated because of the necessary stops along the way: in order to understand moving images, one must first understand still images, and so, there is the obligatory tour through Civil War-era photography, and so on. I’m geeky on these subjects, so I found these chapters interesting, but the book doesn’t really take off until we get to the chapter about the brain’s responses to visual images, the one that’s called “The Brain Sees Pictures First.” The bottom line message: context is king. Individual images without connection to a story are filtered by the brain and rarely provoke any long-term impact. They may capture attention (the brain is constantly on the lookout for potential danger), but they are quickly and efficiently filtered out and almost always forgotten. Showing portions of Charlie Chaplin’s “City Lights” to an audience, researchers found that “…when you connect images in a fashion that creates a narrative story in a literate way, you elicit powerful responses.”

Apkon further illuminates and magnifies his arguments through extensive conversations with researchers and discussions of the latest MRI technology and its ability to measure brain impulses, and he considers our image culture from many perspectives. And yet, so much of what he writes, I think we already know from daily experience. We ignore most of the images that we see, but we recall memorable stories. With digital technology, we are as much the creator as the consumer.

Yesterday, at a wedding, I was struck by the number of photographers, and their interaction with the one professional in the room. The pro would set up a shot–a crowd shot of all of the bride and groom’s college alums–and then, he would step back so that twenty other people could take the same picture using their phone cameras. I’ve become a fan of watching the images that people capture, in real time, on their phones. Often, the results are excellent–the technology takes care of itself so there is no focus or exposure issue (most of the time). Instead, there is only composition, and because everyone sees so many images, the composition is often strikingly good.

The interesting theories explored in the first half of the book fade into a discussion of production in the second half. I suppose this is inevitable because, these days, we are all producers, directors, and cinematographers.

That’s the hard part, of course. Here, it’s expressed in book form, but we’re facing the same issue in every classroom, and with every book we read. We’ve become literate consumers, and literate creators. I read a book and then I write about it. I think about what I’ve read, and then I generate additional media. You read what I write, and perhaps what Stephen Apkon writes, and pass it along to friends, where these ideas may take on a life of their own. Memes (old usage) floating around in internet space. Some are images, some are just ideas not yet captured in visual form. Which is the relevant impetus for literacy? Is it the words I wrote so easily by punching buttons on a keyboard without leaving my chair, or is it the images that I create by lifting my phone to my eye, pressing just one button to shoot and another to send the result to the world? Or is the new proficiency of literacy the ability to discern whether any of this babble is worth even a nanosecond of your time and attention?

(No good way to end this one. Feel free to write your own ending.)

Thoughts on Mobile, Part Three: Connecting Dots 4, 5, 6

Yesterday’s post ran long, so I decided to cut it in half. Here’s the rest of it, or the third in a series of two articles. (Something like that…)

A group video call on Skype.

Dot #4: Connectivity and Sharing. Here in the 21st century, we demand not only connectivity but sharing of information in real time. Current tools fall short in whiteboard-type environments where we can see ideas and people simultaneously–where they exist, the interaction is sub-par–but this will steadily improve through Skype, Google, and new ventures. All portable devices must connect anywhere, at any time; this remains a shortcoming of some apps (Evernote, for example) and some devices (most portable computers, unless a separate wireless hot spot is generated by a nearby cell phone), which is foolish retro-thinking. The next generation of computers, tablets, all devices should include built-in connectivity for WiFi, 3G, 4G, and so on. Fortunately, these devices and their related systems work very well. And, fortunately, the technology is constantly improving to allow more throughput, faster speeds, fewer problems, and increased security. What we don’t have quite yet is a kind of super-DropBox where it’s easy to share any document on any device, regardless of whether it’s in the cloud or on a specific device. VPN (Virtual Private Network) technology resembles a solution, but what we need is a more robust, full-featured, easy-to-use system. I suspect Apple and Google are hard at work developing something to do this job–they’re already on the way with Google Docs and the new iWork set for release later this year.

Dot #5: Output. This one is confusing. I own an iPad, which doesn’t do well in an environment where printed documents are the standard. Most printers won’t talk to a tablet–though some now have email addresses for that purpose (yes, some printers have email addresses–seems confusing, I know). When I was using a portable computer, I often printed documents. With the tablet, I find myself storing documents and reading them on the tablet’s screen. Far less printing. Almost none, in fact. My output is, typically, an email to someone who wants or needs to read something I wrote. I do print some documents for reference, but printed documents are difficult to revise, so I tend to focus on digital copies. The file folders in my briefcase were once filled with paper, but now, not so much. Even handwritten notes are being replaced by the notes that I take on the tablet–when they’re in Evernote, they’re very easily shared with my other devices and with other people via email or shared settings.

Dot #6: Portable. For me, this means the device goes just about everywhere I go. In that regard, the iPhone (any smartphone, really) is a suitable solution, if one with a too-small screen. There is access to the web, email, phone, messaging, iWork documents, Evernote–the list goes on. The tablet does not go everywhere because it’s a little too big, even for someone like me who is rarely seen outside my home without a shoulder bag. There’s some minor conflict here about size: the phone ought to be larger, and the iPad needs to be both small enough to carry everywhere (the iPad Mini) and large enough to provide a full page of printed material or to create diagrams, word processing documents, spreadsheets, or presentations (the full-size iPad). At first, I was sure I would need a keyboard, so I bought one and thought I’d carry it everywhere. I don’t. In fact, I use the portable keyboard only when I have a lot of writing to do away from home–not so often, as it turns out.

How long does the device need to run between recharges? Eight hours seems pretty reasonable, more is nice.

GoalZero’s external solar charger is convenient, but this technology should be built into every portable device.

Any accessories required, as one might carry with a portable computer? Absolutely not.

One further notion about portability: the device must be easily used anywhere. With an iPad or tablet of sufficient size, that’s anywhere at all, standing, sitting, lying down. With a portable computer, a desktop surface makes the process so much more comfortable–though some people can work with the computer on their lap (I need a fat pillow to do that, and the computer tends to slide around). The tablet can be raised or lowered to adjust for eye position and lighting; this is difficult to do with a portable computer.

Of course, everyone’s needs are different, and some people use their portable device as a power tool. For most users, I suspect this is overkill–just like a gigantic SUV might be for local grocery runs and soccer practice.

What’s next? I think we’ll see keyboards becoming vestigial, and improved touch screens as the standard for portable devices. I know the devices will become faster, contain more storage, offer better screens and longer battery power, and we all know that prices will remain quite low, but will slowly rise. There will be more pocketable devices, and attempts to move away from a traditional flat screen. OLED technology, for example, allows a screen to roll up for storage. This will be the next frontier, worthy because the size of the screen is the key determinant for portability. Once that dot becomes more flexibly defined, all of the other dots line up in support. That’s the longer-term future.

For the shorter-term future, I’d look to combining my tablet and phone into a single device that works and plays nicely with a more powerful computer (which will also evolve) in my home or office.

And what about power? Since they can charge a device almost anywhere, I like solar cells. They’re small, flat, and becoming affordable. I also like charging mats. AC adapters are probably unavoidable, but better batteries make them less essential.

Sorry for the long post, and for the multiple parts. This was interesting to write, so I just kept going.

Thoughts on Mobile, Part Two: Connecting Dots

Dot #1: Input. In order to operate any sort of computer, you need to provide it with the information floating around in your brain.

Dot #2: Display. In order to process the information that you’re pouring into the computer, you need to see, hear, or otherwise sense your work-in-progress.

Dot #3: Storage. Whatever you input and display, you need to be able to keep it and change it. Also, it would be best if there were a second copy, preferably somewhere safe.

Dot #4: Connection and Sharing. Seems as though every 21st century device needs to be able to send, receive, and share information, often in a collaborative way.

Dot #5: Output. In some ways, this concept is losing relevance. Once displayed, stored and shared, the need to generate anything beyond a screen image is beginning to seem very twentieth century. But it’s still around and it needs to be part of the package.

Dot #6: Portable. Truly portable devices must be sufficiently small and lightweight, serve the other needs in dots 1-5, and also carry or collect their own power, preferably sufficient for a full day’s (or a full week’s) use between refueling stops.

Let’s take these ideas one at a time and see where the path leads.

Dot #1: Input. Basically, the “man-machine” interface can be achieved in about five different ways. Mostly, these days, we use our hands, and in particular, our fingertips, and to date, this has served us well both on keyboards (which require special skill and practice, but seem to keep pace with the speed of thinking in detail), and on touch screens (which are not yet perfect, but tend to be surprisingly good if the screen is large enough). ThinkGeek sells a tiny Bluetooth projector that displays a working keyboard on any surface.

There are also the often under-rated Wacom tablets, which use a digital pen, but these, like a trackpad, require abstract thinking–draw here, and the image appears there. It’s better, more efficient, and ultimately probably more precise, to use a stylus directly on the display surface. So far, touch screens are the best we can do. Insofar as portable computing goes, this is probably a good thing because the combination of input (Dot #1) and display (Dot #2) reduces weight and allows the user direct interaction with the work.

This combination is becoming popular not only on tablets (and phones), but on newer touch-screen laptops, such as the HP Envy x2 (visit Staples to try similar models). The combination is useful on a computer, but more successfully deployed on a tablet because the tablet can be more easily manipulated–brought closer to the eyes, handled at convenient angles, and so on.

Moving from the fingers to other body parts, speaking with a computer has always seemed like a good idea. In practice, Dragon’s voice recognition works, as does Siri, both based upon language pattern recognition developed by Ray Kurzweil. So far, there are limitations, and most are made more challenging by the needs of a mobile user: a not-quiet environment, the need for a reliable microphone and digital processing with superior sensitivity and selectivity, artificial intelligence superior to the auto-correct feature on mobile systems–lots to consider, which makes me think voice will be a secondary approach.

Eyes are more promising. Some digital cameras read movement in the eye (eye tracking), but it’s difficult to input words or images this way–the science has a ways to go. The intersection between Google Glass and eye movement is also promising, but early stage. Better still would be some form of direct brain output–thinking generates electrical impulses, but we’re not yet ready to transmit or decode those impulses into messages suitable for input into a digital device. This is coming, but probably not for a decade or two. Also, keep an eye on the glass industry–innovation will lead us to devices that are flexible, lightweight, and surprising in other ways.

So: the best solution, although still improving, is probably the combination tablet design with a touch-screen display, supported, as needed on an individual basis, by some sort of keyboard, mouse, stylus, or other device for convenience or precision.

(BTW: Wikipedia’s survey of input systems is excellent.)

Dot #2: Display. Projection is an interesting idea, but lumens (brightness) and the need for a proper surface are limiting factors. I have more confidence in a screen whose size can be adjusted. (If you’re still thinking in terms of an inflexible, rigid glass rectangle, you might reconsider and instead think about something thinner, perhaps foldable or rollable, if that’s a word.)

Dot #3: Storage has already been transformed. For local storage, we’re moving away from spinning disks (however tiny) and into solid state storage. This is the secret behind the small size of the Apple MacBook Air, and all tablets. These devices demand less power, and they respond very, very quickly to every command. They are not easily swapped out for larger storage devices, but they can be easily enhanced with SD cards (size, speed, and storage capacity vary). Internal “SSD” (Solid State Drive) storage will continue to increase in size and decrease in cost, so this path seems likely to be the one we follow for the foreseeable future. Add cloud storage–inexpensive, mostly reliable (we think), mostly private and secure (we think)–and the opportunity for low-cost storage on portable devices becomes that much richer. Of course, the latter requires a connection to Dot #4: Connection and Sharing. Connecting these two dots is the core of Google’s Chrome strategy.

Thoughts on Mobile Computing, Part One

It’s risky to generalize, but I suspect the following is true for most people, most of the time:

  • Higher-stakes projects involving significant amounts of concentration require a quiet work environment with a more powerful computer and a larger screen; and
  • Lower-stakes projects, initial planning, and work-on-the-go require a lightweight computing device, often with a smaller screen

Certainly, some people must work on the go, or prefer the flexibility of a more powerful computer on the go, and others, quite sensibly, prefer just one device, not two (or three, or more). Seems to me, the high-stakes machine ought to be a versatile notebook connected to a 20-inch or larger screen, with proper backup, and the low-stakes machine ought to weigh as close to two pounds as possible, offer all-day battery life, and easily connect to any WiFi, 3G, 4G, or whatever other service may be available. That is: the portable really ought to be portable, and not so much a full-scale machine, unless you feel the need to combine functions into a single box.

When the latest upgrades to the MacBook Air were released last week, I thought I might finally break my pattern–iPad for portability, iMac for serious work in the home office–with an in-between machine that could do both. After hours of research and experimentation with the Air in various settings, I decided to wait until the autumn to upgrade the iPad, once again leaving the portable out of the mix. Why? The Air does not connect via 3G/4G, but instead requires a separate network to be established on my iPhone (clunky solution, but it works). And, to my astonishment, I actually prefer the touch screen to the keyboard when computing in a mobile environment. I sacrifice a degree of functionality for the reduced weight and increased connectivity, but then, most of my mobile work does not result in an elaborate finished product–this, I do on a computer.

I suppose that’s why the call from HP was so intriguing. Here was an opportunity to experiment with a portable computer in my daily life–something I have not done in several years, and an opportunity to experiment with a Windows computer, something I had not done in a decade or more. And, the computer would be running the intriguing Windows 8 operating system, the one with the cool colored tiles. What’s more, my sample model offered 3G/4G capability.

At the same time, I decided to learn more about the $250 Google Chrome portable computer sold by Samsung. It, too, offered the connectivity that the Air sadly lacks.

Keeping an open mind about new and better ways to work, I tried the HP EliteBook 2170p. The specs are similar to a MacBook Air, and the cost is about the same (around $1,000 for the basic model). It weighs less than 3 pounds–more than that seems too heavy, at least for me, to be carried everywhere–and the feature set is similar, too. There’s a backlit keyboard, an SD card slot (more versatile here, and, BTW, absent on even the latest MacBook Air), similar processor options, and no HDMI slot (odd to see a VGA port on a contemporary computer, but this one is designed for older-style business use). Screen resolution is about the same, but the images on the Air are more vivid, and the type is easier to read. The 11-inch screen size is comfortable for light work, but challenging for serious word processing and spreadsheets–and this is true for the Air as well. It’s possible to use this computer with a 3G/4G network; this feature is sadly lacking on the Air.

Today is Sunday the 16th, and I have lunch at noon. That’s easy to see on the colorful Windows 8 interface. Right now, it’s 68 degrees and it’s going to rain today. Click through for details, and the weird, non-intuitive interface design returns. It’s unclear what to do next, the brief instructions are unclear, and the type is often too small to read. Click once or twice more, and the whole deal looks like Windows from the turn of the century. For reasons I do not understand, several “charms” appear on the right side of the screen. These offer a combination of settings, search, and device access–not sure why these are shown separately, but the more I dive into Windows 8, the more I come up with “why would they do it that way?” questions. I’ve now spent several hours with Windows 8. Overall, I’d give it a “meh.”

How about the HP Elite as an example of a contemporary portable computer? It’s okay, but the design is boxy, and it’s a little heavy for the 11-inch screen it carries (the 13-inch MacBook Air also weighs 3 pounds). It offers just one operating system (the Air offers both Windows and Mac for about the same price).

For one-quarter of the price, I think most people would be able to accomplish most of their tasks on Samsung’s Chromebook, which costs $250 ($329 with 3G, which is very useful). No fuss: buy one today at a neighborhood Staples store. This is a basic, 2.4-pound (lightweight!) portable–not fancy, but reasonably well-built and functional, if you limit your desire for functionality to word processing, web browsing, spreadsheets, presentations, email, watching movies, listening to music, and a few dozen other activities. The Chrome Web Store makes a great many Chrome apps available for use on any Chrome computer, and on any computer with the Chrome browser installed. This level of flexibility is hard to find in the Apple world and nearly impossible to find in the Windows world–Google and its users benefit from a design approach that is totally 21st century, and, in fact, totally new in the 2010s. It’s fresh, inexpensive, and it works.

Here’s a small sample of the many apps available in the Google Chrome store.

It’s not easy being a Windows computer maker in 2013. There is so much legacy–so many enterprise interests to be served–that there is limited space available for innovation. Ease of use, portability, interoperability, slick interfaces, web app stores: these are not ideas that fit comfortably into an enterprise structure that demands standardization (the new approach is focused, mostly, upon customization), a work-anywhere approach, high levels of security and reliability, rock-solid applications, and more. HP is one of many Windows-based computer makers that struggle with these issues. This situation has been made much more challenging by Apple’s elegant design and passionate user base, and, now, things are even more difficult because Google is changing the game with a far lower cost structure. And in here, somewhere, is the growing Android ecosystem–not quite as well-positioned, but a significant force just the same.

Swinging back around to the simple demands of getting work done in the office and at home, I think I’ll stand pat with the iPad, because it weighs about a pound and a half and easily connects to either WiFi or 3G (my next one will be 4G), and an iMac at home with a larger screen. No, the iPad is not perfect (but I have surprised myself with its flexibility, and with my comfort level in using the touch screen almost all of the time and the accessory keyboard almost not at all). Yes, I pay more for the privilege of using the integrated Apple system. Comparables are emerging, sometimes offering features that Apple cannot or will not, but in the horserace, it’s Apple, Google, and perhaps Android, with Windows off in the distance in a post-20th-century haze.

Coming in Part 2: thinking a few years into the future.

Welcome to the Connectome

Diffusion spectrum image shows brain wiring in a healthy human adult. The thread-like structures are nerve bundles, each containing hundreds of thousands of nerve fibers.
Source: Van J. Wedeen, M.D., MGH/Harvard U. To learn more about the government’s new connectome project, click on the brain.

You may recall recent coverage of a major White House initiative: mapping the brain. In that statement, there is ambiguity. Do we mean the brain as a body part, or do we mean the brain as the place where the mind resides? Mapping the genome–the sequence of the four types of molecules (nucleotides) that compose your DNA–is so far along that it will soon be possible, for a very reasonable price, to purchase your personal genome pattern.

A connectome, in the words of the brilliantly clear writer and MIT scientist Sebastian Seung, is “the totality of connections between the neurons in [your] nervous system.” Of course, “unlike your genome, which is fixed from the moment of conception, your connectome changes throughout your life. Neurons adjust…their connections (to one another) by strengthening or weakening them. Neurons reconnect by creating and eliminating synapses, and they rewire by growing and retracting branches. Finally, entirely new neurons are created and existing ones are eliminated, through regeneration.”
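Seung’s four kinds of change map neatly onto operations on a weighted graph. Here’s a minimal sketch (my own illustration, not the book’s) of a connectome in code, with one method per kind of change:

```python
# A minimal sketch (my illustration, not Seung's) of a connectome as a
# weighted directed graph, with one operation per kind of change he lists.
from collections import defaultdict

class Connectome:
    def __init__(self):
        self.synapses = defaultdict(dict)  # neuron -> {neuron: weight}

    def reweight(self, pre, post, delta):
        """Strengthen or weaken an existing connection."""
        self.synapses[pre][post] = self.synapses[pre].get(post, 0.0) + delta

    def reconnect(self, pre, post, weight=0.1):
        """Create a brand-new synapse between two neurons."""
        self.synapses[pre][post] = weight

    def rewire(self, pre, post):
        """Eliminate a synapse, as when a branch retracts."""
        self.synapses[pre].pop(post, None)

    def regenerate(self, new_neuron):
        """Add an entirely new neuron, unconnected at first."""
        self.synapses.setdefault(new_neuron, {})
```

Unlike a genome, which would be a fixed string of letters, every one of these operations can run at any moment of your life–which is the whole point.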

In other words, the key to who we are is not located in the genome, but instead, in the connections between our brain cells–and those connections are changing all the time. The brain, and, by extension, the mind, is dynamic, constantly evolving based upon both personal need and stimuli.

With his new book, the author proposes a new field of science for the study of the connectome, the ways in which the brain behaves, and the ways in which we might change how it behaves. It isn’t every day that I read a book in which the author proposes a new field of scientific endeavor, and, to be honest, it isn’t every day that I read a book about anything that draws me back into reading even when my eyes (and mind) are too tired to continue. “Connectome” is one of those books that is so provocative, so inherently interesting, so well-written, that I’ve now recommended it to a great many people (and now, to you as well).

Seung is at his best when exploring the space between brain and mind, the overlap between how the brain works and how thinking is made possible. For example, he describes how the brain represents the idea of Jennifer Aniston–a job that is done not by one neuron, but by a group of them, each recognizing a specific aspect of what makes Jennifer Jennifer. Blue eyes. Blonde hair. Angular chin. Add enough details and the descriptors point to one specific person. The neurons put the puzzle together and trigger a response in the brain (and the mind). What’s more, you need not see Jennifer Aniston. You need only think about her and the neurons respond. And the connection between these various neurons is strengthened, ready for the next Jennifer thought. The more you think about Jennifer Aniston, the more you think about Jennifer Aniston.
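That loop–cells that fire together wire together–has a classic name, Hebbian learning, and it can be sketched in a few lines. (The feature names, starting weights, and learning rate below are all invented for illustration; nothing here comes from the book.)

```python
# Toy Hebbian sketch (my illustration): feature neurons that fire along
# with the "Jennifer" concept neuron strengthen their connections to it.
features = {"blue_eyes": 1, "blonde_hair": 1, "angular_chin": 1, "red_hair": 0}
weights = {name: 0.1 for name in features}  # initial connection strengths
LEARNING_RATE = 0.05

def think_about_jennifer():
    """One 'Jennifer thought': the concept fires; co-active features strengthen."""
    concept_active = 1
    for name, active in features.items():
        weights[name] += LEARNING_RATE * active * concept_active  # Hebb's rule

for _ in range(10):
    think_about_jennifer()
print(weights)  # co-firing features grow stronger; "red_hair" stays weak
```

Run it and the weights tell the story: the more often the Jennifer neurons fire together, the easier the next Jennifer thought becomes.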

From here, it’s a reasonable jump to the question of memory. As Seung describes the process, it’s a matter of strong neural connections becoming even stronger through additional associations (Jennifer and Brad Pitt, for example), repetition (in all of those tabloids?), and ordering (memory is aided by placing, for example, the letters of the alphabet in order). No big revelations here–that’s how we all thought it worked–but Seung describes the ways in which scientists can now measure the relative power (the “spike”) of the strongest impulses. Much of this comes down to the image resolution finally available to long-suffering scientists who had the theories but not the tools necessary for confirmation or further exploration.

Next stop: learning. Here, Seung focuses on the random impulses first generated by the neurons; then, through repetition and reinforcement of patterns, a bird song (for example) emerges. Not quickly, nor easily, but as a result (in the case of the male zebra finches he describes in an elaborate example) of tens of thousands of attempts, the song emerges and can then be repeated because the neurons are, in essence, properly aligned. Human learning has its rote components, too, but our need for complexity is greater, and so the connectome and its network of connections are far more sophisticated, and measured in far greater quantities, than those of a zebra finch. In both cases, the concept of a chain of neural responses is the key.
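That trial-and-error convergence is easy to model. In the toy sketch below (my own; every number is invented), a random perturbation to the song is kept only when it moves the song closer to the tutor’s template, and after thousands of attempts, the song emerges:

```python
# A toy sketch (my illustration) of song learning as trial and error:
# random perturbations are kept only when they bring the song closer
# to a tutor's template, as with Seung's zebra finches.
import random

template = [0.8, 0.2, 0.9, 0.5]              # the tutor song (made-up numbers)
song = [random.random() for _ in template]   # the juvenile's first babble

def error(candidate):
    """Distance between a candidate song and the tutor's template."""
    return sum((a - b) ** 2 for a, b in zip(candidate, template))

for attempt in range(10_000):                # tens of thousands in real finches
    trial = [note + random.gauss(0, 0.05) for note in song]
    if error(trial) < error(song):           # reinforce only what sounds better
        song = trial

print([round(note, 2) for note in song])     # converges toward the template
```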

Watch the author deliver his 2010 TED Talk.

From here, the book becomes more appealing, perhaps, to fans of certain science fiction genres. Seung becomes fascinated with the implications of cryonics, or the freezing of a brain for later use. Here, he covers some of the territory familiar from Ray Kurzweil’s “How to Create a Mind” (recently, a topic of an article here). The topic of fascination: once we understand the brain and its electrical patterns, is it possible to save those patterns of impulses in some digital device for subsequent sharing and/or retrieval? I found myself less taken with this theoretical exploration than with the heart and soul of, well, the brain and mind that Seung explains so well. Still, this is what we’re all wondering: at what point does human brain power and computing brain power converge? And when they do, how much control will we (as opposed to, say, Amazon or Google) exert over the future of what we think, what’s important enough to save, and what we hope to accomplish?

What About Those Other Countries?

For this blog, most readers are located in the U.S., and Canada. The countries with the fewest readers are in the countries indicated in white. I suspect there is more happening, or not happening, in those nations, and that’s what this particular blog article will address.

It would be easy for me to dismiss nations with no readers as simply uninterested in the issues, or, in some cases, unable to read the blog in its native English, but this article is about a lot more than this particular blog (though it would be fun to claim readers in every nation on the planet). Before I get into the research, and related thoughts, here’s a list of where this blog is not read. In the case of Africa and Asia, I’m surprised by the number of nations where people have read this blog.

In South America, only French Guiana, and in Europe, only Kosovo, tally at zero blog readers to date. No surprise that North Korea is also on that list; the other Asian nations are Kazakhstan, Tajikistan, and Turkmenistan. In Africa, there are many countries–probably about half the countries on the continent–not yet in the fold: Western Sahara, Mauritania, Mali, Niger, Chad, Sudan, Eritrea, Ethiopia, Somalia, Burkina Faso, Guinea, Sierra Leone, Liberia, Cote d’Ivoire, Togo, Benin, Cameroon, Central African Republic, Republic of the Congo, Gabon, Equatorial Guinea, Sao Tome and Principe, Democratic Republic of the Congo, Tanzania, Botswana, Mozambique, and Madagascar.

Seeking reliable statistics about some or most of these nations as some sort of a cluster, I discovered a useful United Nations site that listed most of these nations, along with many smaller ones (in Oceania, for example), in category 199, “Least Developed Countries.”

I then reviewed the 2012 report on the UN’s Millennium Development Goals. The goals are focused on poverty, human rights and infrastructure, disease prevention, hunger, gender equality and education, and although education is the only item on this list with an undeniable direct connection to Internet use, much of Africa continues to face severe challenges in labor productivity, one link in the chain to open and available Internet access. Furthermore, more than half of the world’s children not in school live in sub-Saharan Africa, another suggestive indicator. What’s more, only about 1 in 4 people are literate in this region, and only about 1 in 3 are literate in southern Asia, so limited Internet use in these regions may be of lesser importance than sheer literacy.

In 2011, there were 7 billion people on earth. Two thirds of them had no Internet access. Once again, sub-Saharan Africa and other developing regions posted the lowest rates.

So that’s the official global view. I wondered about the local view, and found a site called Edge Kazakhstan with a story about the local popularity of Facebook, and about the popularity of the Internet, generally, in Kazakhstan.

Statistics say social media sites are among the most accessed in the country… Number one is Russian social networking page VKontakte (http://vkontakte.ru), second is world leader Facebook (www.facebook.com) and in the third place is another Russian site, Odnoklassniki (www.odnoklassniki.ru)…Askar Zhumagaliyev, Kazakhstan’s Minister of Communications and Information, in a June Twitter posting said that 34.4 percent of the nation was using the internet as of early 2011 – compared to 18.2 percent in early 2010. He has also tweeted that he plans for the entire country to be covered by high-speed internet by 2015.
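For scale, the minister’s two numbers imply that Kazakhstan’s online population nearly doubled in a single year. A quick check of the arithmetic (mine, and it naively assumes the quoted figures and a constant growth rate):

```python
# Quick check of the quoted figures: 18.2% of Kazakhstan online in early
# 2010, 34.4% in early 2011 -- what annual growth does that imply, and how
# soon would full coverage arrive if (implausibly) the pace were sustained?
start, end = 0.182, 0.344
growth = end / start - 1
print(f"year-over-year growth: {growth:.0%}")          # ~89%

share, year = end, 2011
while share < 1.0:
    share *= 1 + growth
    year += 1
print(f"full coverage (naive extrapolation): {year}")  # lands before 2015
```

No real-world adoption curve stays that steep, but the exercise shows why a 2015 target for nationwide high-speed coverage did not sound crazy to the minister.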

Social Bakers keeps track of social media use in nations throughout the world. I checked Tanzania because Facebook use in Africa is growing very rapidly, despite a relatively low rate of Internet penetration, which is also growing fast, especially in the centers of population. Five years from now, connectivity in all but the most challenged or remote areas of Africa and Asia will not reach international averages, but it will be far higher than it is today (reliable statistics are hard to find, but Vodafone, a large international supplier, will likely serve between one-third and one-half of the technically available population).

I suspect that what I write may not be what most people want to read in Kazakhstan or Tanzania, but I would not be surprised to find the list of nations that have never experienced the pleasure of reading this blog reduced by half within the next year (or so). The majority of my readers will continue to be found in the U.S., Canada, and the U.K., but I expect that a future map will show a wider distribution than the one I published today.

By the way, if you are reading this blog in a nation other than the U.S., I wonder if you would just comment and tell us where you are in the world. Thanks!

Dreaming of a Newer Deal

I just finished reading a book about the New Deal, that remarkable FDR-era transformation of America for the average American. Certainly, I knew and understood pieces and parts of the story, but there were so many factors, I needed a good writer (the author won a Pulitzer Prize) to put the whole thing into context for me. The author is Michael Hiltzik, and the book is called, simply, “The New Deal: A Modern History.”

What struck me about the story was just how bumpy the ride turned out to be. There was no master plan, only a sense from FDR’s Brain Trust that things were bad and that, rather than wasting a perfectly useful crisis, they ought to do powerful good. FDR was not the mastermind, but instead the political driver, the leader who maintained the vision and maneuvered around lots of political messes and–nothing new here–around other people in Washington who offered little assistance and, sometimes, threw up difficult obstacles.

Mostly, though, the book made me wonder about our need, and our ability, to bring something like a New Deal into focus in this century. Roosevelt and his team worked their miracles in the 1930s, so that’s nearly 80 years ago. There was a lot of activity in the 1960s, too, beginning under Kennedy, and then, on a significant scale, under Johnson, and, since then, Obama has accomplished some good things that may last.

Given the vision, the opportunity, the need, the political will, the right circumstances, and, as with Roosevelt, the better part of a decade to get the work done, what might we hope to accomplish? I am, by no means, an expert, but I thought I’d get the conversation going with a list that seems, well, obvious. Here goes:

  1. The elimination of poverty in the U.S. As the theoretical administration begins to work on issues, high on that list ought to be urban poverty (1 in 3 children of color in the Philadelphia area live below the poverty line).
  2. Equal pay and equal opportunity for all Americans. Yes, there are laws. Now, we need programs to make those laws do the intended work.
  3. A rational retirement program so that all Americans can retire without fear of poverty. The New Deal got this ball rolling, but the current reality is terrifying: half of all people over 70 are unlikely to be able to feed themselves within the next decade.
  4. A modernization of the American approach to education. Too much money spent for uninspiring results, too much control in the hands of the unions, irrelevant curriculum, nearly half of high school students dropping out in the most troubled areas, out-of-control student loans and college costs, only about 1 in 4 Americans graduating college, massive shifts in technology, lack of resources, crumbling infrastructure, more.
  5. A modernization of the American approach to transportation. In the digital age, it’s time to rethink cars, highways, fuel consumption, pollution, driving, lack of public transportation in so many regions, lack of high-speed rail connections now available in so many other nations, lack of innovative new urban and suburban solutions.
  6. Government under the control of lobbyists, big money, and lifetime politicians. This entrenched thinking, these outmoded ways of operating, this political deadlock, those campaign funding rules, this list alone can keep a new Brain Trust busy for the entire decade.
  7. Controlling the debt. Policies and practices in this financial realm are probably just the beginning of serious rethinking of our financial policies. Of course, this ought to begin at home; a few good programs might help Americans shift from a life built on credit cards to a life built on savings and investments.
  8. A modernization of crime and punishment. Like several of the other agenda items, this one will require a lot of interaction with state governments. The number of people in prison, and the reasons leading to their incarceration, provide sufficient ammunition for serious government programs.
  9. Reducing the size and complexity of government. Physician, heal thyself.
  10. And, swinging back to the old New Deal… Reworking Social Security for the next generations. It’s time for some serious work so that everyone, or just about everyone, can live safe and secure, especially as we are living longer, healthier lives.

No, I didn’t touch international relations, and yes, I probably missed a lot of important ideas. Still, I think we ought to get this started.