I’m reading philosopher Thomas Nagel’s most recent book, Mind and Cosmos,
in which he argues that the “neo-Darwinian conception of nature is almost certainly false” – in fact, that is the subtitle of the book. Key to his argument is that it doesn’t appear that the cornerstone of neo-Darwinism, namely random mutations to genes that turn out to be fitness enhancing, could ever come up with the vast variety of large-scale variations in body morphology and physiological systems we see in the world, many of which are argued to be “irreducibly complex” (i.e. all-or-nothing from an evolutionary fitness perspective). The architecture of the eye, and the molecular motor that powers the flagellum in bacteria, are examples of these complex biological structures that would seem (to some) impossible to evolve through simple random mutation.

Nagel, and others (and not just intelligent design (ID) folks, some of whom think God orchestrates evolution) see the need for some more directed form of evolution to explain the diversity and complexity of life on our planet. Nagel seems to think mind / consciousness, and not God in the traditional conception of the term, might fit the bill. But to me that seems rather extreme, and goes against the “reductive materialism / naturalism” that has been so successful at explaining how the world works over the last few hundred years.

One wonders if a less drastic solution, one that tweaks the mechanism of neo-Darwinian evolution, might be invoked to save the day for materialism / naturalism.

As discussed elsewhere (see this post for details), I’ve recently been studying epigenetics, where gene expression can be modulated by methylation (among other mechanisms). In methylation, a methyl group can attach to a particular DNA base pair, causing the gene to “wrap up” around a histone, preventing it from being transcribed into RNA, and thereby suppressing expression of the protein that the gene codes for. This methylation can be driven by environmental factors; is quite localized, specific, and repeatable; and can occur not only in somatic cells but also in germ-line cells (eggs and sperm), and thereby get passed down to several subsequent generations.

While the epigenetic changes can be adaptive both for the organism in which they first occur and for its progeny, they aren’t permanent changes to the base-pair sequence of genes, so they aren’t heritable variations over thousands or millions of years, like we see across species in the world. So they are “Lamarckian” to a point, but not in the true sense of the word – giraffe necks could get longer for a generation or two after (hypothetical) epigenetic changes occurred as a result of a giraffe stretching to reach the high leaves on a tree, but eventually the epigenetic changes would “wear off” and subsequent generations would go back to having short necks.

But what if epigenetic changes via methylation not only silenced genes, but also made those silenced genes more prone to mutation? The methylation would not only be a signal that “this gene isn’t worth expressing in the current environment”, it would also be signaling “this gene is not very useful in its current form in the current environment, so target it for mutation”. With an elevated mutation rate specific to maladaptive genes lasting several generations, new variations should more readily arise in subsequent generations, accelerating experimentation with the parts of the genome where changes would be most likely to be beneficial in a rapidly changing environment.
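To make the idea concrete, here is a toy simulation (Python) of the hypothetical mechanism. It is purely illustrative – the rates, genome size, and the cartoon notion of “fitness” are all made up, and it ignores selection entirely – but it shows how concentrating mutations on silenced, maladaptive loci speeds up adaptation compared to a uniform mutation rate:

```python
import random

# Toy model of the idea above: a "genome" is a list of loci, each either adaptive (1)
# or maladaptive (0) in the current environment. Methylation marks the maladaptive loci
# and, hypothetically, raises their mutation rate. All numbers are made up for illustration.

GENOME_SIZE = 100
BASE_MUTATION_RATE = 0.001   # per locus, per generation
METHYLATION_BOOST = 20       # hypothetical fold-increase in mutation rate at methylated loci
GENERATIONS = 200

def evolve(targeted: bool) -> float:
    """Return the final fraction of adaptive loci for one lineage."""
    genome = [0] * GENOME_SIZE   # start fully maladapted to a new environment
    for _ in range(GENERATIONS):
        for i, allele in enumerate(genome):
            rate = BASE_MUTATION_RATE
            if targeted and allele == 0:      # methylated because currently maladaptive
                rate *= METHYLATION_BOOST     # elevated mutation rate at that locus
            if random.random() < rate:
                genome[i] = 1 - genome[i]     # mutation flips the allele
    return sum(genome) / GENOME_SIZE          # fraction of adaptive loci = crude "fitness"

random.seed(42)
print(f"uniform mutation rate, final fitness:  {evolve(targeted=False):.2f}")
print(f"targeted mutation rate, final fitness: {evolve(targeted=True):.2f}")
```

Even without any selection in the model, biasing the “random tinkering” toward the loci flagged as maladaptive drives the toy genome toward the adapted state far faster than the uniform rate does.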

This sort of elevated mutation rate in parts of genes that have been methylated (silenced) is exactly what this study [1] found. To quote the abstract:

Our results … provid[e] the first supporting evidence of mutation rate variation at human methylated CpG sites using the genome-wide single-base resolution methylation data.

It’s not clear that this targeting of random mutations to specific maladaptive genes could result in the type of big changes Nagel and others point to when criticizing neo-Darwinian evolution. But it seems like a way to facilitate a sort of “semi-Intelligent Design”, without an explicit designer, by focusing “random tinkering” with the genome in places where genetic changes could do the most good in the current environment.

[1] BMC Genomics. 2012;13 Suppl 8:S7. doi: 10.1186/1471-2164-13-S8-S7. Epub 2012 Dec 17.

Investigating the relationship of DNA methylation with mutation rate and allele frequency in the human genome.

Xia J, Han L, Zhao Z.

Full text: http://www.biomedcen…1-2164/13/S8/S7


BACKGROUND: DNA methylation, which mainly occurs at CpG dinucleotides, is a dynamic epigenetic regulation mechanism in most eukaryotic genomes. It is already known that methylated CpG dinucleotides can lead to a high rate of C to T mutation at these sites. However, less is known about whether and how the methylation level causes a different mutation rate, especially at the single-base resolution.

RESULTS: In this study, we used genome-wide single-base resolution methylation data to perform a comprehensive analysis of the mutation rate of methylated cytosines from human embryonic stem cells. Through the analysis of the density of single nucleotide polymorphisms, we first confirmed that the mutation rate in methylated CpG sites is greater than that in unmethylated CpG sites. Then, we showed that among methylated CpG sites, the mutation rate is markedly increased in low-intermediately (20-40% methylation level) to intermediately methylated CpG sites (40-60% methylation level) of the human genome. This mutation pattern was observed regardless of DNA strand direction and the sequence coverage over the site on which the methylation level was calculated. Moreover, this highly non-random mutation pattern was found more apparent in intergenic and intronic regions than in promoter regions and CpG islands. Our investigation suggested this pattern appears primarily in autosomes rather than sex chromosomes. Further analysis based on human-chimpanzee divergence confirmed these observations. Finally, we observed a significant correlation between the methylation level and cytosine allele frequency.


CONCLUSIONS: Our results showed a high mutation rate in low-intermediately to intermediately methylated CpG sites at different scales, from the categorized genomic region, whole chromosome, to the whole genome level, thereby providing the first supporting evidence of mutation rate variation at human methylated CpG sites using the genome-wide single-base resolution methylation data.

PMID: 23281708

Are We Alone?

This recent blog post on Scientific American by Caleb Scharf suggests that the hypothetical existence of the multiverse makes Fermi’s paradox harder to explain. The author suggests:

If reality is actually composed of a vast, vast number of realities, and if ‘anything’ can, does, and must happen, and happen many, many, times, this presumably has to include the possibility of living things (whatever they’re composed of) skipping between universes willy-nilly.

Therefore, if travel between universes in the multiverse is a possibility, then we should have been visited by aliens from other universes, not just from other planets in our own universe. And yet we see no evidence of other intelligent life, either native or foreign to our universe. Hence the author postulates that perhaps the lack of aliens can be used as evidence against the multiverse theory.

Ironically, I was prompted to do a web search for “multiverse Fermi paradox” and discovered this SciAm article by an argument I was formulating that goes in exactly the opposite direction, namely that the existence of the multiverse might be a natural explanation for the Fermi Paradox.

Here is how the argument goes.

Let’s assume that the inflationary multiverse model is correct, and a huge number of bubble universes are being birthed every moment, creating a (nearly) infinite ensemble of bubble universes, each with very different physical constants and therefore very different laws of physics. Scientists agree that if this is the case, the space of possibilities for the laws of physics is vast, and that only an infinitesimal fraction of them will have laws that are conducive to galaxy/planet formation, and therefore have the possibility of intelligent life. But there will be a few, and naturally (per the weak anthropic principle) we’d find ourselves in one such hospitable universe – i.e. one that appears “fine-tuned” for (intelligent) life to emerge. So far this is just the standard argument to explain fine-tuning via the inflationary multiverse theory.

But perhaps we can push this line of reasoning further by asking just how hospitable we’d expect an “average” universe that contains intelligent life to be – i.e. one that is fine-tuned at least as well as the minimum required for intelligent life to emerge.

Here is a relevant section from the wikipedia entry on the Anthropic Principle:

Probabilistic predictions of parameter values can be made given:

  • a particular multiverse with a “measure”, i.e. a well defined “density of universes” (so, for parameter X, one can calculate the prior probability P(X0) dX that X is in the range X0 < X < X0 + dX), and
  • an estimate of the number of observers in each universe, N(X) (e.g., this might be taken as proportional to the number of stars in the universe).

The probability of observing value X is then proportional to N(X) P(X). (A more sophisticated analysis is that of Nick Bostrom.)[42] A generic feature of an analysis of this nature is that the expected values of the fundamental physical constants should not be “over-tuned,” i.e. if there is some perfectly tuned predicted value (e.g. zero), the observed value need be no closer to that predicted value than what is required to make life possible. The small but finite value of the cosmological constant can be regarded as a successful prediction in this sense.

Of particular relevance is the sentence about “over-tuning” – that the observed value need be no closer to the perfectly tuned predicted value than what is required to make life possible. Let’s consider the “prediction” mentioned – that the cosmological constant should be small, but not too small and certainly not exactly zero. There is a range for the cosmological constant within which galaxies/planets can form, and therefore be compatible with life. Too big a cosmological constant and the matter in the universe would fly apart too quickly for galaxies/planets, and therefore life, to form. Too negative a cosmological constant and the universe would shrink back into a “Big Crunch” before galaxies/planets had enough time to form and evolve life.

A cosmological constant of zero falls within the range of life-compatible values, but it is just one of a very large (theoretically infinite) number of cosmological constants that fall within the “life-compatible” range. So if the cosmological constant was randomly selected from within the acceptable range, it would be extremely unlikely to be exactly zero, or even very close to zero relative to the size of the full life-compatible range. If it turned out to be exactly zero when scientists measured it, that would pretty much rule out the theory of the inflationary multiverse – it would just be too big a coincidence that it turned out to be exactly zero if the explanation for why it is so small (i.e. random selection limited to within the life-compatible range of values) affords a vast range of possible (small) values, including zero. In that case, there would have to be some other explanation – e.g. intelligent design or a unique solution to the laws of physics that only allows for a cosmological constant of exactly zero.

But thankfully for the proponents of the inflationary multiverse theory, it turns out the cosmological constant is within the life-compatible range, but not exactly zero – in fact it is fairly close to the upper bound of the range of values that allow galaxies, planets, and therefore life, to exist. In other words, the cosmological constant appears tuned, but not over-tuned, to allow for life to exist.

So how does this line of reasoning apply to the Fermi Paradox?

Suppose the scientists are right that there are a fair number (~30 or so) of fundamental constants of physics, like the cosmological constant, that must all fall within respective relatively narrow ranges for the resulting universe to be compatible with the emergence of life. Since we exist to observe our universe, by definition they all must fall within their respective life-compatible ranges. But here is the crucial point – we wouldn’t expect any of them to fall smack dab in the center, or even very close to the center, of the compatible range – per the same argument as made above for the cosmological constant, namely that would be too big a coincidence. Instead, you’d expect each of them to be randomly sampled from within their respective compatible ranges – with some of them closer to the center “sweet spot” of the life-compatible range and some of them towards the extremes, where conditions would make it possible, but not optimal, for life to emerge.

If a large number of such samplings were done to create many life-compatible universes (as would naturally happen in a quickly inflating multiverse), only an infinitesimally small number of them would have all 30 parameters falling very close to the center of their compatible ranges, resulting in what might be called “Garden of Eden Universes” – super-fecund universes where life evolves very quickly, easily, and prolifically. The vast majority of life-compatible universes would instead be barely compatible with life – due to the fact that, statistically, it’s very likely that at least a few of their 30 parameters would be like the cosmological constant in our universe, and fall relatively close to the upper or lower boundary of their life-compatible ranges. So by this argument, we would expect the vast majority of life-compatible universes to not be super-compatible, but instead to be just barely compatible with the existence of life, just like the cosmological constant should be tuned, but not over-tuned, to be compatible with life.
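Here is a small Monte Carlo sketch (Python) of that counting argument. The specific numbers are arbitrary stand-ins – each constant is normalized so its life-compatible range is [0, 1], the “sweet spot” is taken to be the central 20%, and “near the edge” means within 10% of a boundary – but the qualitative conclusion is insensitive to these choices:

```python
import random

N_PARAMS = 30           # rough number of fine-tuned constants assumed in the text
N_UNIVERSES = 100_000   # Monte Carlo sample of life-compatible universes
SWEET_SPOT = (0.4, 0.6) # central 20% of each normalized life-compatible range
EDGE = 0.1              # within 10% of either boundary counts as "near the edge"

random.seed(0)
garden_of_eden = 0
near_edge = 0
for _ in range(N_UNIVERSES):
    # each parameter sampled uniformly within its (normalized) life-compatible range
    params = [random.random() for _ in range(N_PARAMS)]
    if all(SWEET_SPOT[0] <= p <= SWEET_SPOT[1] for p in params):
        garden_of_eden += 1
    if any(p < EDGE or p > 1 - EDGE for p in params):
        near_edge += 1

print(f"analytic chance all {N_PARAMS} parameters hit the sweet spot: {0.2 ** N_PARAMS:.1e}")
print(f"simulated 'Garden of Eden' universes: {garden_of_eden} of {N_UNIVERSES}")
print(f"universes with at least one parameter near an edge: {near_edge / N_UNIVERSES:.1%}")
```

With these assumptions, the chance that all 30 parameters land in the sweet spot is 0.2^30 ≈ 10^-21 (so the simulated count is essentially always zero), while nearly every sampled universe has at least one parameter near the edge of its life-compatible range.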

So if the vast majority of life-compatible universes are just barely life-compatible, we would naturally expect to find ourselves in one of those, rather than in one of the very rare Garden of Eden Universes (although see Note 1 for a subtle caveat). In a universe that is just barely life-compatible, you’d expect life to be possible, but extremely rare. Which is exactly what we see – life on Earth, but nowhere else as far as our instruments can see. Hence this gives a logical explanation for Fermi’s paradox – life is extremely rare in our universe because our universe is a typical example drawn from the set of all life-compatible universes, and therefore just barely compatible with the existence of life.

An analogy for this explanation of the Fermi paradox can be made with human abundance and productivity. There is a set of parameters characterizing life events and circumstances (e.g. gender, race, country of origin, native intelligence, work ethic, education, family circumstances, proclivity for risk taking, creativity, mental and physical health, charismatic personality, supportive spouse, spark of a world-changing idea, etc.) that, if perfectly tuned, lead to an incredibly abundant and productive life, like that of Bill Gates, Mark Zuckerberg or Elon Musk. But it’s only for the extremely rare individual that every one of these parameters lines up perfectly. For the vast majority of people, at least a few of these parameters aren’t tuned so well, and therefore they end up with a much less abundant, less productive life. Missing out on even a small number of these important parameters can make a huge difference in the abundance and productivity one enjoys. From the average observer’s perspective, he/she is overwhelmingly more likely to find him/herself as someone with very low (or moderate) success than as someone who is super-successful.

But perhaps we should look at the average dollar, rather than the average person, to make the analogy with observers in universes more accurate (i.e. Bill Gates is equivalent to a Garden of Eden Universe and the dollars in his bank account are equivalent to observers in that Garden of Eden Universe). What this explanation for the Fermi Paradox is suggesting by analogy is that given the incredibly large size of the population, and the incredible rarity of super-successful individuals like Bill Gates, the average dollar will be overwhelmingly more likely to be found in the bank account of an unsuccessful or moderately successful person, rather than in the bank account of a Bill Gates-like super-successful person. Despite Bill Gates’ massive wealth, the extra dollars that are concentrated in his bank account are swamped by the vast number of people with relatively few dollars in each of their bank accounts.

This analogy points out that the argument depends on the combined wealth of Bill Gates-like people being much smaller than the combined wealth of all the unsuccessful or moderately successful people in the population – which in fact isn’t really the case in the US. According to a recent article in the New York Times, the “richest 1 percent in the United States now own more wealth than the bottom 90 percent”. By analogy, there must be MANY more universes with a small number of observers in order to swamp the number of observers summed over the very rare super-hospitable universes, which have many more observers per universe than is typical of the set of life-compatible universes. But given the range of possible sets of values for the ~30 physical parameters that define a universe, and how narrow a range is required for each of them to create even a (seemingly) barely hospitable universe like our own, it’s not hard to imagine that the multiverse will contain many orders of magnitude more barely hospitable universes than universes which are almost perfectly tuned to support life and therefore contain life in super-abundance.

In this very amusing TED video, Jim Holt expounds on a related theme – he suggests it’s much more likely (and less intimidating) that we live in a mediocre universe than in one of the superlative ones (e.g. the best of all possible universes) – think of the responsibility of having to live up to the standards of conduct in the best of all possible universes!

This explanation for the Fermi paradox makes a number of predictions:

  1.  If we carefully analyze the physical constants to determine which range of values for each of them is compatible with life (holding the other constants fixed – see Note 2), we will find at least a few that are like the cosmological constant, namely their value falls close to the boundary of the life-compatible range, thereby making life possible, but rare.
  2. Conversely, if we find that almost all of the physical constants are near the sweet spot of the life-compatible range, or that life is not sensitive to where in the range a constant falls, but we continue to see that life is very rare in our universe, then this is likely not the main explanation for the Fermi paradox.
  3. We will also observe that not all values within the life-compatible range for a physical parameter are equally good at promoting the emergence of life. Some values within the life-compatible range will result in more hospitable universes than others, with some sort of distribution (perhaps Gaussian?) centered on the “sweet spot” at the center of the life-compatible range. The distributions for different parameters will likely have different shapes – i.e. different standard deviations – with some forming very sharp peaks with long tails, and others having a rather wide range of nearly equivalently life-friendly values within the life-compatible range. Again, this theory predicts that at least a few of the parameters in our universe will have values fairly far out on the “tails” of the distribution of life-compatible values, making life possible, but rare.
  4. If, contrary to current evidence, we discover that life is very common in our universe, that should be considered evidence against the inflationary multiverse theory, since by this argument life should be rare in a typical element of the ensemble of universes in an inflationary multiverse.

In this paper, Alan Guth, the inventor of the Inflationary Universe theory, makes an interesting but different argument from this one regarding how an inflationary universe could explain the Fermi Paradox. He calls it the “Youngness Paradox”. Here is his argument, in a nutshell:

If eternal inflation is true, the region of space that is inflating is continuously growing at an exponential rate. In fact, he says that his theory predicts that the inflating space increases in volume by a factor of e^(10^37) every second. Now that is pretty darn fast!  Since the number of “pocket universes” that condense out of this space every second is proportional to its volume, that means right now there exist e^(10^37) times more pocket universes than existed one second ago. Therefore there are nearly infinitely more universes one second younger than ours than there are universes our universe’s age. If it takes some minimum time for life to evolve, then there should be infinitely more young universes that have just crossed that age threshold and generated their first intelligent life than there are older universes where life has had time to evolve more than once. So we should be overwhelmingly more likely to live in a universe that is just old enough to have evolved its first intelligent life – us – but not yet had time to evolve another intelligent species. Guth says he doubts this argument, because of the measure problem. Here is a paper by Guth on his possible solution to this measure problem, which might avoid the Youngness Paradox. My problem with Guth’s argument is that it seems pretty clear that in our universe we appear to be well beyond the minimum time required for life to evolve – in fact life could have evolved millions or even billions of years earlier, so an argument that says our universe shouldn’t even be one second older than it needs to be for life to evolve seems very implausible.
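Just to put a number on how lopsided that ratio is: e^(10^37) is far too large to compute directly, but working in log space makes the point. This is a rough sketch, with the 10^37 figure simply taken from Guth’s statement above:

```python
import math

# The volume of inflating space (and hence, per the argument, the rate of pocket-universe
# formation) grows by a factor of e^(10^37) each second. That number overflows any float,
# so work with its logarithm instead. The 10^37 figure is taken from Guth's statement above.
LN_GROWTH_PER_SECOND = 1e37                       # natural log of the per-second growth factor
log10_ratio = LN_GROWTH_PER_SECOND / math.log(10) # convert to a base-10 exponent

print(f"universes one second younger than ours outnumber ours by ~10^({log10_ratio:.2e})")
```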

In conclusion, if this or any other explanation for the Fermi paradox shows that life is indeed exceedingly rare in our universe, it would mean that we have an awesome responsibility to ensure the rare spark of life that we represent is not extinguished by our own ignorance or carelessness.

Note 1: There is one caveat to the argument that must be addressed – namely that the number of observers in the Garden of Eden Universes, where lots of different life forms emerge, would be much greater than in any single, barely life-compatible universe like our own (where very few life forms exist), increasing the probability that a random observer would find him/herself in a Garden of Eden Universe rather than a barely life-compatible universe.

But if the number of universes in the multiverse is growing as quickly as the inflationary model suggests, the number of barely life-compatible universes should grow at a rate that quickly outstrips the advantage the Garden of Eden Universes have in number of observers – making it so that a random observer would still expect to find him/herself in a barely life-compatible universe. In the extreme, the number of observers a finite-sized Garden of Eden Universe could support would be finite, while the number of barely life-compatible universes would be (nearly) infinite. Of course the number of Garden of Eden Universes would also be (nearly) infinite, but it seems logical to suppose the ratio (# barely life-compatible universes / # Garden of Eden Universes) would be much greater than the ratio (# observers in an average Garden of Eden Universe / # observers in an average barely life-compatible universe). So the total number of observers across the multiverse who find themselves in a barely compatible universe (which equals the number of observers per barely compatible universe * the # of barely compatible universes) would be much greater than the total number of observers across the multiverse who find themselves in a Garden of Eden Universe (which equals the number of observers per Garden of Eden Universe * the # of Garden of Eden Universes). So the average observer would be overwhelmingly likely to find him/herself in a barely life-compatible universe.
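A toy numerical version of this bookkeeping, with completely made-up counts just to show the structure of the comparison:

```python
# Completely made-up counts, just to show the structure of the comparison: the conclusion
# only requires the ratio of universe counts to dwarf the ratio of observers per universe.
obs_per_eden = 1e12        # observers in a hypothetical Garden of Eden Universe
obs_per_barely = 1e4       # observers in a barely life-compatible universe
n_eden = 1e10              # Garden of Eden Universes in some huge (finite) sample
n_barely = 1e30            # barely life-compatible universes in the same sample

total_eden = obs_per_eden * n_eden          # all observers who live in Eden-like universes
total_barely = obs_per_barely * n_barely    # all observers who live in barely-compatible ones

p_barely = total_barely / (total_barely + total_eden)
print(f"P(random observer is in a barely life-compatible universe) = {p_barely:.12f}")
```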

Note 2: The ~30 parameters that need to be tuned for life to be possible in a universe need not be independent of each other. For example, a slightly higher cosmological constant (universe expansion rate) could still support life if the gravitational constant was greater as well, allowing matter to clump into galaxies and planets despite the faster expansion rate. While the range of life-compatible values for a parameter might shift as a result of these interdependencies, the argument above still holds, namely that if the parameters for a new life-compatible universe are randomly sampled from within these (interdependent) acceptable ranges, at least a few of them will be near the extreme of their respective life-compatible ranges, making life barely possible in the vast majority of newly created pocket universes.

The Edge.org question for 2010 is How is the Internet Changing the Way We Think? The site has lots of interesting answers, including quite a bit of doom and gloom about how we’re distracting ourselves to death, penned by smart people like Clay Shirky, Danny Hillis, and Dan Dennett. But I was particularly intrigued by a couple of passages from the response of evolutionary biologist Richard Dawkins.

Like many of the other respondents, Dawkins has observed a dumbing down of the individual as a result of the lower quality of media we’re exposed to, and the information firehose that seems to be preventing us from focusing too hard or too long on anything that requires deep thought. But at the same time, Dawkins (and other respondents) see room for optimism. As Dawkins put it:

But I want to leave negativity and nay saying and end with some speculative — perhaps more positive — observations. The unplanned worldwide unification that the Web is achieving (a science-fiction enthusiast might discern the embryonic stirrings of a new life form) mirrors the evolution of the nervous system in multicellular animals. …

I am reminded of an insight that comes from Fred Hoyle’s science fiction novel, The Black Cloud. The cloud is a superhuman interstellar traveller, whose ‘nervous system’ consists of units that communicate with each other by radio — orders of magnitude faster than our puttering nerve impulses.

But in what sense is the cloud to be seen as a single individual rather than a society? The answer is that interconnectedness that is sufficiently fast blurs the distinction. A human society would effectively become one individual if we could read each other’s thoughts through direct, high speed, brain-to-brain radio transmission. Something like that may eventually meld the various units that constitute the Internet.

I agree with Dawkins and many of the other experts who give their opinion to the Edge.org question. The jury is still out on just how the Internet is impacting the thinking of individuals. It gives us the opportunity to be aware of so much more than has ever been possible. But whether this will translate into knowledge individuals can employ to lead better lives isn’t yet certain.

What is indisputable is that the Internet is affording opportunities for collective intelligence and coordinated action on a scale that has never before been possible. But what is less certain is whether we will find ways to effectively nurture and harness this collective energy. That seems to be what Web 2.0 is all about. At the moment, we appear to be going through the equivalent of a Cambrian Explosion of projects & startups trying to capitalize on web-enabled collaborative systems. There are literally hundreds of big and small apps trying to leverage Twitter alone.

As Jeff Stibel (@Stibel) suggests in his new book Wired for Thought (which I highly recommend), we are likely to soon see a period of mass extinction of social media startups as the novelty of this new form of collaboration and communication wears off. Such a die-off will resemble the massive pruning of connections that occurs in the human brain to eliminate redundant and unhelpful connections during childhood. The human brain’s synaptic down-selection during maturation is astonishing and quite draconian, going from about 10 quadrillion connections in a three year-old to a mere 100 trillion by adulthood, which means that only about 1 in 100 synapses survives (source: Edelman’s book Neural Darwinism).

Hopefully the fittest & most useful (as opposed to the most amusing) will survive, and the result will be a set of sites and services that will facilitate true collective intelligence and collaborative action to move humanity forward, pulling us overstimulated and distracted individuals along with it.


I’m on a plane to Boston to attend a workshop sponsored by the XPrize Foundation, and hosted by MIT & the renowned futurist Ray Kurzweil, head of Singularity University. The workshop is being held to discuss the merits of creating a $10M XPrize to accelerate the development of direct brain-computer interfaces, or BCI for short.

I’ve always thought BCI was a great idea. I’m not the first to call it the “next step in human evolution”. I’ve thought, “wouldn’t it be empowering to be able to surf the web with your thoughts?” When this vision of BCI becomes a reality, you would have all the knowledge of the world much closer than your fingertips, since the knowledge would be inside your head, and instantly accessible, just like your ‘native’ thoughts and memories.

This optimism has prompted me to engage in research to make BCI a reality, as described in this recent Computer World article, in which I was (mis)quoted as saying that people will be interacting with computers via chips implanted in their brains by the year 2020. It will likely take longer than that, given the technical hurdles and regulatory approval process, but there don’t appear to be any show-stoppers that would prevent it from happening within the next 20 or 30 years at the outside.

Now, after spending a little over a month as an active user of Twitter, I’ve got serious reservations about the wisdom of this vision of the future. Don’t get me wrong. Twitter is an amazing service. But it’s nothing like surfing the web. Surfing the web is a ‘pull’ activity. I’m in control of what I’m looking for, and what I’m reading. In contrast, Twitter is a ‘push’ service. Once I’ve set up a list of people to follow (I’ve got about 80 now), posts and links come at me relentlessly.

The perpetual stream of information wouldn’t be a big deal, except for one simple fact – it’s addicting. I’m fascinated by the information contained in a large fraction of the links the people I follow are posting. As a result, I find myself spending hours reading their posts, and when I’m done there is a whole new series of tweets, and the process repeats. And better semantic filtering technology seemingly wouldn’t help. It’s not that I’m overwhelmed by crap. There is too much interesting stuff to read, and better filters would probably just point out more stuff that I’m missing, making the whole thing even more addicting!

I’m no slacker, and I’m usually a very self-disciplined person. Just ask my friends, family and co-workers. But the stream of information coming at me from Twitter is just so interesting and so distracting, it is hard to focus on other things. I don’t think I could describe the experience as well as Jim Stogdill (@stogdill) has done in his post Skinner Box? – There’s an App for That. I’ll just quote a couple of passages, but anyone interested in the addictive side of Twitter, and what it can do to your thinking ability, should read it in its entirety.

In describing his conflicted attitude towards Twitter, Jim says:

I can either drink liberally from the fire hose and stimulate my intellect with quick-cutting trends, discoveries, and memes; but struggle to focus. Or I can sign off, deactivate, and opt out. Then focus blissfully and completely on the rapidly aging and increasingly entropic contents of my brain, but maybe finish stuff. Stuff of rapidly declining relevance.

This rings so true with me, and like Jim, I find it hard sometimes to willfully opt out – the stream is just too enticing. As Jim observes, we’re like rats in a Skinner box, self-stimulating with our reward of choice, real-time information. We’re ‘digital stimulusaholics’. Jim goes on to say:

For the last couple of years I’ve jacked in to this increasing bit rate of downloadable intellectual breadth and I’ve traded away the slow conscious depth of my previous life. And you know what? Now I’m losing my self. I used to be a free standing independent cerebral cortex. My own self. But not any more. Now I’m a dumb node in some uber-net’s basal ganglia. Tweet, twitch, brief repose; repeat. My autonomic nervous system is plugged in, in charge, and interrupt ready while the gray wrinkly stuff is white knuckled from holding on.

What if Twitter is turning us into mindless cogs in a big machine, and the machine turns out to be dumb?  As Jim describes it:

What if the singularity already happened, we are its neurons, and it’s no smarter than a C. elegans worm?

Now imagine just how much more addicting the stream would be if it was coming at us in real-time through a two-way link hooked directly to our brains. Sure there would be an ‘off’ button – responsible scientists (like me!) will make sure of that.  But would anyone be able to push it?  I’m far from certain.

So I’m stuck in a difficult position.

Tomorrow I’m meeting with Mr. Singularity himself, Ray Kurzweil, and a bunch of other proponents of brain-computer interfaces to brainstorm about offering a big cash XPrize for the first group to make high-bandwidth BCI a reality. And I’m thinking it may not be such a good idea for the future of humanity.

I expect Kurzweil will argue that merging our slow squishy brains with our machines is the only option we have, and that rather than turning our brains to mush, it will jack them up to run thousands of times more efficiently than they do today, since transistors are so much faster than neurons.

Recent studies have shown that humans aren’t very good at multi-tasking, and paradoxically, people who multi-task the most are worse at multi-tasking than people who usually focus on one thing and only occasionally multi-task. So much for the learning / brain-plasticity argument that ‘we’ll adapt’.

Perhaps our brains could be reconfigured to be better at multi-tasking if augmented with silicon?  Perhaps with a BCI, we could be reading an article and talking to our spouse at the same time. How weird would that be?  And with such a significant change in my cognition, would I still feel like me?  Would it feel like there were more than one of me?  Talk about schizophrenia!

Call me a conservative, but I know enough about the brain and human psychology to realize that it maintains its hold on reality by a rather tenuous rope, carefully woven from many strands over millennia by evolution. That rope is bound to get seriously frayed if we try to jack up our neural wiring to run many times faster, or to be truly multi-threaded in the time frame Kurzweil is talking about for the singularity, i.e. 2030 to 2050.

But on the other hand, one might conclude we’re damned if we do and damned if we don’t. Whether we like it or not, things aren’t slowing down. The amount of information in the stream is doubling every year. If instead of jacking in with BCI, we take the conservative route and leave our brains alone, the Twitter experience shows us we’re likely to be sucked into the increasingly voluminous and addicting flood of information, left with only our meager cognitive skill set with which to cope with the torrent. I’m afraid our native, relatively feeble minds may not stand a chance against the selfish memes lurking in the information stream.

Sigh. Maybe I’m over-reacting…  If I don’t chicken out, I will try to bring up these concerns during the BCI XPrize discussions starting tomorrow.  I may even tweet about it. The official hashtag for the workshop is #bcixprize. Just click the link to follow along – it should be fascinating…


  1. Here is another interesting perspective by Todd Geist on what it might be like to be a small part of a global information network, like the organisms on Pandora in the movie Avatar. As I pointed out in my comment to Todd’s post, the difference between Pandora’s creatures and humans is that they had millions of years of evolution to cope with the direct mind-to-mind linkages, while it’s happening to us in the course of at most a few generations.
  2. Here is a skeptical perspective on the whole idea of the singularity.

So far, social media seems to have a lot of roar, but very little teeth when it comes to facilitating social change. Users of services like Twitter and Facebook seem more interested (sometimes compulsively) in entertainment, ‘branding’ & connecting with friends than in initiating positive social change. The always-insightful Venessa Miemis (@venessamiemis) hit the nail on the head in the comments to her blog post What is Social Media? [the 2010 edition] when she said:

Does all this online talking matter if nothing comes of it in the real world?

Neal Gorenflo (@gorenflo) elaborates on the potential pitfalls of conversation:

Connecting and conversing is necessary, but  again, the danger is that we get stuck in conversation. There is such a thing as being too connected. We have cognitive and time limits. Web 2.0 can overload us with messages, shrink attention spans, absorb our time, erode focus, and thus disrupt our ability as citizens to find common ground and take action together. It’s possible that through Web 2.0 we may be, as in the title of cultural critic Neil Postman’s influential book, amusing ourselves to death.

Venessa goes on to ask the big question:

How do we make something happen? What are small things we can start doing to get the hang of real coordination, collaboration, and action?

I’m all for starting with something small but nonetheless tangible – to give us something to build on and learn from.  Why not shoot first, and aim later?  The worst that can happen is we fail fast and learn from our mistakes.

With that goal in mind, I’m fascinated by an initiative by my Carnegie Mellon University colleague Priya Narasimhan (@priyacmu) to use crowdsourcing and social media to help locate, assess & repair potholes around Pittsburgh [see news story w/ video].

Pittsburghers are given three options for reporting potholes – dial 311 on their mobile phone, log it at the website pittsburghpothole.com, or best of all, report it using a free iPhone app called iBurgh.

The iBurgh app is cool because it is so easy to use. Simply snap a photo of a pothole with your iPhone. The image is automatically geotagged with its location, and sent to the city’s public works department. Once three pictures of the same pothole are logged, the city promises to repair it within five days. Granted it’s not an instantaneous response, but we’ve got a lot of potholes in Pittsburgh! The tool can also be used to report issues like needed snow removal – a big problem around here this time of year…

Pittsburgh City Council member Bill Peduto said the program makes Pittsburgh the nation’s first large city to implement a government-integrated iPhone app. He goes on to say:

“This type of technology that merges social media with democracy is going to boom within the next year.”

This is exciting for me partly because it is being done by a friend.  But more importantly, it illustrates something we saw emerging with the DARPA Red Balloon Challenge which might be called crowdsensing – using a distributed network of tech-enabled individuals to track and report on significant (and sometimes not-so-significant) events happening in their world.

Another nice example is the Twitter Earthquake Detection Program, which encourages people to report when the earth moves via Twitter or on a dedicated “Did You Feel It?” website.

I’m hopeful an even bigger and better example will happen soon in the form of a regime change in Iran, thanks in part to Twitter. As I observed recently, Twitter has given the citizens of Iran a way to tell the story of their quest for freedom to the world in real-time and in a way that engages public interest, at a time when traditional media channels have been locked out by their oppressive government.  I wish them the best of luck, and will be tracking the events on Twitter as they unfold.  When (not if) they succeed, it will be an important milestone for the emerging Global Brain.

Until then, I’m happy to start small.  Excuse me while I go report a few potholes…

We may have just witnessed an important milestone in the awakening of the web.

While this point may be controversial, I contend that future exponential growth of the digital economy will eventually require getting humans out of the loop. If computing power continues to double every 18 months in accordance with Moore’s Law, utilizing all those cycles will eventually require computers to start talking directly to other computers, without the goal of assisting, informing or entertaining human beings.

Why? Because human population is virtually flat and limits to human cognition mean there is only so much digital content people can effectively digest.

According to a recent University of California study, the average US citizen consumes an estimated 34 GB of data daily, mostly in the form of TV & video games. Collectively, American households consumed 3.6 zettabytes of information of all kinds in 2008, the researchers estimated. While this seems like a lot and is likely to continue growing for some time as video resolution gets higher, our appetite for bytes will inevitably flatten out, particularly if we continue to get more of our information through mobile devices.
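As a rough sanity check, the two figures are consistent with each other (the ~300 million US population figure below is my assumption, not a number from the study):

```python
# Back-of-the-envelope check that the two reported figures are mutually consistent.
# The ~300 million US population figure is my assumption, not a number from the study.
gb_per_person_per_day = 34
us_population = 300e6
days_per_year = 365

bytes_per_year = gb_per_person_per_day * 1e9 * us_population * days_per_year
print(f"~{bytes_per_year / 1e21:.1f} zettabytes per year")   # close to the reported 3.6 ZB
```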

If machine-to-machine communication will eventually need to pick up the slack in demand for the ever increasing bandwidth, how and when will it happen and what form will it take?

To some degree it is happening already. For quite some time there has been a shaky alliance between news aggregators (e.g. Google News) and machine-driven decision tools, best exemplified by automated financial trading systems.  The widely reported United Airlines incident last year showed just how risky this combination can be. For anyone who missed it, United Airlines stock plummeted from $12 to $3, losing 75% of its value over the course of a few minutes on Sept. 8th 2008, and in the process wiped out over $1B in shareholder value.

Insider trading?  Nope.

It turns out the trigger was a small mistake by a Florida newspaper that accidentally reran a story from 2002 about UAL’s bankruptcy without a date, making it appear to be fresh news. Within a minute, the automated scanning system of Google News, which visits more than 7,500 news sites every 15 minutes, found the story and, thinking it new, added it to its breaking news stream. An employee at Bloomberg financial news saw the story and rebroadcast it to thousands of readers, many of whom follow United Airlines. Within minutes United’s stock tanked, largely as a result of automated trading programs that saw the price dropping and sold the stock to prevent additional losses.

Once the mistake was cleared up and trading resumed, UAL’s stock recovered most of the $1B it had lost, but the incident was an important lesson for the burgeoning industry of automated news scanning and financial trading. What went wrong during the United Airline incident was a combination of human error and runaway automation that both propagated and acted upon the mistake.

You could try to blame the human element of the equation since in this case without the human error of resurrecting an out-of-date story, the incident would never have happened. But Scott Moore, head of Yahoo News, hit the nail on the head when he said:

This is what happens when everything goes on autopilot and there are no human controls in place or those controls fail.

Now, in what could be an important (but potentially risky) step further, we are beginning to see computers acting as both the producers and consumers of content, without a human in the loop. In this case it is called computational journalism, and it consists of content generated by computers for the express purpose of consumption by other computers.

Academics at Georgia Tech and Duke University have been speculating about computational journalism for some time. But now, the folks at Thomson Reuters, the world’s largest news agency, have made the ideas a reality with a new service they call NewsScope. A recent Wired article has a good description of NewsScope:

NewsScope is a machine-readable news service designed for financial institutions that make their money from automated, event-driven, trading. Triggered by signals detected by algorithms within vast mountains of real-time data, trading of this kind now accounts for a significant proportion of turnover in the world’s financial centres.

Reuters’ algorithms parse news stories. Then they assign “sentiment scores” to words and phrases. The company argues that its systems are able to do this “faster and more consistently than human operators”.

Millisecond by millisecond, the aim is to calculate “prevailing sentiment” surrounding specific companies, sectors, indices and markets. Untouched by human hand, these measurements of sentiment feed into the pools of raw data that trigger trading strategies.

One can easily imagine that with machines deciding what events are significant and what they mean, and other machines using that information to make important decisions, we have the makings of an information ecosystem that is free of human input or supervision. A weather report suggesting a hurricane may be heading towards Central America could be interpreted by the automated news scanners as a risk to the coffee crop, prompting automated commodity trading programs to bid up coffee futures. Machines at coffee-producing companies could see the price jump, and trigger the release of stockpiled coffee beans onto the market, all without a human hand in the whole process. Machines will be making predictions and acting on them in what amounts to a fully autonomous economy.
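To illustrate the kind of loop being described, here is a toy sketch of a machine-only pipeline. This is not Reuters’ actual NewsScope interface; the class, scoring rule, and threshold are all invented for illustration:

```python
from dataclasses import dataclass

# Toy sketch of a machine-only news-to-trading loop. This is NOT Reuters' actual NewsScope
# interface; the class, scoring rule, and threshold are invented purely for illustration.

@dataclass
class NewsEvent:
    entity: str        # the commodity or company the story is about
    sentiment: float   # -1.0 (very negative) .. +1.0 (very positive)

def sentiment_score(headline: str, entity: str) -> float:
    """Crude stand-in for an algorithmic sentiment scorer."""
    negative_words = {"hurricane", "bankruptcy", "shortage", "strike"}
    if entity.lower() not in headline.lower():
        return 0.0
    hits = sum(word in headline.lower() for word in negative_words)
    return -0.5 * hits

def trading_signal(event: NewsEvent, threshold: float = -0.4) -> str:
    """A second machine consumes the score and emits an action, with no human in the loop."""
    if event.sentiment <= threshold:
        return f"BUY {event.entity} futures"   # expected supply risk -> price likely to rise
    return "HOLD"

headline = "Hurricane forecast to hit Central American coffee-growing regions"
event = NewsEvent(entity="coffee", sentiment=sentiment_score(headline, "coffee"))
print(trading_signal(event))   # -> BUY coffee futures
```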

This could be an alternative route to the Global Brain I previously envisioned as the end result of the TweetStream application. By whichever route we get there (and there are likely others yet to be identified), the emergence of a viable, worldwide, fully-automated information exchange network will represent a historic moment. It will be the instant our machines no longer depend entirely on humans for their purpose. It will be a critical milestone in the evolution of intelligence on our planet, and a potentially very risky juncture in human history.

The development of NewsScope appears to be an important step in that direction. We live in interesting times.

1/4/09 Update

Thomson Reuters, the developers of NewsScope, today acquired Discovery Logic, a company whose motto is “Turning Data into Knowledge”. Among its several products is Synapse, designed to help automate the process of technology transfer of government-sponsored healthcare research by the NIH Office of Technology Transfer (OTT). They describe Synapse as:

An automated system to match high-potential technologies with potential licensees. Using Synapse, OTT now uses a “market-push” approach, locating specific companies that are most likely to license a certain technology and then notifying the companies of the opportunity.

Using the same product, OTT also found it could also successfully perform “market pull,” in which OTT can identify multiple technologies in its inventory in which a company may find interest.

Apparently Reuters isn’t interested in just automating the process of generating and disseminating news, but in automating technology transfer as well.

I’m sitting on the couch at my in-laws connected to the global network via my cell phone, and mesmerized by events unfolding in real-time in Iran.  While I sit relaxing with family in the afterglow of Christmas, half way around the world people like this man:

with rocks in both hands and his cell phone in his mouth, are serving simultaneously as fighters and reporters.  And I’m doing my tiny part, as observer and cheerleader, spreading the word with tweets like this one:

My fascination is as much with the process as with the events themselves. CNN, Reuters and the BBC are relying almost exclusively on unconfirmed posts by ‘citizen reporters’ sharing news, pictures & videos on services like Twitter, Twitpic & YouTube.

We are experiencing the future of news, with the line forever blurred between those who make the news and those who share the news.  For the first time we can experience news anywhere and anytime, as it happens. We are all so much more intimately connected than ever before. Global consciousness is awakening. We live in interesting times.

Read more about Twitter’s critical role in the unfolding drama in Iran, and the potential downsides of using social media to instigate change.

Stowe Boyd and Freddy Snijder have posted an interesting dialog about the streams and the “global sensorium”.  Freddy’s original post, Stowe’s reply, and Freddy’s reply to Stowe, are all worth reading.

I like what both have to say, and the fact that dialogs like this are occurring is a sign that a collective intelligence is already emerging. But I believe the two have missed several important points.

First, both Boyd & Snijder seem resigned to our current set of individual cognitive capabilities. As a neuroscience researcher, I’m confident that one day advances in our understanding of the brain, and in particular brain-computer interfaces, will endow individuals with new cognitive capabilities. Virtual telepathy, infallible memory, vision at a distance are all within the realm of possibility, and could redefine what it means to be human. In fact, I’m participating in a workshop at MIT on January 7-8th sponsored by the XPrize Foundation and Ray Kurzweil’s Singularity University to discuss creating an XPrize competition to turbo-charge progress in brain-computer interfaces. So big advances may be in store for our future…

But for now at least, both Boyd & Snijder are correct in observing that we’re stuck with our rather limited individual cognitive capabilities. Given these cognitive limitations, there is a serious question about just how individuals can best cope with the exponential growth of both information and societal complexity.  Freddy Snijder poses it this way:

The question remains how this global sensorium can be effectively used by all the individuals that make it up.

A minor point – ALL individuals are unlikely to ever effectively use any technology or service. There’ll always be those who resist or are denied access to new technology. A big question is how to manage this digital divide.

But more fundamentally, I don’t believe any technology can possibly exist that will restore the degree of individual understanding and agency that it seems we crave as human beings. Let’s face it, the global knowledge base and real-time information stream are growing at such a rapid pace that even with the best collaborative filtering technology, it’s inevitable that individuals will continue to know more and more about less and less. At some point, it seems inevitable that we reach a point where we know almost everything about next to nothing!

The unavoidable reality of information overload doesn’t sit well with people, particularly folks who pride themselves on keeping up with the latest in information technology. We are programmed by evolution with the drive to understand and control all aspects of our environment. As a result, there are many hot start-ups today promising to tame the torrent of information and return each of us to an idyllic state of information mastery.

I’d love it if this were the case – I too am an information junkie and have always hoped to find a way to change the world through personal engagement. But my gut tells me that global society is quickly becoming far too complex for any single individual to understand, to say nothing of influence, the global sweep of human events.

If the organization of biological brains is any indication (and I’m betting it is), the Global Mind will be an emergent phenomenon, and its workings will likely be incomprehensible to individual humans, just like individual neurons are oblivious to the thoughts to which their activity contributes. Like the neurons in our brain, individual people participating in the functioning of the global sensorium may see little evidence of the part they are playing, and may not even realize the questions that the collective intelligence is working to solve.

The parallel growth of collective intelligence and decrease in individual agency raises fundamental questions that will need to be answered if humanity is to survive and prosper:

  • Can we overcome the egocentric perspective that drives each of us to want to stand out and get ahead, often at the expense of our neighbor?
  • Can we transcend our self-centered tendencies and accept playing a small, largely unsung role in the workings of the whole?

In short, can we find a way to leverage technology to allow individuals to coordinate their modest local activities (both on-line & off) into a global, decentralized intelligence while remaining engaged in the process, despite realizing that their individual contributions will inevitably be tiny in the grand scheme of things?

The path is far from clear, but I remain hopeful.


Humans, like many species, are highly social creatures. The process of natural selection has instilled in us a drive to connect with other people. Those ancestors that were well connected got support from their community and prospered, allowing them to pass their gregariousness down to their offspring.


With the advent of modern communication technology we’ve developed more and more effective ways to ‘scratch the itch’ to connect with others at greater speeds and distances. Social networks like Facebook and Twitter are the latest in the line of personal connectivity technology.

While these services can provide much value by allowing people to link with friends, ideas and events in new ways, they are not without a dark side. As their popularity has mushroomed, it has become increasingly apparent that these services can be addictive, and this tendency is especially prevalent among today’s youth, for whom fitting into the social fabric has always seemed critically important.

This New York Times article, “Driven to Distraction, Some Teenagers Unfriend Facebook”, documents some of the troubles teenagers are having with Facebook addiction, and managing their compulsion to connect with their social network. Psychology Professor Walter Mischel of Columbia University says:

Facebook is the marshmallow for these teenagers

referring to the treat young kids found irresistible in his now famous series of experiments probing how young children cope with, and often succumb to, temptation.

Professor Mischel found that kids who could not delay gratification, but instead snatched the marshmallows at the earliest opportunity, turned out to be under-achievers as adults.

So the big question seems to be:

Is the 24/7 connected culture we find ourselves embedded within today serving us, or is it driving us (and our kids) to distraction?

My guess – probably both.

One thing seems clear – Driven by our compulsion to connect, we humans are beginning to serve the global network at least as much as the global network is serving us. It remains to be seen whether the emerging collective intelligence will help steer humanity towards healthy and creative forms of social networking, or undermine the well-being of the very nodes that form it…

Update 2/03/2010: A new study in the journal Psychopathology found a strong correlation between excessive internet activity (especially at social media sites) and depression. The study authors say:

“Our research indicates that excessive internet use is associated with depression, but what we don’t know is which comes first — are depressed people drawn to the internet or does the internet cause depression?

“What is clear, is that for a small subset of people, excessive use of the internet could be a warning signal for depressive tendencies.”

My hobby is analyzing real-time social media from the perspective of neuroscience. I’m fascinated by the analogy between Twitter and the brain. The recent discussions about the etiquette of ‘thank you’ posts on Twitter got me thinking – how do neurons in the brain handle thank yous?

At first it seems like a silly question. Upstream neurons don’t thank downstream neurons for passing on the message they sent. A pre-synaptic neuron sends neurotransmitters to the post-synaptic neuron and that would seem like the end of it. Right? Or is it?

In fact, if the post-synaptic neuron fires soon after the pre-synaptic neuron sends it a message, the synapse between the two neurons is strengthened according to the spike-timing-dependent plasticity (STDP) rule I discussed previously. So while there is no explicit acknowledgment or ‘thank you’ by the pre-synaptic neuron for the equivalent of a retweet by the post-synaptic neuron, the pre-synaptic neuron’s gratitude (to stretch the analogy) manifests itself as a strengthening of the synapse, the equivalent of the ‘social bond’ between the two neurons.
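For reference, here is a minimal sketch of the exponential STDP rule being alluded to. The parameter values are generic textbook-style choices, not taken from any particular neuron model:

```python
import math

# Minimal sketch of the exponential STDP rule referred to above. Parameter values are
# generic textbook-style choices, not taken from any particular neuron model.
A_PLUS, A_MINUS = 0.01, 0.012     # maximum weight change for potentiation / depression
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # decay time constants, in milliseconds

def stdp_weight_change(t_pre: float, t_post: float) -> float:
    """Change in synaptic weight given pre- and post-synaptic spike times (in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # post fires after pre (the 'retweet' went out): strengthen the synapse
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    else:         # post fired before (or with) pre: weaken the synapse
        return -A_MINUS * math.exp(dt / TAU_MINUS)

print(stdp_weight_change(t_pre=10.0, t_post=15.0))   # small positive change (potentiation)
print(stdp_weight_change(t_pre=15.0, t_post=10.0))   # small negative change (depression)
```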

The equivalent response on Twitter would be if I started posting more content that I think will be appreciated by someone who has shown a tendency to retweet my posts in the past – in other words, sending more good stuff their way.  In a sense, the neurons are just ‘doing their job’ of passing on the best information they can find to those other neurons they think will listen, rather than explicitly greasing the skids of communication by exchanging extra messages expressing gratitude.

Perhaps this disconnect between how effective communication appears to happen in the brain (without thank yous) and how messages are passed on Twitter today is part of my ambivalence about ‘thx for the RT!’ posts.

But as we see from the wild west nature of real-time social media today, and last night’s successful attack that took down Twitter, the Global Brain is still in the early stages of development. Maybe before it got so complex and sophisticated, the brain was more like Twitter?

This is purely speculation, but I strongly suspect that when brains were more primitive, and proto-neurons in those primitive brains were trying to figure out whether or not it was worth talking to their neighbors, there must have been something that was ‘in it for them’ to encourage message passing.

Perhaps, just as vampire bats share blood to build social ties, early neurons might have shared chemicals to help nourish each other and build supportive networks. The survival value of this ‘cellular food’ might have encouraged the initial exchanges, which got co-opted later by natural selection for communication purposes as multi-cellular organisms evolved.

But a more likely possibility seems to be that proto-synapses between proto-neurons served a communication function from the start. A rather dense 2009 paper from Nature Neuroscience by neuroscientists at the University of Cambridge on the evolution of the synapse seems to support this idea:

“Strikingly, quintessential players that mediate changes dependant on neuronal activity during synaptic plasticity are also essential components of the yeast’s response to environmental changes.”

In other words, these scientists appear to be suggesting that early semi-independent single-cell organisms may have developed proto-synapses to communicate information about their shared environment, like the presence of food or toxins nearby. Perhaps through communication, these early colonies of cells might have reacted in concert, and thereby coped more effectively with threats or opportunities presented by their shared environment. Such ‘communication for a common cause’ would have had survival advantage for the cell colony, encouraging its elaboration through the process of natural selection. Anthropomorphically speaking, the cells would have been saying ‘if we listen to each other, we can all get ahead.’

All this points to the content of the message itself as the carrier of value in these early colonies of cells, without need for explicit exchange of ‘thank you’ messages. Listening and being heard were both of intrinsic value to individual primitive cells, and to the colony as a whole.

So can we get away with such a ‘thankless’ model on Twitter, or is a virtual pat on the back in return for digital kindness, in the form of a thank you post for retweets, still necessary to grease the skids of communication in the rapidly evolving global brain?

Ever feel like you're part of a big machine?

This blog is an exploration of what being part of a collective might mean for each of us as individuals, and for society.

What is it that is struggling to emerge from the convergence of people and technology?

How can each of us play a role, as a thoughtful cog in the big machine?

Dean Pomerleau

