Nagel and others (and not just intelligent design (ID) folks, some of whom think God orchestrates evolution) see the need for some more directed form of evolution to explain the diversity and complexity of life on our planet. Nagel seems to think mind/consciousness, and not God in the traditional conception of the term, might fit the bill. But to me that seems rather extreme, and it goes against the "reductive materialism/naturalism" that has been so successful at explaining how the world works over the last few hundred years.
One wonders if a less drastic solution, one that merely tweaks the mechanism of neo-Darwinian evolution, might be invoked to save the day for materialism/naturalism.
As discussed elsewhere (see this post for details), I've recently been studying epigenetics, where gene expression can be modulated by methylation (among other mechanisms). In methylation, a methyl group can attach to a particular DNA base pair, causing the gene to "wrap up" around a histone, preventing it from being transcribed into RNA and thereby suppressing expression of the protein that the gene codes for. This methylation can be driven by environmental factors; it is quite localized, specific, and repeatable; and it can occur not only in somatic cells but also in germ-line cells (eggs and sperm), and thereby get passed down to several subsequent generations.
While the epigenetic changes can be adaptive both for the organism in which they first occur and for its progeny, they aren't permanent changes to the base-pair sequence of genes, so they aren't heritable variations over thousands or millions of years, like the ones we see across species in the world. So they are "Lamarckian" up to a point, but not in the true sense of the word – giraffe necks could get longer for a generation or two after (hypothetical) epigenetic changes occurred as a result of a giraffe stretching to reach the high leaves on a tree, but eventually the epigenetic changes would "wear off" and subsequent generations would go back to having short necks.
But what if epigenetic changes via methylation not only silenced genes, but also made those silenced genes more prone to mutation? The methylation would not only be a signal that "this gene isn't worth expressing in the current environment"; it would also be signaling "this gene is not very useful in its current form in the current environment, so target it for mutation". With an elevated mutation rate specific to maladaptive genes lasting several generations, new variations should more readily arise in subsequent generations, accelerating experimentation with the parts of the genome where changes would be most likely to be beneficial in a rapidly changing environment.
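To make the intuition concrete, here is a toy simulation (my own illustration, not something taken from the cited study) comparing how quickly a beneficial variant of a hypothetical "maladaptive" gene turns up when mutations are spread uniformly across the genome versus concentrated on methylated genes. The genome size, bias factor, and gene counts are all invented for illustration:

```python
import random

GENOME_SIZE = 1000           # number of genes (arbitrary)
N_SILENCED = 10              # hypothetical methylated, maladaptive genes (target among them)
MUTATIONS_PER_GENERATION = 5
BIAS = 20                    # assumed fold-increase in mutation rate at methylated genes

def p_target_mutates(biased: bool) -> float:
    """Probability that one particular silenced gene mutates in a given generation."""
    target_weight = BIAS if biased else 1
    total_weight = N_SILENCED * (BIAS if biased else 1) + (GENOME_SIZE - N_SILENCED)
    p_per_mutation = target_weight / total_weight
    return 1 - (1 - p_per_mutation) ** MUTATIONS_PER_GENERATION

def average_generations_until_hit(biased: bool, trials: int = 10_000) -> float:
    """Monte Carlo estimate of generations until the target gene first mutates."""
    p = p_target_mutates(biased)
    total = 0
    for _ in range(trials):
        gen = 1
        while random.random() > p:
            gen += 1
        total += gen
    return total / trials

print("uniform mutation rate:      ", average_generations_until_hit(biased=False))
print("methylation-biased mutation:", average_generations_until_hit(biased=True))
```

With these made-up numbers the beneficial mutation shows up roughly an order of magnitude sooner when mutations are biased toward the silenced genes – which is the sense in which targeting could "accelerate experimentation."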
This sort of elevated mutation rate in parts of genes that have been methylated (silenced) is exactly what this study [1] found. To quote the abstract:
Our results … provid[e] the first supporting evidence of mutation rate variation at human methylated CpG sites using the genome-wide single-base resolution methylation data.
It’s not clear that this targeting of random mutations to specific maladaptive genes could result in the type of big changes Nagel and others point to when criticizing neo-Darwinian evolution. But it seems like a way to facilitate a sort of “semi-Intelligent Design”, without an explicit designer, by focusing “random tinkering” with the genome in places where genetic changes could do the most good in the current environment.
———-
[1] BMC Genomics. 2012;13 Suppl 8:S7. doi: 10.1186/1471-2164-13-S8-S7. Epub 2012 Dec 17.
Investigating the relationship of DNA methylation with mutation rate and allele frequency in the human genome.
Abstract
BACKGROUND: DNA methylation, which mainly occurs at CpG dinucleotides, is a dynamic epigenetic regulation mechanism in most eukaryotic genomes. It is already known that methylated CpG dinucleotides can lead to a high rate of C to T mutation at these sites. However, less is known about whether and how the methylation level causes a different mutation rate, especially at the single-base resolution.
RESULTS: In this study, we used genome-wide single-base resolution methylation data to perform a comprehensive analysis of the mutation rate of methylated cytosines from human embryonic stem cell. Through the analysis of the density of single nucleotide polymorphisms, we first confirmed that the mutation rate in methylated CpG sites is greater than that in unmethylated CpG sites. Then, we showed that among methylated CpG sites, the mutation rate is markedly increased in low-intermediately (20-40% methylation level) to intermediately methylated CpG sites (40-60% methylation level) of the human genome. This mutation pattern was observed regardless of DNA strand direction and the sequence coverage over the site on which the methylation level was calculated. Moreover, this highly non-random mutation pattern was found more apparent in intergenic and intronic regions than in promoter regions and CpG islands. Our investigation suggested this pattern appears primarily in autosomes rather than sex chromosomes. Further analysis based on human-chimpanzee divergence confirmed these observations. Finally, we observed a significant correlation between the methylation level and cytosine allele frequency.
PMID: 23281708
This recent blog post on Scientific American by Caleb Scharf suggests that the hypothetical existence of the multiverse makes Fermi’s paradox harder to explain. The author suggests:
If reality is actually composed of a vast, vast number of realities, and if ‘anything’ can, does, and must happen, and happen many, many, times, this presumably has to include the possibility of living things (whatever they’re composed of) skipping between universes willy-nilly.
Therefore, if travel between universes in the multiverse is a possibility, then we should have been visited by aliens from other universes, not just from other planets in our own universe. And yet we see no evidence of other intelligent life, either native or foreign to our universe. Hence the author postulates that perhaps the lack of aliens can be used as evidence against the multiverse theory.
Ironically, what prompted me to do a web search for "multiverse Fermi paradox" and discover this SciAm article was an argument I had been formulating that goes in exactly the opposite direction, namely that the existence of the multiverse might be a natural explanation for the Fermi Paradox.
Here is how the argument goes.
Let's assume that the inflationary multiverse model is correct, and that a huge number of bubble universes are being birthed every moment, creating a (nearly) infinite ensemble of bubble universes, each with very different physical constants and therefore very different laws of physics. If this is the case, the space of possibilities for the laws of physics is vast, and only an infinitesimal fraction of universes will have laws that are conducive to galaxy/planet formation, and therefore to the possibility of intelligent life. But there will be a few, and naturally (per the weak anthropic principle) we'd find ourselves in one such hospitable universe – i.e. one that appears "fine-tuned" for (intelligent) life to emerge. So far this is just the standard argument for explaining fine-tuning via the inflationary multiverse theory.
But perhaps we can push this line of reasoning further by asking just how hospitable we'd expect an "average" universe that contains intelligent life to be – i.e. one that is fine-tuned at least as well as the minimum required for intelligent life to emerge.
Here is a relevant section from the Wikipedia entry on the Anthropic Principle:
Probabilistic predictions of parameter values can be made given:
- a particular multiverse with a "measure", i.e. a well defined "density of universes" (so, for parameter X, one can calculate the prior probability P(X0) dX that X is in the range X0 < X < X0 + dX), and
- an estimate of the number of observers in each universe, N(X) (e.g., this might be taken as proportional to the number of stars in the universe).
The probability of observing value X is then proportional to N(X) P(X). (A more sophisticated analysis is that of Nick Bostrom.)[42] A generic feature of an analysis of this nature is that the expected values of the fundamental physical constants should not be “over-tuned,” i.e. if there is some perfectly tuned predicted value (e.g. zero), the observed value need be no closer to that predicted value than what is required to make life possible. The small but finite value of the cosmological constant can be regarded as a successful prediction in this sense.
Of particular relevance is the sentence about "over-tuning". Let's consider the "prediction" mentioned – that the cosmological constant should be small, but not too small and certainly not exactly zero. There is a range for the cosmological constant within which galaxies and planets can form, and which is therefore compatible with life. Too big a cosmological constant and the matter in the universe would fly apart too quickly for galaxies and planets, and therefore life, to form. Too negative a cosmological constant and the universe would collapse back into a "Big Crunch" before galaxies and planets had time to form and evolve life.
A cosmological constant of zero falls within the range of life-compatible values, but it is just one of a very large (theoretically infinite) number of values within that range. So if the cosmological constant were randomly selected from within the acceptable range, it would be extremely unlikely to be exactly zero, or even very close to zero relative to the size of the full life-compatible range. If it had turned out to be exactly zero when scientists measured it, that would pretty much rule out the inflationary multiverse – it would be too big a coincidence for it to be exactly zero if the explanation for why it is so small (i.e. random selection limited to the life-compatible range of values) affords a vast range of possible (small) values, of which zero is just one. In that case, there would have to be some other explanation – e.g. intelligent design, or a unique solution to the laws of physics that only allows a cosmological constant of exactly zero.
But thankfully for proponents of the inflationary multiverse theory, it turns out the cosmological constant is within the life-compatible range, not exactly zero, and in fact fairly close to the upper bound of the range that allows galaxies, planets, and therefore life, to exist. In other words, the cosmological constant appears tuned, but not over-tuned, to allow for life.
So how does this line of reasoning apply to the Fermi Paradox?
Suppose the scientists are right that there are roughly 30 fundamental constants of physics, like the cosmological constant, that must all fall within relatively narrow ranges for the resulting universe to be compatible with the emergence of life. Since we exist to observe our universe, by definition they all must fall within their respective life-compatible ranges. But here is the crucial point – we wouldn't expect any of them to fall smack dab in the center, or even very close to the center, of its compatible range, per the same argument made above for the cosmological constant: that would be too big a coincidence. Instead, you'd expect each of them to be randomly sampled from within its life-compatible range – with some of them closer to the center "sweet spot" of the range and some of them toward the extremes, where conditions would make it possible, but not optimal, for life to emerge.
If a large number of such samplings were done to create many life-compatible universes (as would naturally happen in a quickly inflating multiverse), only an infinitesimally small fraction of them would have all 30 parameters falling very close to the center of their compatible ranges, resulting in what might be called "Garden of Eden Universes" – super-fecund universes where life evolves very quickly, easily, and prolifically. The vast majority of life-compatible universes would instead be barely compatible with life, because statistically it's very likely that at least a few of their 30 parameters would be like the cosmological constant in our universe and fall relatively close to the upper or lower boundary of their life-compatible ranges. So by this argument, we would expect the vast majority of life-compatible universes to be not super-compatible but just barely compatible with the existence of life, just as the cosmological constant should be tuned, but not over-tuned, to be compatible with life.
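To get a feel for just how rare the "Garden of Eden" combination would be, here is a quick Monte Carlo sketch. The numbers are illustrative assumptions, not measured values: 30 independent parameters, each sampled uniformly from its life-compatible range, with "Garden of Eden" defined (arbitrarily) as every parameter landing in the middle 20% of its range:

```python
import random

N_PARAMS = 30               # assumed number of life-critical constants
SWEET_SPOT_FRACTION = 0.2   # assumed width of the central "sweet spot" in each range
TRIALS = 100_000

eden_count = 0
for _ in range(TRIALS):
    draws = [random.random() for _ in range(N_PARAMS)]   # each range rescaled to [0, 1]
    if all(abs(x - 0.5) < SWEET_SPOT_FRACTION / 2 for x in draws):
        eden_count += 1

# Analytically, the chance that all 30 draws land in the sweet spot is 0.2^30 ~ 1e-21,
# so the Monte Carlo count will almost certainly be zero.
print(f"analytic Garden of Eden fraction: {SWEET_SPOT_FRACTION ** N_PARAMS:.1e}")
print(f"Garden of Eden universes in {TRIALS:,} samples: {eden_count}")
```

Conversely, under the same toy assumptions the chance that at least one of the 30 parameters lands in, say, the outer 10% of its range is 1 − 0.9^30 ≈ 96%, which is the sense in which a "typical" life-compatible universe should be only barely compatible.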
So if the vast majority of life-compatible universes are just barely life-compatible, we would naturally expect to find ourselves in one of those, rather than in one of the very rare Garden of Eden Universes (although see Note 1 for a subtle caveat). In a universe that is just barely life-compatible, you'd expect life to be possible, but extremely rare. Which is exactly what we see – life on Earth, but nowhere else as far as our instruments can tell. Hence this gives a logical explanation for Fermi's paradox: life is extremely rare in our universe because our universe is a typical example drawn from the set of all life-compatible universes, and is therefore just barely compatible with the existence of life.
An analogy for this explanation of the Fermi paradox can be made with human abundance and productivity. There is a set of parameters characterizing life events and circumstances (e.g. gender, race, country of origin, native intelligence, work ethic, education, family circumstances, proclivity for risk taking, creativity, mental and physical health, charismatic personality, supportive spouse, spark of a world-changing idea, etc.) that, if perfectly tuned, lead to an incredibly abundant and productive life, like that of Bill Gates, Mark Zuckerberg, or Elon Musk. But it's only for the extremely rare individual that every one of these parameters lines up perfectly. For the vast majority of people, at least a few of these parameters aren't tuned so well, and they end up with a much less abundant, less productive life. Missing out on even a small number of these important parameters can make a huge difference in the abundance and productivity one enjoys. So the average observer is overwhelmingly more likely to find him/herself someone with low (or moderate) success than someone who is super-successful.
But perhaps we should look at the average dollar, rather than the average person, to make the analogy with observers in universes more accurate (i.e. Bill Gates is equivalent to a Garden of Eden Universe and the dollars in his bank account are equivalent to the observers in that universe). What this explanation for the Fermi Paradox suggests, by analogy, is that given the incredibly large size of the population and the incredible rarity of super-successful individuals like Bill Gates, the average dollar is overwhelmingly more likely to be found in the bank account of an unsuccessful or moderately successful person than in the bank account of a Bill Gates-like super-successful person. Despite Bill Gates' massive wealth, the dollars concentrated in his bank account are swamped by the dollars spread across the vast number of people who each hold relatively few.
This analogy points out that the argument depends on the combined wealth of Bill Gates-like people being much smaller than the combined wealth of all the unsuccessful or moderately successful people in the population – which in fact isn't really the case in the US. According to a recent article in the New York Times, the "richest 1 percent in the United States now own more wealth than the bottom 90 percent". By analogy, there must be MANY more universes with a small number of observers in order to swamp the number of observers summed over the very rare super-hospitable universes, which have many more observers per universe than is typical of the set of life-compatible universes. But given the range of possible sets of values for the ~30 physical parameters that define a universe, and how narrow a range is required for each of them to create even a (seemingly) barely hospitable universe like our own, it's not hard to imagine that the multiverse contains many orders of magnitude more barely hospitable universes than universes which are almost perfectly tuned to support life and therefore contain life in super-abundance.
In this very amusing TED video, Jim Holt expounds on a related theme – he suggests it's much more likely (and less intimidating) that we live in a mediocre universe than in one of the superlative ones (e.g. the best of all possible universes) – think of the responsibility of having to live up to the standards of conduct in the best of all possible universes!
This explanation for the Fermi paradox makes a number of predictions:
- If we carefully analyze the physical constants to determine which range of values for each of them is compatible with life (holding the other constants fixed – see Note 2), we will find at least a few that are like the cosmological constant, i.e. their values fall close to the boundary of the life-compatible range, thereby making life possible, but rare.
- Conversely, if we find that almost all of the physical constants are near the sweet spot of the life-compatible range, or that life is not sensitive to where in the range a constant falls, but we continue to see that life is very rare in our universe, then this is likely not the main explanation for the Fermi paradox.
- We will also observe that not all values within the life-compatible range for a physical parameter are equally good at promoting the emergence of life. Some values within the range will result in more hospitable universes than others, with some sort of distribution (perhaps Gaussian?) centered on the "sweet spot" at the center of the life-compatible range. The distributions for different parameters will likely have different shapes and widths, with some forming very sharp peaks with long tails, and others having a rather wide range of nearly equivalently life-friendly values. Again, this theory predicts that at least a few of the parameters in our universe will have values fairly far out on the tails of the distribution of life-compatible values, making life possible, but rare.
- If, contrary to current evidence, we discover that life is very common in our universe, that should be considered evidence against the inflationary multiverse theory, since by this argument life should be rare in a typical element of the ensemble of universes in an inflationary multiverse.
In this paper, Alan Guth, the originator of the inflationary universe theory, makes an interesting but different argument about how an inflationary universe could explain the Fermi Paradox. He calls it the "Youngness Paradox". Here is his argument, in a nutshell:
If eternal inflation is true, the region of space that is inflating is continuously growing at an exponential rate. In fact, he says his theory predicts that the inflating space increases in volume by a factor of e^(10^37) every second. Now that is pretty darn fast! Since the number of "pocket universes" that condense out of this space every second is proportional to its volume, e^(10^37) times more pocket universes are being born right now than were born one second ago – which means that universes one second younger than ours vastly outnumber universes of our universe's age. If it takes some minimum time for life to evolve, then there should be overwhelmingly more young universes that have just crossed that age threshold and generated their first intelligent life than there are older universes where life has had time to evolve more than once. So we should be overwhelmingly more likely to live in a universe that is just barely old enough to have evolved its first intelligent life – us – but that has not yet had time to evolve another intelligent species. Guth says he doubts this argument because of the measure problem. Here is a paper by Guth on a possible solution to the measure problem that might avoid the Youngness Paradox. My problem with Guth's argument is that our universe appears to be well beyond the minimum time required for life to evolve – in fact life could have evolved millions or even billions of years earlier – so an argument that says our universe shouldn't even be one second older than it needs to be for life to evolve seems very implausible.
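To make the counting in Guth's argument explicit (my own paraphrase, with λ standing for the expansion rate he quotes):

```latex
% Pocket universes nucleate at a rate proportional to the inflating volume
% V(t) = V_0 \, e^{\lambda t}, with \lambda \approx 10^{37}\ \mathrm{s}^{-1}.
% At observation time T, a universe of age \tau was born at t = T - \tau, so
\[
  n(\tau) \;\propto\; e^{\lambda (T - \tau)}
  \qquad\Longrightarrow\qquad
  \frac{n(\tau - 1\,\mathrm{s})}{n(\tau)} \;=\; e^{\lambda \cdot 1\,\mathrm{s}} \;\approx\; e^{10^{37}},
\]
% i.e. universes even one second younger than ours outnumber those of our age by
% a staggering factor, which is the heart of the Youngness Paradox.
```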
In conclusion, if this or any other explanation for the Fermi paradox shows that life is indeed exceedingly rare in our universe, it would mean that we have an awesome responsibility to ensure the rare spark of life that we represent is not extinguished by our own ignorance or carelessness.
——————–
Note 1: There is one caveat to the argument that must be addressed – namely that the number of observers in a Garden of Eden Universe, where lots of different life forms emerge, would be much greater than in any single barely life-compatible universe like our own (where very few life forms exist), increasing the probability that a random observer would find him/herself in a Garden of Eden Universe rather than a barely life-compatible universe.
But if the number of universes in the multiverse is growing as quickly as the inflationary model suggests, the number of barely life-compatible universes should grow at a rate that quickly outstrips the advantage the Garden of Eden Universes have in number of observers – so a random observer would still expect to find him/herself in a barely life-compatible universe. In the extreme, the number of observers a finite-sized Garden of Eden Universe could support would be finite, while the number of barely life-compatible universes would be (nearly) infinite. Of course the number of Garden of Eden Universes would also be (nearly) infinite, but it seems logical to suppose the ratio (# barely life-compatible universes / # Garden of Eden Universes) would be much greater than the ratio (# observers in the average Garden of Eden Universe / # observers in the average barely life-compatible universe). In that case the total number of observers across the multiverse who find themselves in a barely compatible universe (the number of observers per barely compatible universe times the number of barely compatible universes) would be much greater than the total number who find themselves in a Garden of Eden Universe (the number of observers per Garden of Eden Universe times the number of Garden of Eden Universes). So the average observer would be overwhelmingly likely to find him/herself in a barely life-compatible universe.
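Stated a bit more compactly (my own notation): let N_b and N_E be the numbers of barely-compatible and Garden of Eden universes, and n_b and n_E the average numbers of observers in each kind. Then the claim in this note is simply:

```latex
\[
  N_b \, n_b \;\gg\; N_E \, n_E
  \qquad\Longleftrightarrow\qquad
  \frac{N_b}{N_E} \;\gg\; \frac{n_E}{n_b},
\]
% i.e. a typical observer lives in a barely life-compatible universe precisely when
% the ratio of universe counts outstrips the ratio of observers per universe.
```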
Note 2: The ~30 parameters that need to be tuned for life to be possible in a universe need not be independent of each other. For example, a slightly higher cosmological constant (and hence expansion rate) could still support life if the gravitational constant were greater as well, allowing matter to clump into galaxies and planets despite the faster expansion. While the range of life-compatible values for a parameter might shift as a result of these interdependencies, the argument above still holds, namely that if the parameters for a new life-compatible universe are randomly sampled from within these (interdependent) acceptable ranges, at least a few of them will be near the extremes of their respective life-compatible ranges, making life barely possible in the vast majority of such universes.
We may have just witnessed an important milestone in the awakening of the web.
While this point may be controversial, I contend that future exponential growth of the digital economy will eventually require getting humans out of the loop. If computing power continues to double every 18 months in accordance with Moore's Law, utilizing all those cycles will eventually require computers to start talking directly to other computers, without the goal of assisting, informing, or entertaining human beings.
Why? Because human population is virtually flat and limits to human cognition mean there is only so much digital content people can effectively digest.
According to a recent University of California study, the average US citizen consumes an estimated 34 GB of data daily, mostly in the form of TV and video games. Collectively, American households consumed 3.6 zettabytes of information of all kinds in 2008, the researchers estimated. While this seems like a lot, and is likely to continue growing for some time as video resolution gets higher, our appetite for bytes will inevitably flatten out, particularly if we continue to get more of our information through mobile devices.
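As a rough sanity check that those two figures hang together (my own back-of-the-envelope arithmetic, assuming roughly 305 million Americans in 2008 and 10^21 bytes per zettabyte):

```python
# Back-of-the-envelope check: does 34 GB per person per day roughly add up to
# the study's 3.6 zettabytes per year? (The population figure is an assumption.)
bytes_per_person_per_day = 34e9     # 34 GB/day, from the UC study
us_population_2008 = 305e6          # assumed
bytes_per_year = bytes_per_person_per_day * us_population_2008 * 365
print(f"{bytes_per_year / 1e21:.1f} ZB/year")   # ~3.8 ZB, in the same ballpark as 3.6 ZB
```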
If machine-to-machine communication will eventually need to pick up the slack in demand for the ever increasing bandwidth, how and when will it happen and what form will it take?
To some degree it is happening already. For quite some time there has been a shaky alliance between news aggregators (e.g. Google News) and machine-driven decision tools, best exemplified by automated financial trading systems. The widely reported United Airlines incident last year showed just how risky this combination can be. For anyone who missed it, United Airlines stock plummeted from $12 to $3, losing 75% of its value over the course of a few minutes on Sept. 8th 2008, and in the process wiped out over $1B in shareholder value.
Insider trading? Nope.
It turns out the trigger was a small mistake by a Florida newspaper that accidentally reran a story from 2002 about UAL's bankruptcy without a date, making it appear to be fresh news. Within a minute, the automated scanning system of Google News, which visits more than 7,500 news sites every 15 minutes, found the story and, thinking it new, added it to its breaking news stream. An employee at Bloomberg financial news saw the story and rebroadcast it to thousands of readers, many of whom follow United Airlines. Within minutes United's stock tanked, largely as a result of automated trading programs that saw the price dropping and sold the stock to prevent additional losses.
Once the mistake was cleared up and trading resumed, UAL's stock recovered most of the $1B it had lost, but the incident was an important lesson for the burgeoning industry of automated news scanning and financial trading. What went wrong during the United Airlines incident was a combination of human error and runaway automation that both propagated and acted upon the mistake.
You could try to blame the human element of the equation, since without the human error of resurrecting an out-of-date story the incident would never have happened. But Scott Moore, head of Yahoo News, hit the nail on the head when he said:
This is what happens when everything goes on autopilot and there are no human controls in place or those controls fail.
Now, in what could be an important (but potentially risky) step further, we are beginning to see computers acting as both the producers and consumers of content, without a human in the loop. In this case it is called computational journalism, and it consists of content generated by computers for the express purpose of consumption by other computers.
Academics at Georgia Tech and Duke University have been speculating about computational journalism for some time. But now, the folks at Thomson Reuters, the world’s largest news agency, have made the ideas a reality with a new service they call NewsScope. A recent Wired article has a good description of NewsScope:
NewsScope is a machine-readable news service designed for financial institutions that make their money from automated, event-driven, trading. Triggered by signals detected by algorithms within vast mountains of real-time data, trading of this kind now accounts for a significant proportion of turnover in the world’s financial centres.
Reuters’ algorithms parse news stories. Then they assign “sentiment scores” to words and phrases. The company argues that its systems are able to do this “faster and more consistently than human operators”.
Millisecond by millisecond, the aim is to calculate “prevailing sentiment” surrounding specific companies, sectors, indices and markets. Untouched by human hand, these measurements of sentiment feed into the pools of raw data that trigger trading strategies.
One can easily imagine that with machines deciding what events are significant and what they mean, and other machines using that information to make important decisions, we have the makings of an information ecosystem that is free of human input or supervision. A weather report suggesting a hurricane may be heading toward Central America could be interpreted by the automated news scanners as a risk to the coffee crop, causing automated commodity trading programs to bid up coffee futures. Machines at coffee-producing companies could see the price jump and trigger the release of stockpiled coffee beans onto the market, all without a human hand anywhere in the process. Machines will be making predictions and acting on them in what amounts to a fully autonomous economy.
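As a purely hypothetical sketch of the kind of loop described above (none of the function names, word weights, or thresholds below come from Reuters or any real trading system; they are invented only to make the hands-off nature of the loop concrete):

```python
from dataclasses import dataclass

@dataclass
class NewsItem:
    headline: str
    entity: str   # the commodity or ticker the story is judged to be about

# Invented phrase weights; a real system would score language far more carefully.
SENTIMENT_WEIGHTS = {
    "hurricane": -0.8, "bankruptcy": -1.0, "shortage": -0.6,
    "record harvest": +0.7, "surplus": +0.5,
}

def sentiment_score(item: NewsItem) -> float:
    """Crude 'prevailing sentiment' score for a headline."""
    text = item.headline.lower()
    return sum(w for phrase, w in SENTIMENT_WEIGHTS.items() if phrase in text)

def trading_signal(item: NewsItem, threshold: float = 0.5) -> str:
    """Turn sentiment into an order, with no human anywhere in the loop."""
    s = sentiment_score(item)
    if s <= -threshold:
        return f"BUY {item.entity}"    # bad supply news -> expect the price to rise
    if s >= threshold:
        return f"SELL {item.entity}"   # good supply news -> expect the price to fall
    return "HOLD"

story = NewsItem("Hurricane heading toward Central America coffee-growing region",
                 "coffee futures")
print(trading_signal(story))   # -> BUY coffee futures
```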
This could be an alternative route to the Global Brain I previously envisioned as the end result of the TweetStream application. By whichever route we get there (and there are likely others yet to be identified), the emergence of a viable, worldwide, fully automated information exchange network will represent a historic moment. It will be the instant our machines no longer depend entirely on humans for their purpose. It will be a critical milestone in the evolution of intelligence on our planet, and a potentially very risky juncture in human history.
The development of NewsScope appears to be an important step in that direction. We live in interesting times.
1/4/09 Update
Thomson Reuters, the developer of NewsScope, today acquired Discovery Logic, a company whose motto is "Turning Data into Knowledge". Among its several products is Synapse, designed to help automate the process of technology transfer for government-sponsored healthcare research at the NIH Office of Technology Transfer (OTT). They describe Synapse as:
An automated system to match high-potential technologies with potential licensees. Using Synapse, OTT now uses a “market-push” approach, locating specific companies that are most likely to license a certain technology and then notifying the companies of the opportunity.
Using the same product, OTT also found it could successfully perform "market pull," in which OTT can identify multiple technologies in its inventory in which a company may find interest.
Apparently Reuters isn't interested in just automating the generation and dissemination of news, but in automating technology transfer as well.
My hobby is analyzing real-time social media from the perspective of neuroscience. I’m fascinated by the analogy between Twitter and the brain. The recent discussions about the etiquette of ‘thank you’ posts on Twitter got me thinking – how do neurons in the brain handle thank yous?
At first it seems like a silly question. Upstream neurons don’t thank downstream neurons for passing on the message they sent. A pre-synaptic neuron sends neurotransmitters to the post-synaptic neuron and that would seem like the end of it. Right? Or is it?
In fact, if the post-synaptic neuron fires soon after the pre-synaptic neuron sends it a message, the synapse between the two neurons is strengthened, according to the spike-timing-dependent plasticity (STDP) rule I discussed previously. So while there is no explicit acknowledgment or 'thank you' by the pre-synaptic neuron for the equivalent of a retweet by the post-synaptic neuron, the pre-synaptic neuron's gratitude (to stretch the analogy) manifests itself as a strengthening of the synapse – the equivalent of the 'social bond' between the two neurons.
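For the curious, here is a minimal sketch of the standard pair-based STDP update (the textbook exponential form; the constants are illustrative, not measured values), showing how "firing soon after" translates into a stronger synapse:

```python
import math

A_PLUS, A_MINUS = 0.01, 0.012     # learning-rate amplitudes (illustrative)
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants in milliseconds (illustrative)

def stdp_weight_change(t_pre_ms: float, t_post_ms: float) -> float:
    """Pair-based STDP: potentiate if the post-synaptic spike follows the
    pre-synaptic one, depress if it precedes it."""
    dt = t_post_ms - t_pre_ms
    if dt > 0:    # post fired after pre (the "retweet" happened) -> strengthen
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    if dt < 0:    # post fired before pre -> weaken
        return -A_MINUS * math.exp(dt / TAU_MINUS)
    return 0.0

print(stdp_weight_change(t_pre_ms=0.0, t_post_ms=5.0))   # positive: synapse strengthened
print(stdp_weight_change(t_pre_ms=5.0, t_post_ms=0.0))   # negative: synapse weakened
```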
The equivalent response on Twitter would be if I started posting more content that I think will be appreciated by someone who has shown a tendency to retweet my posts in the past – in other words, sending more good stuff their way. In a sense, the neurons are just ‘doing their job’ of passing on the best information they can find to those other neurons they think will listen, rather than explicitly greasing the skids of communication by exchanging extra messages expressing gratitude.
Perhaps this disconnect between how effective communication appears to happen in the brain (without thank yous) and how messages are passed on Twitter today is part of my ambivalence about ‘thx for the RT!’ posts.
But as we see from the wild-west nature of real-time social media today, and last night's successful attack that took down Twitter, the Global Brain is still in the early stages of development. Maybe before it got so complex and sophisticated, the brain was more like Twitter?
This is purely speculation, but I strongly suspect that when brains were more primitive, and proto-neurons in those primitive brains were trying to figure out whether or not it was worth talking to their neighbors, there must have been something that was ‘in it for them’ to encourage message passing.
Perhaps, just as vampire bats share blood to build social ties, early neurons might have shared chemicals to help nourish each other and build supportive networks. The survival value of this 'cellular food' might have encouraged the initial exchanges, which were later co-opted by natural selection for communication purposes as multi-cellular organisms evolved.
But a more likely possibility seems to be that proto-synapses between proto-neurons served a communication function from the start. A rather dense 2009 paper in Nature Neuroscience by neuroscientists at the University of Cambridge on the evolution of the synapse seems to support this idea:
“Strikingly, quintessential players that mediate changes dependent on neuronal activity during synaptic plasticity are also essential components of the yeast’s response to environmental changes.”
In other words, these scientists appear to be suggesting that early, semi-independent single-celled organisms may have developed proto-synapses to communicate information about their shared environment, like the presence of food or toxins nearby. Perhaps through communication, these early colonies of cells might have reacted in concert, and thereby coped more effectively with threats or opportunities presented by their shared environment. Such 'communication for a common cause' would have had survival advantage for the cell colony, encouraging its elaboration through natural selection. Anthropomorphically speaking, the cells would have been saying 'if we listen to each other, we can all get ahead.'
All this points to the content of the message itself as the carrier of value in these early colonies of cells, with no need for an explicit exchange of 'thank you' messages. Listening and being heard were both of intrinsic value to individual primitive cells, and to the colony as a whole.
So can we get away with such a ‘thankless’ model on Twitter, or is a virtual pat on the back in return for digital kindness, in the form of a thank you post for retweets, still necessary to grease the skids of communication in the rapidly evolving global brain?