
The Edge.org question for 2010 is How is the Internet Changing the Way We Think? The site has lots of interesting answers, including quite a bit of doom and gloom about how we're distracting ourselves to death, penned by smart people like Clay Shirky, Danny Hillis, and Dan Dennett.  But I was particularly intrigued by a couple of passages from the response of evolutionary biologist Richard Dawkins.

Like many of the other respondents, Dawkins has observed a dumbing down of the individual as a result of the lower quality of media we're exposed to, and the information firehose that seems to be preventing us from focusing too hard or too long on anything that requires deep thought.  But at the same time, Dawkins (and other respondents) see room for optimism. As Dawkins put it:

But I want to leave negativity and nay saying and end with some speculative — perhaps more positive — observations. The unplanned worldwide unification that the Web is achieving (a science-fiction enthusiast might discern the embryonic stirrings of a new life form) mirrors the evolution of the nervous system in multicellular animals. …

I am reminded of an insight that comes from Fred Hoyle’s science fiction novel, The Black Cloud. The cloud is a superhuman interstellar traveller, whose ‘nervous system’ consists of units that communicate with each other by radio — orders of magnitude faster than our puttering nerve impulses.

But in what sense is the cloud to be seen as a single individual rather than a society? The answer is that interconnectedness that is sufficiently fast blurs the distinction. A human society would effectively become one individual if we could read each other’s thoughts through direct, high speed, brain-to-brain radio transmission. Something like that may eventually meld the various units that constitute the Internet.

I agree with Dawkins and many of the other experts who responded to the Edge.org question. The jury is still out on just how the Internet is impacting the thinking of individuals. It gives us the opportunity to be aware of so much more than has ever been possible. But whether this will translate into knowledge individuals can employ to lead better lives isn't yet certain.

What is indisputable is that the Internet is affording opportunities for collective intelligence and coordinated action on a scale that has never before been possible.  What is less certain is whether we will find ways to effectively nurture and harness this collective energy.  That seems to be what Web 2.0 is all about.  At the moment, we appear to be going through the equivalent of a Cambrian Explosion of projects & startups trying to capitalize on web-enabled collaborative systems. There are literally hundreds of big and small apps trying to leverage Twitter alone.

As Jeff Stibel (@Stibel) suggests in his new book Wired for Thought (which I highly recommend), we are likely to soon see a period of mass extinction of social media startups as the novelty of this new form of collaboration and communication wears off.  Such a die-off will resemble the massive pruning of connections that occurs in the human brain during childhood to eliminate redundant and unhelpful connections.  The human brain's synaptic down-selection during maturation is astonishing and quite draconian, going from about 10 quadrillion connections in a three year-old to a mere 100 trillion by adulthood, which means that only about 1 in 100 synapses survives (source: Edelman's book Neural Darwinism).
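For those who like to see the arithmetic behind that "1 in 100" figure, here is the back-of-the-envelope calculation, using only the two numbers from Edelman cited above:

```python
# Rough arithmetic behind the "only 1 in 100 synapses survive" claim
synapses_age_three = 10e15   # ~10 quadrillion connections in a three-year-old
synapses_adult = 100e12      # ~100 trillion connections in an adult

survival_fraction = synapses_adult / synapses_age_three
print(f"Fraction of synapses surviving to adulthood: {survival_fraction:.0%}")   # ~1%
print(f"Roughly 1 in {int(1 / survival_fraction)} synapses survives the pruning")
```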

Hopefully the fittest & most useful (as opposed to the most amusing) will survive, and the result will be a set of sites and services that facilitate true collective intelligence and collaborative action to move humanity forward, pulling us overstimulated and distracted individuals along with it.


I’m on a plane to Boston to attend a workshop sponsored by the XPrize Foundation, and hosted by MIT & the renowned futurist Ray Kurzweil, head of Singularity University. The workshop is being held to discuss the merits of creating a $10M XPrize to accelerate the development of direct brain-computer interfaces, or BCI for short.

I’ve always thought BCI was a great idea. I’m not the first to call it the “next step in human evolution.”  I’ve thought, “wouldn’t it be empowering to be able to surf the web with your thoughts?” When this vision of BCI becomes a reality, you would have all the knowledge of the world much closer than your fingertips, since the knowledge would be inside your head and instantly accessible, just like your ‘native’ thoughts and memories.

This optimism has prompted me to engage in research to make BCI a reality, as described in this recent Computer World article, in which I was (mis)quoted as saying that people will be interacting with computers via chips implanted in their brains by the year 2020.  It will likely take longer than that, given the technical hurdles and regulatory approval process, but there don’t appear to be any show-stoppers that would prevent it from happening within the next 20 or 30 years at the outside.

Now, after spending a little over a month as an active user of Twitter, I’ve got serious reservations about the wisdom of this vision of the future. Don’t get me wrong. Twitter is an amazing service. But it’s nothing like surfing the web.  Surfing the web is a ‘pull’ activity. I’m in control of what I’m looking for, and what I’m reading. In contrast, Twitter is a ‘push’ service. Once I’ve set up a list of people to follow (I’ve got about 80 now), posts and links come at me relentlessly.

The perpetual stream of information wouldn’t be a big deal, except for one simple fact – it’s addictive. I’m fascinated by the information contained in a large fraction of the links the people I follow are posting. As a result, I find myself spending hours reading their posts, and when I’m done there is a whole new series of tweets, and the process repeats.  And better semantic filtering technology seemingly wouldn’t help. It’s not that I’m overwhelmed by crap. There is too much interesting stuff to read, and better filters would probably just point out more stuff that I’m missing, making the whole thing even more addictive!

I’m no slacker, and I’m usually a very self-disciplined person. Just ask my friends, family and co-workers. But the stream of information coming at me from Twitter is just so interesting and so distracting, it is hard to focus on other things.  I don’t think I could describe the experience as well as Jim Stogdill (@stogdill) has done in his post Skinner Box? – There’s an App for That. I’ll just quote a couple of passages, but anyone interested in the addictive side of Twitter, and what it can do to your thinking ability, should read it in its entirety.

In describing his conflicted attitude towards Twitter, Jim says:

I can either drink liberally from the fire hose and stimulate my intellect with quick-cutting trends, discoveries, and memes; but struggle to focus. Or I can sign off, deactivate, and opt out. Then focus blissfully and completely on the rapidly aging and increasingly entropic contents of my brain, but maybe finish stuff. Stuff of rapidly declining relevance.

This rings so true with me, and like Jim, I sometimes find it hard to willfully opt out – the stream is just too enticing. As Jim observes, we’re like rats in a Skinner box, self-stimulating with our reward of choice, real-time information.  We’re ‘digital stimulusaholics’. Jim goes on to say:

For the last couple of years I’ve jacked in to this increasing bit rate of downloadable intellectual breadth and I’ve traded away the slow conscious depth of my previous life. And you know what? Now I’m losing my self. I used to be a free standing independent cerebral cortex. My own self. But not any more. Now I’m a dumb node in some uber-net’s basal ganglia. Tweet, twitch, brief repose; repeat. My autonomic nervous system is plugged in, in charge, and interrupt ready while the gray wrinkly stuff is white knuckled from holding on.

What if Twitter is turning us into mindless cogs in a big machine, and the machine turns out to be dumb?  As Jim describes it:

What if the singularity already happened, we are its neurons, and it’s no smarter than a C. elegans worm?

Now imagine just how much more addictive the stream would be if it were coming at us in real-time through a two-way link hooked directly to our brains. Sure, there would be an ‘off’ button – responsible scientists (like me!) will make sure of that.  But would anyone be able to push it?  I’m far from certain.

So I’m stuck in a difficult position.

Tomorrow I’m meeting with Mr. Singularity himself, Ray Kurzweil, and a bunch of other proponents of brain-computer interfaces to brainstorm about offering a big cash XPrize for the first group to make high-bandwidth BCI a reality.  And I’m thinking it may not be such a good idea for the future of humanity.

I expect Kurzweil will argue that merging our slow squishy brains with our machines is the only option we have, and that rather than turning our brains to mush, it will jack them up to run thousands of times more efficiently than they do today, since transistors are so much faster than neurons.

Recent studies have shown that humans aren’t very good at multi-tasking, and paradoxically, people who multi-task the most are worse at it than people who usually focus on one thing and only occasionally multi-task.  So much for the learning / brain-plasticity argument that ‘we’ll adapt’.

Perhaps our brains could be reconfigured to be better at multi-tasking if augmented with silicon?  Perhaps with a BCI, we could be reading an article and talking to our spouse at the same time. How weird would that be?  And with such a significant change in my cognition, would I still feel like me?  Would it feel like there were more than one of me?  Talk about schizophrenia!

Call me a conservative, but I know enough about the brain and human psychology to realize that it maintains its hold on reality by a rather tenuous rope, carefully woven from many strands over millennia by evolution. That rope is bound to get seriously frayed if we try to jack up our neural wiring to run many times faster, or to be truly multi-threaded in the time frame Kurzweil is talking about for the singularity, i.e. 2030 to 2050.

But on the other hand, one might conclude we’re damned if we do and damned if we don’t.  Whether we like it or not, things aren’t slowing down. The amount of information in the stream is doubling every year. If instead of jacking in with BCI, we take the conservative route and leave our brains alone, the Twitter experience shows we’re likely to be sucked into the increasingly voluminous and addictive flood of information, left with only our meager cognitive skill set with which to cope with the torrent.  I’m afraid our native, relatively feeble minds may not stand a chance against the selfish memes lurking in the information stream.

Sigh. Maybe I’m over-reacting…  If I don’t chicken out, I will try to bring up these concerns during the BCI XPrize discussions starting tomorrow.  I may even tweet about it. The official hashtag for the workshop is #bcixprize. Just click the link to follow along – it should be fascinating…

Updates:

  1. Here is another interesting perspective by Todd Geist on what it might be like to be a small part of a global information network, like the organisms on Pandora in the movie Avatar.  As I pointed out in my comment to Todd’s post, the difference between Pandora’s creatures and humans is that they had millions of years of evolution to cope with the direct mind-to-mind linkages, while it’s happening to us in the course of at most a few generations.
  2. Here is a skeptical perspective on the whole idea of the singularity.

So far, social media seems to have a lot of roar, but very little bite when it comes to facilitating social change.  Users of services like Twitter and Facebook seem more interested (sometimes compulsively) in entertainment, ‘branding’ & connecting with friends than in initiating positive social change. The always-insightful Venessa Miemis (@venessamiemis) hit the nail on the head in the comments to her blog post What is Social Media? [the 2010 edition] when she said:

Does all this online talking matter if nothing comes of it in the real world?

Neal Gorenflo (@gorenflo) elaborates on the potential pitfalls of conversation:

Connecting and conversing is necessary, but  again, the danger is that we get stuck in conversation. There is such a thing as being too connected. We have cognitive and time limits. Web 2.0 can overload us with messages, shrink attention spans, absorb our time, erode focus, and thus disrupt our ability as citizens to find common ground and take action together. It’s possible that through Web 2.0 we may be, as in the title of cultural critic Neil Postman’s influential book, amusing ourselves to death.

Venessa goes on to ask the big question:

How do we make something happen? What are small things we can start doing to get the hang of real coordination, collaboration, and action?

I’m all for starting with something small but nonetheless tangible – to give us something to build on and learn from.  Why not shoot first, and aim later?  The worst that can happen is we fail fast and learn from our mistakes.

With that goal in mind, I’m fascinated by an initiative by my Carnegie Mellon University colleague Priya Narasimhan (@priyacmu) to use crowdsourcing and social media to help locate, assess & repair potholes around Pittsburgh [see news story w/ video].

Pittsburghers are given three options for reporting potholes – dial 311 on their mobile phone, log it at the website pittsburghpothole.com, or best of all, report it using a free iPhone app called iBurgh.

The iBurgh app is cool because it is so easy to use. Simply snap a photo of a pothole with your iPhone. The image is automatically geotagged with its location and sent to the city’s public works department. Once three pictures of the same pothole are logged, the city promises to repair it within five days.  Granted, it’s not an instantaneous response, but we’ve got a lot of potholes in Pittsburgh!  The tool can also be used to report issues like needed snow removal – a big problem around here this time of year…
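The server-side logic is simple enough to sketch. Here is a minimal, purely hypothetical illustration of the “three reports of the same pothole triggers a repair order” rule – the class, function names, and the 15-meter clustering radius are my own inventions for illustration, not the actual iBurgh implementation:

```python
import math

SAME_POTHOLE_RADIUS_M = 15   # hypothetical: reports within 15 m count as the same pothole
REPORTS_TO_TRIGGER = 3       # the city promises a repair after three reports

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate distance in meters between two geotags (fine for short distances)."""
    dlat = (lat2 - lat1) * 111_000
    dlon = (lon2 - lon1) * 111_000 * math.cos(math.radians(lat1))
    return math.hypot(dlat, dlon)

class PotholeTracker:
    def __init__(self):
        self.clusters = []   # each cluster: {"lat", "lon", "count"}

    def report(self, lat, lon):
        """Log a geotagged photo report; return True if a repair order should be issued."""
        for c in self.clusters:
            if distance_m(lat, lon, c["lat"], c["lon"]) <= SAME_POTHOLE_RADIUS_M:
                c["count"] += 1
                return c["count"] >= REPORTS_TO_TRIGGER
        self.clusters.append({"lat": lat, "lon": lon, "count": 1})
        return False
```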

Pittsburgh City Council member Bill Peduto said the program makes Pittsburgh the nation’s first large city to implement a government-integrated iPhone app.  He goes on to say:

“This type of technology that merges social media with democracy is going to boom within the next year.”

This is exciting for me partly because it is being done by a friend.  But more importantly, it illustrates something we saw emerging with the DARPA Red Balloon Challenge which might be called crowdsensing – using a distributed network of tech-enabled individuals to track and report on significant (and sometimes not-so-significant) events happening in their world.

Another nice example is the Twitter Earthquake Detection Program, which encourages people to report when the earth moves via Twitter or on a dedicated “Did You Feel It?” website.

I’m hopeful an even bigger and better example will happen soon in the form of a regime change in Iran, thanks in part to Twitter. As I observed recently, Twitter has given the citizens of Iran a way to tell the story of their quest for freedom to the world in real-time and in a way that engages public interest, at a time when traditional media channels have been locked out by their oppressive government.  I wish them the best of luck, and will be tracking the events on Twitter as they unfold.  When (not if) they succeed, it will be an important milestone for the emerging Global Brain.

Until then, I’m happy to start small.  Excuse me while I go report a few potholes…

We may have just witnessed an important milestone in the awakening of the web.

While this point may be controversial, I contend that future exponential growth of the digital economy will eventually require getting humans out of the loop.  If computing power continues to double every 18 months in accordance with Moore’s Law, utilizing all those cycles will eventually require computers to start talking directly to other computers, without the goal of assisting, informing or entertaining human beings.

Why? Because human population is virtually flat and limits to human cognition mean there is only so much digital content people can effectively digest.

According to a recent University of California study, the average US citizen consumes an estimated 34 GB of data daily, mostly in the form of TV & video games. Collectively, American households consumed 3.6 zettabytes of information of all kinds in 2008, the researchers estimated. While this seems like a lot and is likely to continue growing for some time as video resolution gets higher, our appetite for bytes will inevitably flatten out, particularly if we continue to get more of our information through mobile devices.
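As a quick sanity check on those two figures, the per-person and nationwide numbers are roughly consistent with each other (a back-of-the-envelope sketch; the ~300 million population figure is my assumption, not from the study):

```python
# Does 34 GB/person/day roughly add up to ~3.6 ZB per year nationwide?
people = 300e6        # assumed US population, ~300 million
gb_per_day = 34       # per-person daily consumption from the UC study
bytes_per_gb = 1e9

total_bytes_per_year = people * gb_per_day * bytes_per_gb * 365
zettabytes = total_bytes_per_year / 1e21
print(f"~{zettabytes:.1f} ZB per year")   # ~3.7 ZB, in line with the study's 3.6 ZB estimate
```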

If machine-to-machine communication will eventually need to pick up the slack in demand for the ever increasing bandwidth, how and when will it happen and what form will it take?

To some degree it is happening already. For quite some time there has been a shaky alliance between news aggregators (e.g. Google News) and machine-driven decision tools, best exemplified by automated financial trading systems.  The widely reported United Airlines incident last year showed just how risky this combination can be. For anyone who missed it, United Airlines stock plummeted from $12 to $3, losing 75% of its value over the course of a few minutes on Sept. 8th 2008, and in the process wiped out over $1B in shareholder value.

Insider trading?  Nope.

It turns out the trigger was a small mistake by a Florida newspaper that accidentally reran a story from 2002 about UAL’s bankruptcy without a date, making it appear to be fresh news.  Within a minute, the automated scanning system of Google News, which visits more than 7,500 news sites every 15 minutes, found the story and, thinking it was new, added it to its breaking news stream.  An employee at Bloomberg financial news saw the story and rebroadcast it to thousands of readers, many of whom follow United Airlines.  Within minutes United’s stock tanked, largely as a result of automated trading programs that saw the price dropping and sold the stock to prevent additional losses.

Once the mistake was cleared up and trading resumed, UAL’s stock recovered most of the $1B it had lost, but the incident was an important lesson for the burgeoning industry of automated news scanning and financial trading. What went wrong during the United Airlines incident was a combination of human error and runaway automation that both propagated and acted upon the mistake.

You could try to blame the human element of the equation, since without the human error of resurrecting an out-of-date story the incident would never have happened. But Scott Moore, head of Yahoo News, hit the nail on the head when he said:

This is what happens when everything goes on autopilot and there are no human controls in place or those controls fail.

Now, in what could be an important (but potentially risky) step further, we are beginning to see computers acting as both the producers and consumers of content, without a human in the loop.  This approach is called computational journalism, and it consists of content generated by computers for the express purpose of consumption by other computers.

Academics at Georgia Tech and Duke University have been speculating about computational journalism for some time. But now, the folks at Thomson Reuters, the world’s largest news agency, have made the ideas a reality with a new service they call NewsScope. A recent Wired article has a good description of NewsScope:

NewsScope is a machine-readable news service designed for financial institutions that make their money from automated, event-driven, trading. Triggered by signals detected by algorithms within vast mountains of real-time data, trading of this kind now accounts for a significant proportion of turnover in the world’s financial centres.

Reuters’ algorithms parse news stories. Then they assign “sentiment scores” to words and phrases. The company argues that its systems are able to do this “faster and more consistently than human operators”.

Millisecond by millisecond, the aim is to calculate “prevailing sentiment” surrounding specific companies, sectors, indices and markets. Untouched by human hand, these measurements of sentiment feed into the pools of raw data that trigger trading strategies.

One can easily imagine that with machines deciding what events are significant and what they mean, and other machines using that information to make important decisions, we have the makings of an information ecosystem that is free of human input or supervision. A weather report suggesting a hurricane may be heading towards Central America could be interpreted by the automated news scanners as a risk to the coffee crop, causing automated commodity trading programs to bid up coffee futures. Machines at coffee-producing companies could see the price jump and trigger the release of stockpiled coffee beans onto the market, all without a human hand in the whole process. Machines will be making predictions and acting on them in what amounts to a fully autonomous economy.
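To make the scenario concrete, here is a minimal, hypothetical sketch of such a closed machine-to-machine loop. The word lists, scoring scheme and thresholds are invented for illustration; this is emphatically not Reuters’ actual NewsScope algorithm, just the shape of the idea:

```python
# Hypothetical illustration of a machine-only news-to-trade loop.
NEGATIVE_WORDS = {"bankruptcy", "hurricane", "shortage", "lawsuit"}
POSITIVE_WORDS = {"record", "surge", "breakthrough", "approval"}

def sentiment_score(headline: str) -> int:
    """Crude sentiment score: +1 per positive word, -1 per negative word."""
    words = headline.lower().split()
    return sum(w in POSITIVE_WORDS for w in words) - sum(w in NEGATIVE_WORDS for w in words)

def trading_signal(headline: str, ticker: str) -> str:
    """Turn a machine-generated headline directly into a machine-executed order."""
    score = sentiment_score(headline)
    if score <= -1:
        return f"SELL {ticker}"   # e.g. the stale UAL bankruptcy story would land here
    if score >= 1:
        return f"BUY {ticker}"
    return f"HOLD {ticker}"

print(trading_signal("Hurricane threatens coffee crop, shortage feared", "COFFEE"))  # SELL COFFEE
```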

This could be an alternative route to the Global Brain I previously envisioned as the end result of the TweetStream application.  By whichever route we get there (and there are likely others yet to be identified), the emergence of a viable, worldwide, fully-automated information exchange network will represent an historic moment.  It will be the instant our machines no longer depend entirely on humans for their purpose. It will be a critical milestone in the evolution of intelligence on our planet, and a potentially very risky juncture in human history.

The development of NewsScope appears to be an important step in that direction. We live in interesting times.

1/4/09 Update

Thomson Reuters, the developer of NewsScope, today acquired Discovery Logic, a company whose motto is “Turning Data into Knowledge”.  Among its several products is Synapse, designed to help automate the process of technology transfer for government-sponsored healthcare research by the NIH Office of Technology Transfer (OTT).  They describe Synapse as:

An automated system to match high-potential technologies with potential licensees. Using Synapse, OTT now uses a “market-push” approach, locating specific companies that are most likely to license a certain technology and then notifying the companies of the opportunity.

Using the same product, OTT also found it could also successfully perform “market pull,” in which OTT can identify multiple technologies in its inventory in which a company may find interest.

Apparently Reuters isn’t interested in just automating the process of generating and disseminating news, but the process of transferring technology as well.

I’m sitting on the couch at my in-laws’, connected to the global network via my cell phone, and mesmerized by events unfolding in real-time in Iran.  While I sit relaxing with family in the afterglow of Christmas, halfway around the world people like this man:

with rocks in both hands and his cell phone in his mouth, are serving simultaneously as fighters and reporters.  And I’m doing my tiny part, as observer and cheerleader, spreading the word with tweets like this one:

My fascination is as much with the process as with the events themselves. CNN, Reuters and the BBC are relying almost exclusively on unconfirmed posts by ‘citizen reporters’ sharing news, pictures & videos on services like Twitter, Twitpic & YouTube.

We are experiencing the future of news, with the line forever blurred between those who make the news and those who share the news.  For the first time we can experience news anywhere and anytime, as it happens. We are all so much more intimately connected than ever before. Global consciousness is awakening. We live in interesting times.

Read more about Twitter’s critical role in the unfolding drama in Iran, and the potential downsides of using social media to instigate change.

Stowe Boyd and Freddy Snijder have posted an interesting dialog about the streams and the “global sensorium”.  Freddy’s original post, Stowe’s reply, and Freddy’s reply to Stowe, are all worth reading.

I like what both have to say, and the fact that dialogs like this are occurring is a sign that a collective intelligence is already emerging.  But I believe the two have missed several important points.

First, both Boyd & Snijder seem resigned to our current set of individual cognitive capabilities. As a neuroscience researcher, I’m confident that one day advances in our understanding of the brain, and in particular brain-computer interfaces, will endow individuals with new cognitive capabilities.  Virtual telepathy, infallible memory, vision at a distance – all are within the realm of possibility, and could redefine what it means to be human. In fact, I’m participating in a workshop at MIT on January 7-8th sponsored by the XPrize Foundation and Ray Kurzweil’s Singularity University to discuss creating an XPrize competition to turbo-charge progress in brain-computer interfaces. So big advances may be in store for our future…

But for now at least, both Boyd & Snijder are correct in observing that we’re stuck with our rather limited individual cognitive capabilities. Given these cognitive limitations, there is a serious question about just how individuals can best cope with the exponential growth of both information and societal complexity.  Freddy Snijder poses it this way:

The question remains how this global sensorium can be effectively used by all the individuals that make it up.

A minor point –  ALL individuals are unlikely to ever effectively use any technology or service. There will always be those who resist or are denied access to new technology. A big question is how to manage this digital divide.

But more fundamentally, I don’t believe any technology can possibly exist that will restore the degree of individual understanding and agency that it seems we crave as human beings.  Let’s face it, the global knowledge base and real-time information stream are growing at such a rapid pace that even with the best collaborative filtering technology, it is inevitable that individuals will continue to know more and more about less and less. At some point, it seems inevitable that we will know almost everything about next to nothing!

The unavoidable reality of information overload doesn’t sit well with people, particularly folks who pride themselves on keeping up with the latest in information technology. We are programmed by evolution with the drive to understand and control all aspects of our environment. As a result, there are many hot start-ups today promising to tame the torrent of information and return each of us to an idyllic state of information mastery.

I’d love it if this were the case – I too am an information junkie and have always hoped to find a way to change the world through personal engagement.  But my gut tells me that global society is quickly becoming far too complex for any single individual to understand, to say nothing of influence, the global sweep of human events.

If the organization of biological brains is any indication (and I’m betting it is), the Global Mind will be an emergent phenomenon, and its workings will likely be incomprehensible to individual humans, just as individual neurons are oblivious to the thoughts to which their activity contributes. Like the neurons in our brain, individual people participating in the functioning of the global sensorium may see little evidence of the part they are playing, and may not even realize what questions the collective intelligence is working to solve.

The parallel growth of collective intelligence and decrease in individual agency raises fundamental questions that will need to be answered if humanity is to survive and prosper:

  • Can we overcome the egocentric perspective that drives each of us to want to stand out and get ahead, often at the expense of our neighbor?
  • Can we transcend our self-centered tendencies and accept playing a small, largely unsung role in the workings of the whole?

In short, can we find a way to leverage technology to allow individuals to coordinate their modest local activities (both on-line & off) into a global, decentralized intelligence while remaining engaged in the process, despite realizing that their individual contributions will inevitably be tiny in the grand scheme of things?

The path is far from clear, but I remain hopeful.


Humans, like many species, are highly social creatures. The process of natural selection has instilled in us a drive to connect with other people. Those ancestors who were well connected got support from their community and prospered, allowing them to pass their gregariousness down to their offspring.


With the advent of modern communication technology we’ve developed more and more effective ways to ‘scratch the itch’ to connect with others at greater speeds and distances. Social networks like Facebook and Twitter are the latest in the line of personal connectivity technology.

While these services can provide much value by allowing people to link with friends, ideas and events in new ways, they are not without a dark side.  As their popularity has mushroomed, it has become increasingly apparent that these services can be addictive, and this tendency is especially prevalent among the youth of today, for whom fitting into the social fabric has always seemed critically important.

This New York Times article, “Driven to Distraction, Some Teenagers Unfriend Facebook”, documents some of the troubles teenagers are having with Facebook addiction and managing their compulsion to connect with their social network. Psychology Professor Walter Mischel of Columbia University says:

Facebook is the marshmallow for these teenagers

referring to the treat young kids found irresistible in his now famous series of experiments probing how young children cope with, and often succumb to, temptation.

Professor Mischel  found that kids who could not delay gratification, but instead snatched the marshmallows at the earliest opportunity, turned out to be under-achievers as adults.

So the big question seems to be:

Is the 24/7 connected culture we find ourselves embedded within today serving us, or is it driving us (and our kids) to distraction?

My guess – probably both.

One thing seems clear – Driven by our compulsion to connect, we humans are beginning to serve the global network at least as much as the global network is serving us. It remains to be seen whether the emerging collective intelligence will help steer humanity towards healthy and creative forms of social networking, or undermine the well-being of the very nodes that form it…

Update 2/03/2010: A new study in the journal Psychopathology found a strong correlation between excessive internet activity (especially at social media sites) and depression. The study authors say:

“Our research indicates that excessive internet use is associated with depression, but what we don’t know is which comes first — are depressed people drawn to the internet or does the internet cause depression?

“What is clear, is that for a small subset of people, excessive use of the internet could be a warning signal for depressive tendencies.”

My hobby is analyzing real-time social media from the perspective of neuroscience. I’m fascinated by the analogy between Twitter and the brain. The recent discussions about the etiquette of ‘thank you’ posts on Twitter got me thinking – how do neurons in the brain handle thank yous?

At first it seems like a silly question. Upstream neurons don’t thank downstream neurons for passing on the message they sent. A pre-synaptic neuron sends neurotransmitters to the post-synaptic neuron and that would seem like the end of it. Right? Or is it?

In fact, if the post-synaptic neuron fires soon after the pre-synaptic neuron sends it a message, the synapse between the two neurons is strengthened according to the spike-timing-dependent plasticity (STDP) rule I discussed previously.  So while there is no explicit acknowledgment or ‘thank you’ by the pre-synaptic neuron for the equivalent of a retweet by the post-synaptic neuron, the pre-synaptic neuron’s gratitude (to stretch the analogy) manifests itself as a strengthening of the synapse, the equivalent of the ‘social bond’ between the two neurons.
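For readers who haven’t run into STDP before, here is a minimal sketch of the rule in code (the parameter values are illustrative, not taken from any particular paper): if the post-synaptic neuron fires shortly after the pre-synaptic one, the weight goes up; if it fires before, the weight goes down, with the size of the change decaying as the spikes get further apart in time.

```python
import math

def stdp_weight_change(delta_t_ms: float, a_plus=0.01, a_minus=0.012, tau_ms=20.0) -> float:
    """Spike-timing-dependent plasticity (STDP) weight update.

    delta_t_ms = t_post - t_pre. Positive (post fires after pre) strengthens the
    synapse; negative (post fires before pre) weakens it. The magnitude decays
    exponentially with the spike-time difference.
    """
    if delta_t_ms > 0:
        return a_plus * math.exp(-delta_t_ms / tau_ms)    # potentiation: the neuron's 'thank you'
    else:
        return -a_minus * math.exp(delta_t_ms / tau_ms)   # depression

weight = 0.5
weight += stdp_weight_change(+5.0)    # post fired 5 ms after pre -> synapse strengthened
weight += stdp_weight_change(-15.0)   # post fired 15 ms before pre -> synapse weakened
print(round(weight, 4))
```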

The equivalent response on Twitter would be if I started posting more content that I think will be appreciated by someone who has shown a tendency to retweet my posts in the past – in other words, sending more good stuff their way.  In a sense, the neurons are just ‘doing their job’ of passing on the best information they can find to those other neurons they think will listen, rather than explicitly greasing the skids of communication by exchanging extra messages expressing gratitude.

Perhaps this disconnect between how effective communication appears to happen in the brain (without thank yous) and how messages are passed on Twitter today is part of my ambivalence about ‘thx for the RT!’ posts.

But as we see from the wild-west nature of real-time social media today, and last night’s successful attack that took down Twitter, the Global Brain is still in the early stages of development.  Maybe before it got so complex and sophisticated, the brain was more like Twitter?

This is purely speculation, but I strongly suspect that when brains were more primitive, and proto-neurons in those primitive brains were trying to figure out whether or not it was worth talking to their neighbors, there must have been something that was ‘in it for them’ to encourage message passing.

Perhaps, just as vampire bats share blood to build social ties, early neurons might have shared chemicals to help nourish each other and build supportive networks. The survival value of this ‘cellular food’ might have encouraged the initial exchanges, which were co-opted later by natural selection for communication purposes as multi-cellular organisms evolved.

But a more likely possibility seems to be that proto-synapses between proto-neurons served a communication function from the start. A rather dense 2009 paper in Nature Neuroscience by neuroscientists at the University of Cambridge on the evolution of the synapse seems to support this idea:

“Strikingly, quintessential players that mediate changes dependant on neuronal activity during synaptic plasticity are also essential components of the yeast’s response to environmental changes.”

In other words, these scientists appear to be suggesting that early semi-independent single-cell organisms may have developed proto-synapses to communicate information about their shared environment, like the presence of food or toxins nearby. Perhaps through communication, these early colonies of cells might have reacted in concert, and thereby coped more effectively with threats or opportunities presented by their shared environment.  Such ‘communication for a common cause’ would have had survival advantage for the cell colony, encouraging its elaboration through the process of natural selection.  Anthropomorphically speaking, the cells would have been saying ‘if we listen to each other, we can all get ahead.’

All this points to the content of the message itself as the carrier of value in these early colonies of cells, without need for an explicit exchange of ‘thank you’ messages. Listening and being heard were both of intrinsic value to individual primitive cells, and to the colony as a whole.

So can we get away with such a ‘thankless’ model on Twitter, or is a virtual pat on the back in return for digital kindness, in the form of a thank you post for retweets, still necessary to grease the skids of communication in the rapidly evolving global brain?

In the last post, I introduced the idea of the TweetStream app, an interface to Twitter that could help tame (and monetize) the torrent of information that floods the blogosphere every hour.  This post talks about the implications of such an app, both for the individual and for the emergence of global consciousness.

So What’s In It for Me?

So what makes TweetStream a good value for the user?  Why would they want to use it rather than one of the other Twitter clients like Seesmic or TweetDeck? Two reasons:

  1. Quality Information – The key distinction is personalized content delivery via collaborative filtering.  The value proposition for the user is having a single, trusted place to go for up-to-the-minute news and information about what’s happening and being said across the global network that will interest them.
  2. Money – What if it is free to read anyone else’s content, and people who are ‘thought leaders’ get paid for creating and posting content that lots of people want to read?  Answer – people will strive to be the first to post the most interesting information, turning Twitter from a nice way to spend a little spare time into an indispensable resource.  The competition to be a useful source of information in order to earn money and influence people will drive users to specialize in particular niches (e.g. mobile gadgets, iPhone apps), and seek out the latest & most interesting information to share with their followers.

Who Pays?

Where would the money come from to pay content generators? Ads – of course. A viable model could be the way CoolIris advertises now – with targeted ads interspersed among regular content. In the case of TweetStream, it would take the form of ads occasionally placed between the tweets in a user’s input stream. Unlike Ad.ly, where ads are associated with a particular stream (potentially denigrating the good name of the poster whose message they are attached to or appear to come from), in this case the ads are simply inserted into the user’s input stream, in much the same way Google AdSense places ads alongside the content on a page.  Everyone realizes the person who created the content on the page didn’t select the specific ad being shown.

Occasional, easy to ignore targeted advertisements are the price we are willing to pay for the multitude of free Google products.

What Does This Have to Do with the Brain?

From the perspective of a neuroscientist, it is striking how the patterns of connectivity and the flow of information in on-line social networks are rapidly evolving to mirror the structure and function of an actual brain.

Twitter in particular exhibits many of the characteristics of a real network of neurons, and the TweetStream idea described above simply takes the next logical step to extend the parallel.  In the TweetStream model, individual users are like the neurons in the Global Brain. Like real neurons, they collect information relevant to their interests/specialty via their personalized input stream. They assimilate the information, discover new connections among the stories they receive, and then propagate it downstream by putting what they find most interesting on their output stream for followers to see and react to.  This is a very close parallel to the ‘integrate and fire’ model of neurons.
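For readers unfamiliar with the ‘integrate and fire’ model, here is a minimal leaky integrate-and-fire neuron sketch (the threshold and leak values are illustrative only): incoming ‘tweets’ accumulate as membrane potential, and once the total crosses a threshold the neuron ‘retweets’ by firing a spike of its own.

```python
class LeakyIntegrateAndFireNeuron:
    """Minimal leaky integrate-and-fire neuron (illustrative parameters)."""

    def __init__(self, threshold=1.0, leak=0.9, reset=0.0):
        self.potential = 0.0
        self.threshold = threshold   # potential needed to fire
        self.leak = leak             # fraction of potential retained each time step
        self.reset = reset           # potential immediately after firing

    def step(self, weighted_input: float) -> bool:
        """Integrate one time step of input; return True if the neuron fires."""
        self.potential = self.potential * self.leak + weighted_input
        if self.potential >= self.threshold:
            self.potential = self.reset
            return True              # 'fire' -- the Twitter analogue of posting/retweeting
        return False

neuron = LeakyIntegrateAndFireNeuron()
for t, stimulus in enumerate([0.3, 0.4, 0.5, 0.1]):
    if neuron.step(stimulus):
        print(f"fired at step {t}")   # fires once the accumulated input crosses threshold
```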

What’s It Mean for the World?

It means increasing the flow of information and the efficiency with which ideas are exchanged. With increased idea exchange comes greater innovation, since according to experts on innovation:

“What the innovators have in common is that they can put together ideas and information in unique combinations that nobody else has quite put together before.”

The personalized filtering of TweetStream means every user will see a customized input stream, with previously unrelated ideas and events juxtaposed in a way that will spur innovation.

We’re seeing it already on Twitter to some degree. A plan to charge for ‘premium’ Twitter accounts in Japan, with access to special content, was quickly retracted, perhaps as a result of backlash among Twitter users over the idea.  The premium Twitter account story illustrates an important trend. Right now, much of the energy in the social network world is directed towards influencing the medium itself.  It is as if the global information network is in the process of development, and it is using the information exchange infrastructure available now to collaboratively design the next generation of social media.  The phenomenon of social media is lifting itself up by its bootstraps – people are using the current social media tools to design the next generation of social media tools.

But there are signs that this is changing – social media tools are turning outward to influence a broader range of human endeavors.  For example, companies are starting to mine their customers for new product ideas via Twitter, as indicated in this article about the contribution of Twitter fans to the design of the game Modern Warfare 2:

“During development, if we are sitting in a design meeting and we are arguing about something, no matter what it is, I can just turn to what is now 60,000 people and post the same question,” Bowling told game developer news site Develop Online. “‘Do we think players will like this?’ well why don’t we ask 60,000 of them and get a good representation of what we think they may like?”

But it was the next statement that might cause gamers participating in social networking to rejoice. Bowling told site that Twitter was “fantastic throughout development” and he “would recommend many, many more people adapted that into their design schedule.”

This example seems like just the beginning. I predict that TweetStream, or something like it, will come to serve as a dominant force shaping global thought and behavior, just as Google has come to dominate search.  The distinction between Google and the Global Brain that will emerge from TweetStream is coordination.  Google does a terrific job of serving the interests of individual, disconnected users.  If I personally want to know the capital of Hungary, or find the best price on an 8GB iPod, Google is an amazing resource. But my interactions with Google stop with me.

In contrast, through real-time collaborative information filtering and idea exchange, TweetStream will usher in a form of large scale coordination of people (and their digital agents) across geographic boundaries the likes of which the world has never seen. What may emerge is a Global Brain. It remains to be seen just what impact this emergence will have…

A few days ago, in a post titled Twitter & The Global Brain, I blogged about the parallels between Twitter and a giant neural network. Now I want to flesh out that model and make it a little more tangible by describing an app that I call TweetStream, which could potentially solve several of Twitter’s current problems:

  1. Taming the torrent of information that blasts current Twitter users.
  2. Monetizing content and rewarding participation in the Twitter experience.
  3. Moving the global system towards a coordinated, efficient information exchange framework through which global consciousness can arise and be exercised.

The TweetStream App

Imagine an app that provides not only the chronological list of friends’ updates currently provided by Seesmic and TweetDeck, but also what I’ll call a “personalized tweet stream”. My personalized tweet stream would be composed of two parts, presented by the app in two separate columns – my “Input” stream and my “Output” stream.

My Input stream would show me tweets extracted from the global Twitter stream that an algorithm (described below) predicts will be of most interest to me.  My input stream is theoretically arbitrarily long, but tweets would be sorted so that those towards the top of the list are the ones it expects me to be most interested in reading.

My Output stream would represent the list of tweets that my followers would see if they choose to view what I find most interesting in the global Twitter stream at the moment – although such a “direct view” of an individual’s tweet stream will be rare, for reasons given below.

If I’m not on-line and actively managing my output stream, my input stream will be copied directly to my output stream.  Anyone following me would therefore see what the algorithm thinks I would consider the most interesting content in the Twitter stream at the moment.  TweetStream would serve as my digital agent, offering up to the world a source of information filtered through a ‘virtual me’ and therefore tailored to my interests.  Since I’m a fan of Steelers football and mobile gadgets, the output ‘DeanPomerleau Stream’ that others might follow would likely contain a mix of stories about the latest Steelers news, information about the latest in mobile technology, and a few other stories of broader interest that I find interesting.

When I’m on-line and actively engaged in managing my personal tweet stream, my interaction with the TweetStream app would entail reading my input stream, surfing the web to find interesting content, or generating new content myself (e.g. blogging) and then posting to my output stream the stuff I consider most interesting.

My output stream is analogous to the sequence of tweets and retweets that people generate now on Twitter, except that rather than a single most recent post and a long tail of past posts, I have a set of 25-100 posts that I (with the assistance of my digital agent) consider the most interesting content currently flowing in the Twitter stream.
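To make the two-column model concrete, here is a minimal sketch of the data structures a TweetStream client might keep per user. The names are hypothetical; nothing here is an actual Twitter or TweetStream API.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Tweet:
    tweet_id: str
    author: str
    text: str
    score: float = 0.0          # rank assigned by the personalization algorithm

@dataclass
class UserStreams:
    user: str
    input_stream: List[Tweet] = field(default_factory=list)    # what the algorithm thinks I'll enjoy, best first
    output_stream: List[Tweet] = field(default_factory=list)   # the 25-100 tweets I (or my 'virtual me') endorse

    def mirror_input_to_output(self, max_posts: int = 100) -> None:
        """When I'm offline, my digital agent simply republishes my best-ranked input."""
        self.output_stream = self.input_stream[:max_posts]
```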

The TweetStream Rank Algorithm

Central to the success of TweetStream will be its ranking algorithm that understands what type of content interests me. TweetStream will use this knowledge to extract and display a manageable amount of personalized content for me to enjoy out of the torrent of information flowing through the global twitter stream.

TweetStream’s personalized content ranking algorithm will learn my preferences by observing my viewing habits. There will be no need to explicitly search out interesting people to follow unless I want to.  When I sign up, I’ll select a few topics that interest me from a list (e.g. ‘Steelers football’ & ‘mobile gadgets’).  These will be used to seed my initial ranking algorithm.  Selecting these topics will automatically connect my input stream to the output streams of users who are interested in one or both of those topics.

Each of the users I’m connected to will have a weight associated with them, which reflects how closely our interests match.  Each time I read (and perhaps rank) a tweet from someone, the weight they are given by my ranking algorithm is increased, so content they generate in the future will be more likely to show up near the top of my incoming stream.  In addition, my ranking algorithm will increase the weight given to other users who have also shown interest in the tweet that I enjoyed (by reading it themselves), drawing me closer to others who may be passive content viewers (rather than generators) but who share my interests.  The closer someone is to the source of the original message that interests me (either in time or retweet depth), the more the algorithm will increase their weight – embodying the idea that I’m likely to enjoy information from the ‘thought leader’ on a topic more than retweets by one of his many followers.
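A minimal sketch of that weight-update scheme might look like the following. The specific boost amounts, penalties and default weight are made up for illustration; the post doesn’t specify exact values.

```python
from collections import defaultdict

class InterestWeights:
    """Hypothetical per-user weights reflecting how closely another user's interests match mine."""

    def __init__(self, read_boost=0.05, co_reader_boost=0.02, proximity_bonus=0.03, cry_wolf_penalty=0.04):
        self.weights = defaultdict(lambda: 0.1)   # small default weight for newly connected users
        self.read_boost = read_boost
        self.co_reader_boost = co_reader_boost
        self.proximity_bonus = proximity_bonus
        self.cry_wolf_penalty = cry_wolf_penalty

    def on_tweet_read(self, author, co_readers, retweet_depth):
        # I read (and presumably enjoyed) this tweet: boost its author...
        self.weights[author] += self.read_boost
        # ...with a bigger boost the closer the author is to the original source...
        self.weights[author] += self.proximity_bonus / (1 + retweet_depth)
        # ...and nudge up everyone else who also read it, since we evidently share this interest.
        for user in co_readers:
            self.weights[user] += self.co_reader_boost

    def on_tweet_ignored(self, author):
        # 'Crying wolf': content I skipped or rated low weakens the connection.
        self.weights[author] = max(0.0, self.weights[author] - self.cry_wolf_penalty)
```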

In this way, TweetStream will leverage my viewing (and maybe ranking) history to create a list of people for me to follow that is tailored to my interests. The more I use TweetStream, the better it will understand my interests, and the more effective it will be at delivering to my input stream the content I’ll enjoy.  And of course, I’m free to explicitly add or remove people from this automatic following list to personalize it even further.

From Rank to Input Stream

To generate the list of tweets that I see on my input stream, TweetStream will take a weighted sum of the output streams of the people I’m following.

Suppose, for example, several of the people I’m following have the same tweet on their output stream right now, either because they read it and enjoyed it directly, or simply because their automatic ranking algorithm thinks they would enjoy it if they were on-line now.  The ranking algorithm will interpret this convergence of support for a tweet from several people with whom I share interests as an indicator that I too will likely find it worth reading.  So the tweet will be placed high on the list in my input stream.

Alternatively, suppose someone I follow has generated a tweet on their output stream, and they are the only one to have tweeted about it so far.  If they are someone I value highly, and if they have placed a high score on this tweet, the strong endorsement of a single person for whom I have a high affinity will be sufficient to ensure their tweet shows up on my input stream.  But if the person ‘cries wolf’ too often, perhaps by tweeting an ad or simply by tweeting content that doesn’t interest me or that I’ve already seen, my choice not to read their post (or to give it a low rating) will cause their weight to be decreased, so in the future I won’t be as likely to see their content. Instead, their post will have to be endorsed by others I trust if it is to make it into my input stream.

At the opposite end of the spectrum, an important breaking news story (e.g. the death of a leader, or a terrorist attack) that isn’t directly aligned with my previously expressed core interests could nonetheless make it onto my input stream if many users (to each of whom I may be only very weakly connected) are reading about it and retweeting it.
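Putting those pieces together, the rank-to-input-stream step might look roughly like this sketch. Again, the scoring formula is illustrative; the post only specifies that it is a weighted sum over the output streams of the people I follow.

```python
from collections import defaultdict

def build_input_stream(followed_output_streams, weights, max_items=50):
    """Score each candidate tweet as a weighted sum of endorsements from people I follow.

    followed_output_streams: dict mapping user -> list of tweet ids on their output stream
    weights: dict mapping user -> my affinity weight for that user
    """
    scores = defaultdict(float)
    for user, tweets in followed_output_streams.items():
        for tweet_id in tweets:
            # A single strongly weighted endorser, or many weak endorsers, can both
            # push a tweet onto my input stream (the breaking-news case above).
            scores[tweet_id] += weights.get(user, 0.0)
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:max_items]

# Example: a big story endorsed by many weak connections can outrank a niche post.
streams = {"alice": ["steelers_win", "quake_in_chile"],
           "bob":   ["quake_in_chile"],
           "carol": ["quake_in_chile", "new_iphone_rumor"]}
weights = {"alice": 0.6, "bob": 0.1, "carol": 0.1}
print(build_input_stream(streams, weights))   # the quake story scores 0.8 and tops the list
```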

In part 2 – what does this model buy the user, and what does it mean for the emergence of collective intelligence on the web?

Ever feel like you're part of a big machine?

This blog is an exploration of what being part of a collective might mean for each of us as individuals, and for society.

What is it that is struggling to emerge from the convergence of people and technology?

How can each of us play a role, as a thoughtful cog in the big machine?

Dean Pomerleau
@deanpomerleau
