
The question for 2010 is How is the Internet Changing the Way We Think? The site has lots of interesting answers, including quite a bit of doom and gloom about how we're distracting ourselves to death, penned by smart people like Clay Shirky, Danny Hillis, and Dan Dennett.  But I was particularly intrigued by a couple of passages from the response of evolutionary biologist Richard Dawkins.

Like many of the other respondents, Dawkins has observed a dumbing down of the individual as a result of the lower quality of media we're exposed to, and the information firehose that seems to be preventing us from focusing too hard or too long on anything that requires deep thought.  But at the same time, Dawkins (and other respondents) sees room for optimism. As Dawkins put it:

But I want to leave negativity and nay saying and end with some speculative — perhaps more positive — observations. The unplanned worldwide unification that the Web is achieving (a science-fiction enthusiast might discern the embryonic stirrings of a new life form) mirrors the evolution of the nervous system in multicellular animals. …

I am reminded of an insight that comes from Fred Hoyle’s science fiction novel, The Black Cloud. The cloud is a superhuman interstellar traveller, whose ‘nervous system’ consists of units that communicate with each other by radio — orders of magnitude faster than our puttering nerve impulses.

But in what sense is the cloud to be seen as a single individual rather than a society? The answer is that interconnectedness that is sufficiently fast blurs the distinction. A human society would effectively become one individual if we could read each other’s thoughts through direct, high speed, brain-to-brain radio transmission. Something like that may eventually meld the various units that constitute the Internet.

I agree with Dawkins and many of the other experts who give their opinion to the question. The jury is still out on just how the Internet is impacting the thinking of individuals. It gives us the opportunity to be aware of so much more than has ever been possible. But whether this will translate into knowledge individuals can employ to lead better lives isn’t yet certain.

What is indisputable is that the Internet is affording opportunities for collective intelligence and coordinated action on a scale that has never before been possible.  But it is less certain whether we will find ways to effectively nurture and harness this collective energy.  That seems to be what Web 2.0 is all about.  At the moment, we appear to be going through the equivalent of a Cambrian Explosion of projects & startups trying to capitalize on web-enabled collaborative systems. There are literally hundreds of big and small apps trying to leverage Twitter alone.

As Jeff Stibel (@Stibel) suggests in his new book Wired for Thought (which I highly recommend), we are likely to soon see a period of mass extinction of social media startups as the novelty of this new form of collaboration and communication wears off.  Such a die-off will resemble the massive pruning of connections that occurs in the human brain during childhood to eliminate redundant and unhelpful connections.  The human brain's synaptic down-selection during maturation is astonishing and quite draconian, going from about 10 quadrillion connections in a three year-old to a mere 100 trillion by adulthood, which means that only 1 in 100 synapses survives (source: Edelman's book Neural Darwinism).
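The scale of that pruning is easy to sanity-check with a quick back-of-the-envelope calculation using Edelman's figures:

```python
# Edelman's figures quoted above: ~10 quadrillion synapses at age three,
# pruned down to ~100 trillion in adulthood
synapses_age_three = 10_000_000_000_000_000  # 10 quadrillion
synapses_adult = 100_000_000_000_000         # 100 trillion

survival_rate = synapses_adult / synapses_age_three
print(f"Synapses surviving to adulthood: {survival_rate:.0%}")     # 1%
print(f"That is, about 1 in {round(1 / survival_rate)} survives")  # 1 in 100
```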

Hopefully the fittest & most useful (as opposed to the most amusing) will survive, and the result will be a set of sites and services that will facilitate true collective intelligence and collaborative action to move humanity forward, pulling us overstimulated and distracted individuals along with it.


I’m on a plane to Boston to attend a workshop sponsored by the XPrize Foundation, and hosted by MIT & the renowned futurist Ray Kurzweil, head of Singularity University. The workshop is being held to discuss the merits of creating a $10M XPrize to accelerate the development of direct brain-computer interfaces, or BCI for short.

I’ve always thought BCI was a great idea. I’m not the first to call it the “next step in human evolution”.  I’ve thought, “wouldn’t it be empowering to be able to surf the web with your thoughts?” When this vision of BCI becomes a reality, you would have all the knowledge of the world even closer than your fingertips, since the knowledge would be inside your head, instantly accessible, just like your ‘native’ thoughts and memories.

This optimism has prompted me to engage in research to make BCI a reality, as described in this recent Computer World article, in which I was (mis)quoted as saying that people will be interacting with computers via chips implanted in their brains by the year 2020.  It will likely take longer than that, given the technical hurdles and regulatory approval process, but there don’t appear to be any show-stoppers that would prevent it from happening within the next 20 or 30 years at the outside.

Now, after spending a little over a month as an active user of Twitter, I’ve got serious reservations about the wisdom of this vision of the future. Don’t get me wrong. Twitter is an amazing service. But it’s nothing like surfing the web.  Surfing the web is a ‘pull’ activity. I’m in control of what I’m looking for, and what I’m reading. In contrast, Twitter is a ‘push’ service. Once I’ve set up a list of people to follow (I’ve got about 80 now), posts and links come at me relentlessly.

The perpetual stream of information wouldn’t be a big deal, except for one simple fact – it’s addicting. I’m fascinated by the information contained in a large fraction of the links the people I follow are posting. As a result, I find myself spending hours reading their posts, and when I’m done there is a whole new series of tweets, and the process repeats.  And better semantic filtering technology seemingly wouldn’t help. It’s not that I’m overwhelmed by crap. There is too much interesting stuff to read, and better filters would probably just point out more stuff that I’m missing, making the whole thing even more addicting!

I’m no slacker, and I’m usually a very self-disciplined person. Just ask my friends, family and co-workers. But the stream of information coming at me from Twitter is just so interesting and so distracting, it is hard to focus on other things.  I don’t think I could describe the experience as well as Jim Stogdill (@stogdill) has done in his post Skinner Box? – There’s an App for That. I’ll just quote a couple of passages, but anyone interested in the addictive side of Twitter, and what it can do to your thinking ability, should read it in its entirety.

In describing his conflicted attitude towards Twitter, Jim says:

I can either drink liberally from the fire hose and stimulate my intellect with quick-cutting trends, discoveries, and memes; but struggle to focus. Or I can sign off, deactivate, and opt out. Then focus blissfully and completely on the rapidly aging and increasingly entropic contents of my brain, but maybe finish stuff. Stuff of rapidly declining relevance.

This rings so true with me, and like Jim, I find it hard sometimes to willfully opt out – the stream is just too enticing. As Jim observes, we’re like rats in a Skinner box, self-stimulating with our reward of choice, real-time information.  We’re ‘digital stimulusaholics’. Jim goes on to say:

For the last couple of years I’ve jacked in to this increasing bit rate of downloadable intellectual breadth and I’ve traded away the slow conscious depth of my previous life. And you know what? Now I’m losing my self. I used to be a free standing independent cerebral cortex. My own self. But not any more. Now I’m a dumb node in some uber-net’s basal ganglia. Tweet, twitch, brief repose; repeat. My autonomic nervous system is plugged in, in charge, and interrupt ready while the gray wrinkly stuff is white knuckled from holding on.

What if Twitter is turning us into mindless cogs in a big machine, and the machine turns out to be dumb?  As Jim describes it:

What if the singularity already happened, we are its neurons, and it’s no smarter than a C. elegans worm?

Now imagine just how much more addicting the stream would be if it was coming at us in real-time through a two-way link hooked directly to our brains. Sure there would be an ‘off’ button – responsible scientists (like me!) will make sure of that.  But would anyone be able to push it?  I’m far from certain.

So I’m stuck in a difficult position.

Tomorrow I’m meeting with Mr. Singularity himself, Ray Kurzweil, and a bunch of other proponents of brain-computer interfaces to brainstorm about offering a big cash XPrize for the first group to make high-bandwidth BCI a reality.  And I’m thinking it may not be such a good idea for the future of humanity.

I expect Kurzweil will argue that merging our slow squishy brains with our machines is the only option we have, and that rather than turning our brains to mush, it will jack them up to run thousands of times more efficiently than they do today, since transistors are so much faster than neurons.

Recent studies have shown that humans aren’t very good at multi-tasking, and paradoxically, people who multi-task the most are worse at multi-tasking than people who usually focus on one thing and only occasionally multi-task.  So much for the learning / brain-plasticity argument that ‘we’ll adapt’.

Perhaps our brains could be reconfigured to be better at multi-tasking if augmented with silicon?  Perhaps with a BCI, we could be reading an article and talking to our spouse at the same time. How weird would that be?  And with such a significant change in my cognition, would I still feel like me?  Would it feel like there were more than one of me?  Talk about schizophrenia!

Call me a conservative, but I know enough about the brain and human psychology to realize that it maintains its hold on reality by a rather tenuous rope, carefully woven from many strands over millennia by evolution. That rope is bound to get seriously frayed if we try to jack up our neural wiring to run many times faster, or to be truly multi-threaded in the time frame Kurzweil is talking about for the singularity, i.e. 2030 to 2050.

But on the other hand, one might conclude we’re damned if we do and damned if we don’t.  Whether we like it or not, things aren’t slowing down. The amount of information in the stream is doubling every year. If instead of jacking in with BCI, we take the conservative route and leave our brains alone, the Twitter experience shows us we’re likely to be sucked into the increasingly voluminous and addicting flood of information, left with only our meager cognitive skill set with which to cope with the torrent.  I’m afraid our native, relatively feeble minds may not stand a chance against the selfish memes lurking in the information stream.

Sigh. Maybe I’m over-reacting…  If I don’t chicken out, I will try to bring up these concerns during the BCI XPrize discussions starting tomorrow.  I may even tweet about it. The official hashtag for the workshop is #bcixprize. Just click the link to follow along – it should be fascinating…


  1. Here is another interesting perspective by Todd Geist on what it might be like to be a small part of a global information network, like the organisms on Pandora in the movie Avatar.  As I pointed out in my comment to Todd’s post, the difference between Pandora’s creatures and humans is that they had millions of years of evolution to cope with direct mind-to-mind linkages, while it’s happening to us in the course of, at most, a few generations.
  2. Here is a skeptical perspective on the whole idea of the singularity.

So far, social media seems to have a lot of roar, but very little teeth when it comes to facilitating social change.  Users of services like Twitter and Facebook seem more interested (sometimes compulsively) in entertainment, ‘branding’ & connecting with friends than in initiating positive social change. The always-insightful Venessa Miemis (@venessamiemis) hit the nail on the head in the comments to her blog post What is Social Media? [the 2010 edition] when she said:

Does all this online talking matter if nothing comes of it in the real world?

Neal Gorenflo (@gorenflo) elaborates on the potential pitfalls of conversation:

Connecting and conversing is necessary, but  again, the danger is that we get stuck in conversation. There is such a thing as being too connected. We have cognitive and time limits. Web 2.0 can overload us with messages, shrink attention spans, absorb our time, erode focus, and thus disrupt our ability as citizens to find common ground and take action together. It’s possible that through Web 2.0 we may be, as in the title of cultural critic Neil Postman’s influential book, amusing ourselves to death.

Venessa goes on to ask the big question:

How do we make something happen? What are small things we can start doing to get the hang of real coordination, collaboration, and action?

I’m all for starting with something small but nonetheless tangible – to give us something to build on and learn from.  Why not shoot first, and aim later?  The worst that can happen is we fail fast and learn from our mistakes.

With that goal in mind, I’m fascinated by an initiative by my Carnegie Mellon University colleague Priya Narasimhan (@priyacmu) to use crowdsourcing and social media to help locate, assess & repair potholes around Pittsburgh [see news story w/ video].

Pittsburghers are given three options for reporting potholes – dial 311 on their mobile phone, log it at the website, or best of all, report it using a free iPhone app called iBurgh.

The iBurgh app is cool because it is so easy to use. Simply snap a photo of a pothole with your iPhone. The image is automatically geotagged with its location, and sent to the city’s public works department. Once three pictures of the same pothole are logged, the city promises to repair it within five days.  Granted, it’s not an instantaneous response, but we’ve got a lot of potholes in Pittsburgh!  The tool can also be used to report issues like needed snow removal – a big problem around here this time of year…
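The back end behind iBurgh isn't public, but the three-report threshold described above is simple to sketch. Here's a minimal, hypothetical version of that logic (the class, the grid-rounding heuristic, and all names are my own assumptions, not iBurgh's actual implementation):

```python
from collections import defaultdict

# Hypothetical sketch of the workflow described above: geotagged photos
# are grouped by approximate location, and once three reports of the
# same pothole accumulate, a repair is scheduled.
REPORTS_NEEDED = 3

def location_key(lat, lon, precision=3):
    """Round coordinates so nearby reports of the same pothole group
    together (~0.001 degrees is roughly 100 meters)."""
    return (round(lat, precision), round(lon, precision))

class PotholeTracker:
    def __init__(self):
        self.reports = defaultdict(list)  # location key -> list of photos

    def report(self, lat, lon, photo):
        """Log one geotagged photo; return True once enough reports
        have accumulated to schedule a repair."""
        key = location_key(lat, lon)
        self.reports[key].append(photo)
        return len(self.reports[key]) >= REPORTS_NEEDED

# Three nearby reports of the same pothole trigger the repair
tracker = PotholeTracker()
print(tracker.report(40.4411, -79.9959, "p1.jpg"))  # False - first report
print(tracker.report(40.4412, -79.9958, "p2.jpg"))  # False - second report
print(tracker.report(40.4413, -79.9961, "p3.jpg"))  # True - repair scheduled
```

A real system would need something smarter than coordinate rounding (reports straddling a grid boundary would be split), but the threshold idea is the interesting part: requiring three independent reports filters out noise from the crowd.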

Pittsburgh City Council member Bill Peduto said the program makes Pittsburgh the nation’s first large city to implement a government-integrated iPhone app.  He goes on to say:

“This type of technology that merges social media with democracy is going to boom within the next year.”

This is exciting for me partly because it is being done by a friend.  But more importantly, it illustrates something we saw emerging with the DARPA Red Balloon Challenge which might be called crowdsensing – using a distributed network of tech-enabled individuals to track and report on significant (and sometimes not-so-significant) events happening in their world.

Another nice example is the Twitter Earthquake Detection Program, which encourages people to report when the earth moves via Twitter or on a dedicated “Did You Feel It?” website.

I’m hopeful an even bigger and better example will happen soon in the form of a regime change in Iran, thanks in part to Twitter. As I observed recently, Twitter has given the citizens of Iran a way to tell the story of their quest for freedom to the world in real-time and in a way that engages public interest, at a time when traditional media channels have been locked out by their oppressive government.  I wish them the best of luck, and will be tracking the events on Twitter as they unfold.  When (not if) they succeed, it will be an important milestone for the emerging Global Brain.

Until then, I’m happy to start small.  Excuse me while I go report a few potholes…

I’m sitting on the couch at my in-laws’, connected to the global network via my cell phone, and mesmerized by events unfolding in real-time in Iran.  While I sit relaxing with family in the afterglow of Christmas, halfway around the world people like this man:

with rocks in both hands and his cell phone in his mouth, are serving simultaneously as fighters and reporters.  And I’m doing my tiny part, as observer and cheerleader, spreading the word with tweets like this one:

My fascination is as much with the process as with the events themselves. CNN, Reuters and the BBC are relying almost exclusively on unconfirmed posts by ‘citizen reporters’ sharing news, pictures & videos on services like Twitter, Twitpic & YouTube.

We are experiencing the future of news, with the line forever blurred between those who make the news and those who share the news.  For the first time we can experience news anywhere and anytime, as it happens. We are all so much more intimately connected than ever before. Global consciousness is awakening. We live in interesting times.

Read more about Twitter’s critical role in the unfolding drama in Iran, and the potential downsides of using social media to instigate change.

Stowe Boyd and Freddy Snijder have posted an interesting dialog about the streams and the “global sensorium”.  Freddy’s original post, Stowe’s reply, and Freddy’s reply to Stowe, are all worth reading.

I like what both have to say, and the fact that dialogs like this are occurring is a sign that a collective intelligence is already emerging.  But I believe the two have missed several important points.

First, both Boyd & Snijder seem resigned to our current set of individual cognitive capabilities. As a neuroscience researcher, I’m confident that one day advances in our understanding of the brain, and in particular brain-computer interfaces, will endow individuals with new cognitive capabilities.  Virtual telepathy, infallible memory, and vision at a distance are all within the realm of possibility, and could redefine what it means to be human. In fact, I’m participating in a workshop at MIT on January 7-8th sponsored by the XPrize Foundation and Ray Kurzweil’s Singularity University to discuss creating an XPrize competition to turbocharge progress in brain-computer interfaces. So big advances may be in store for our future…

But for now at least, both Boyd & Snijder are correct in observing that we’re stuck with our rather limited individual cognitive capabilities. Given these cognitive limitations, there is a serious question about just how individuals can best cope with the exponential growth of both information and societal complexity.  Freddy Snijder poses it this way:

The question remains how this global sensorium can be effectively used by all the individuals that make it up.

A minor point – ALL individuals are unlikely to ever effectively use any technology or service. There will always be those who resist or are denied access to new technology. A big question is how to manage this digital divide.

But more fundamentally, I don’t believe any technology can possibly exist that will restore the degree of individual understanding and agency that it seems we crave as human beings.  Let’s face it, the global knowledge base and real-time information stream are growing at such a rapid pace that even with the best collaborative filtering technology, it is inevitable that individuals will continue to know more and more about less and less. Taken to its logical conclusion, each of us will eventually know almost everything about next to nothing!

The unavoidable reality of information overload doesn’t sit well with people, particularly folks who pride themselves on keeping up with the latest in information technology. We are programmed by evolution with the drive to understand and control all aspects of our environment. As a result, there are many hot start-ups today promising to tame the torrent of information and return each of us to an idyllic state of information mastery.

I’d love it if this were the case – I too am an information junkie and have always hoped to find a way to change the world through personal engagement.  But my gut tells me that global society is quickly becoming far too complex for any single individual to understand, to say nothing of influence, the global sweep of human events.

If the organization of biological brains is any indication (and I’m betting it is), the Global Mind will be an emergent phenomenon, and its workings will likely be incomprehensible to individual humans, just as individual neurons are oblivious to the thoughts to which their activity contributes. Like the neurons in our brain, individual people participating in the functioning of the global sensorium may see little evidence of the part they are playing, and may not even realize the questions that the collective intelligence is working to solve.

The parallel growth of collective intelligence and decrease in individual agency raises fundamental questions that will need to be answered if humanity is to survive and prosper:

  • Can we overcome the egocentric perspective that drives each of us to want to stand out and get ahead, often at the expense of our neighbor?
  • Can we transcend our self-centered tendencies and accept playing a small, largely unsung role in the workings of the whole?

In short, can we find a way to leverage technology to allow individuals to coordinate their modest local activities (both on-line & off) into a global, decentralized intelligence while remaining engaged in the process, despite realizing that their individual contributions will inevitably be tiny in the grand scheme of things?

The path is far from clear, but I remain hopeful.


I have to admit, I’m new to social media, especially Twitter, and I’m still learning the rules.  But it appears to me that there are (as yet) few clear behavioral codes, perhaps because the medium is so young. For example, today one of my favorite visionaries on Twitter and a seasoned social networker, @VenessaMiemis, tweeted the question:

sooo.. how many times should u RT someone w/o them acknowledging u before you just give up & unfollow? hmm

In a way that made me feel better, since if Venessa doesn’t know, perhaps I’m not as naive as I feared.  She went on to say:

not being RT’d back or thanked made me feel ignored, and that made me sad. 😦

My question is – should Venessa feel that way? And heaven forbid, was I one of the thoughtless people who inadvertently snubbed Venessa and made her sad?

There seems to be a schizophrenia about the role Twitter should be playing in the lives of people like Venessa and like me.

On the one hand, I think many people (myself and Venessa included) see Twitter as primarily a way of exchanging ideas, a way of passing useful information amongst individuals with similar interests.

But many people also see Twitter as a way of building social relationships and virtually connecting with kindred spirits.  And of course, many people hope Twitter can serve both an informational and a social role.

For someone trying to follow 50 or 100 active Twitter users, messages like ‘@BlahBlah – thx for the RT!’ are a waste of limited bandwidth. So when I’m fortunate enough to have someone retweet one of my own posts, I’m always saddled with a decision.  Should I be polite and thank them publicly for the RT?  Or should I thank them privately (perhaps with a direct message), and thereby avoid giving existing and potential followers the impression that the signal-to-noise ratio of my tweet-stream is so low that I’m not worth following?

From the perspective of the person who was kind enough to RT me, a public thank you would seem much preferred, since it calls attention to them, and could help them build their list of followers. But at the same time, I also sometimes get the impression that ‘thank you’ retweets are a bit self-serving – calling attention to oneself by illustrating how many other people think you’ve got something important to say.

I even find myself second-guessing myself earlier in the process, when considering whether to retweet content I find valuable.  Are my followers likely to have already seen the information I’m about to RT?  Does the information fit the interest profile I perceive my followers to have?  My big fear is that RTs of information my followers have already seen will be worse for my ‘Twitter reputation’ than having said nothing at all.  But on the other hand, when I see an RT by someone whose perspective I value, I’m more likely to read it, even if I passed over it the first time it showed up in my Twitter stream.

In short, it is quite unclear to me (and apparently others) what the rules of right conduct are when engaging with others on Twitter, if such rules even exist.

What all this points out is just how primitive our current tools and filtering mechanisms for real-time social networks are.  I hope someday soon Twitter clients will be smart enough to filter out ‘thank you’ retweets unless I’m the sender or recipient, and to filter out RTs of stories or blog posts I’ve already read.  If I knew everyone had access to these two seemingly simple features, I’d be much less reluctant to RT good content that my followers may have otherwise missed, and I’d be much more willing to be polite and thank people for RTs of my own posts, knowing that I wouldn’t be cluttering the tweet-stream of people who follow me with posts which are nothing more than noise to them.
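Neither feature exists in any client I've seen, but the logic is simple enough to sketch. Here's a rough, hypothetical version of the two filters (the handle, tweet fields, and pattern are all my own assumptions, not any real Twitter client or API):

```python
import re

ME = "@deanpomerleau"  # hypothetical handle for the user running the client

# Courtesy tweets like "@BlahBlah - thx for the RT!"
THANKS_RT = re.compile(r"\b(thx|thanks|thank you)\b.*\bRT\b", re.IGNORECASE)

def keep_tweet(tweet, seen_links):
    """Return False for tweets this imagined smart client would hide:
    thank-you-for-the-RT notes not involving me, and retweets whose
    links I've already read."""
    text = tweet["text"]
    # Filter 1: hide thank-you notes unless I'm the sender or recipient
    if THANKS_RT.search(text) and tweet["sender"] != ME and ME not in text:
        return False
    # Filter 2: hide RTs pointing at content I've already seen
    if text.startswith("RT ") and any(link in seen_links for link in tweet["links"]):
        return False
    return True

# A courtesy note between two other people gets hidden from my stream
note = {"sender": "@BlahBlah", "text": "@SomeoneElse thx for the RT!", "links": []}
print(keep_tweet(note, set()))  # False - hidden
```

Of course, recognizing a thank-you tweet by regex is brittle, and "links I've already read" requires tracking what the user has actually clicked; but even crude versions of these filters would go a long way toward cleaning up the stream.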

How do others handle such conflicts?  Are there codes of conduct that people have tried to formulate for how to be a good Twitter participant?  How much does it depend on the size of your following and the type of reputation you’re trying to build?  I know there are people (like Robert Scoble @Scobleizer) who maintain several Twitter feeds – one (or more) for pure content, and another for more personal stuff. That makes sense for someone like Scoble, who has more than 100K followers. But it doesn’t seem reasonable to expect people to follow more than one Twitter persona from a newbie and relative unknown like me.

If I’ve offended you Venessa, or anyone else, I sincerely apologize.  I’m the first to admit I’m still stumbling along trying to find my way through the complicated and quickly evolving world of social media.

Update: Venessa – congrats on your Ideas Project video. Very nice! Right now I’m asking – should I join the crowd and tweet a friendly ‘congrats!’ message to Venessa or just stay quiet?

Ever feel like you're part of a big machine?

This blog is an exploration of what being part of a collective might mean for each of us as individuals, and for society.

What is it that is struggling to emerge from the convergence of people and technology?

How can each of us play a role, as a thoughtful cog in the big machine?

Dean Pomerleau

