Monday, May 9, 2011

What do you want to be when you grow up?

Remember when you were a little kid, and grown-ups all used to ask you that same question, "What do you want to be when you grow up?" The question itself has certain underlying expectations: that a person is defined by his profession, that true passion is something that is innate and not learned, and that children should all have some sort of goal.

Personally, I always dreaded that question. Mostly because I had no idea what I wanted to be when I grew up. Usually I would lie just to avoid any awkwardness. "I want to be an astronaut!" I would say, because it always made adults' faces light up, and let's face it, space is pretty cool. But even though all your teachers and your parents assured you that you could grow up to be anything, whatever you wanted, I wasn't naive enough to believe that I would actually be an astronaut one day, especially once I learned that being an astronaut was more about engineering and mathematics than zero-gravity sports and meeting extra-terrestrials. So every once in a while, I would respond truthfully with a simple "I don't know." Now most adults aren't sure how to handle this response. To be fair to them, they were probably just trying to start a conversation to entertain me, and I wasn't providing much help. So they'd pause for a beat, then ask me what sort of things I was interested in, or give me some classic options: "How about a fireman! Or a doctor!" they'd say, as if the suggestion of a generic job title would give me a sudden epiphany for which I would credit them in my Nobel acceptance speech twenty or thirty years down the line. But I was stubborn in my indifference, because I really did have no idea, and eventually the interrogation would end with a smile and a pat on the head. "Well don't worry about it now, you've got plenty of time to figure that out!" seemed to be the standard reassurance, which always irked me a little. I wasn't worried about it before, but the constant suggestion that I should be always made me wonder if there was somehow something wrong with me.

As I grew older, people continued to ask: "What do you want to be when you grow up?" And still I didn't know, nor did I usually care. But that nagging voice in the back of my head got louder as I got older, and I continued to wonder if I should get my act together and figure out what it was I wanted to do with my life. But people kept assuring me, "Don't worry about it now, you've got plenty of time!" When it came time to apply to colleges, I stuck to Liberal Arts programs, because even though I didn't know what I wanted to "be" when I grew up, I knew what kind of things I found interesting. And when I got to Lehigh, it took me all four of the allotted "take a lot of classes to figure out what you like" semesters before I finally settled on Global Studies as a major (in part because my adviser, Jack Lule, continued to reassure me that I didn't need to have my career trajectory mapped out quite yet).

So that's how I now find myself, only two weeks away from graduation, still with no halfway decent response to the question "What do you want to be when you grow up?" I don't blame anybody but myself for the procrastination, but I'm still not sure if it's as big of a problem as everyone has always hinted it to be. Don't get me wrong, I've always been insanely impressed by and secretly jealous of those who have known what they wanted to do since they were able to articulate it with words. But there's another part of me that doesn't understand it, the same part that always prevents me from giving a straight answer to the question. To be frank, I just don't understand how somebody could know, right this second, what they want to do for the rest of their time on earth. The idea terrifies me. What if you pick the wrong thing? What if you aren't as good at it as you thought you would be? What if you change your mind one day, and you've been lying to yourself and everyone else for as long as you've been declaring that you know what you want to do with your life?

I guess that's why Global Studies and all of its great professors like Jack Lule and John Jirik have always appealed to me. They push students to focus on information and learning instead of the endgame. When they tell me not to worry about the future, I listen, because... well, they turned out alright, didn't they?

If I had to guess, I think that Clay Shirky would probably tell me not to worry either. After reading his latest book on social media, "Cognitive Surplus," I couldn't help but think that he'd find himself in pleasant company among the Global Studies professors at Lehigh. "Cognitive Surplus" is mostly about how people are putting their free time to better and better use these days. Whether it's in writing a Wikipedia article that will serve to educate the online masses, or a caption for a LOLCat that will simply amuse them, people are starting to use their free time to create rather than simply consume. The best part about Shirky's book, though, is that he sees both of these activities as valuable. Got an idea? Try it out! Shirky says. If it doesn't work, try something new. You don't have to devote your life to the original idea; if it succeeds that's great, but if it fails that's great too. And that's where I find the greatest solace: in the idea that failure is just as valuable as success. "As a general rule, it is more important to try something new, and work on the problems as they arise, than to figure out a way to do something new without having any problems," Shirky says.

I am really, really good at failure. I've started tons of projects that never took off! I've had lots of ridiculous plans that crashed and burned spectacularly, like when I decided to create a website from scratch using Notepad, or when I started a blog about international hip-hop, or when I decided to start a campaign to get Lehigh to get rid of that stupid rule that makes you change all your passwords every 6 months. And while I knew that every time I experienced one of these failures I also learned something, there was a small part of me that tried to convince myself not to try the next time, because I was sure to fail. But Clay Shirky has reassured me that it's not only okay to fail, it's great! Before Facebook became the standard for social networking, there were hundreds of attempts at social networks like SixDegrees.com, Friendster and MySpace. And while Andrew Weinreich, the man who created SixDegrees.com, is probably not enthralled that it is Mark Zuckerberg and not him who is now the world's youngest billionaire, he would be remiss if he dismissed the SixDegrees project as worthless. After SixDegrees.com, Weinreich went on to found a slew of internet start-ups, most of which probably carry over the things that worked in SixDegrees.com while avoiding the things that didn't.

So thanks to Clay Shirky, to Jack Lule, and to all the great mentors I've had throughout my life, for reassuring me that I don't need to know what I want to be when I grow up, and that it's okay to fail.

Tuesday, April 12, 2011

Creativity, Rigidity and the LSATs

No one who has a sore throat need consult a doctor, because sore throats will recover without medical intervention. In recent years several cases of epiglottitis have occurred. Epiglottitis is a condition that begins with a sore throat and deteriorates rapidly in such a way that the throat becomes quite swollen, thus restricting breathing. Sometimes the only way to save a patient’s life in these circumstances is to insert a plastic tube into the throat below the blockage so that the patient can breathe. It is highly advisable in such cases that sufferers seek medical attention when the first symptoms occur, that is, before the condition deteriorates.

Which one of the following is the best statement of the flaw in the argument?

(A) The author draws a general conclusion on the basis of evidence of a particular instance.
(B) The author assumes that similar effects must have similar causes.
(C) The author uses a medical term, “epiglottitis,” and does not clarify its meaning.
(D) The author makes two claims that contradict each other.
(E) The author bases her conclusion at the end of the passage on inadequate evidence.

Above is a sample question for the Logical Reasoning section of the LSAT. Now unlike the vast majority of human beings, I like standardized tests. No really, I actually enjoy them. I mean I'll be the first to admit that there's a great deal of tediousness associated with standardized tests. And the gap between a standardized test's ability to realistically assess a person's intelligence and the importance of said test's score in determining an individual's future is far too wide. Everyone knows at least one person who scored horribly on their SATs in spite of their obvious intelligence, and I'd guess that you might know one or two people who scored extremely well even though they're less intelligent than a common squirrel.

But none of that has abated my love for a good, regimented standardized test. There's just something very... finite about them. Every question is familiar, because they have to follow a pattern. Similarly, every answer is familiar, because they follow patterns too. On the LSAT, you know that you're going to get five sections of 35 minutes each. Each section will have between 24 and 28 questions, and you're guaranteed a break after the fourth section of no less than 10 but no more than 15 minutes. Let's take a look at the question above for a minute (don't panic, I swear I'm not going to make you take any practice tests or anything, I'm just trying to prove my point). The question, "Which one of the following is the best statement of the flaw in the argument?" tells you a few things immediately. The phrase "statement of the flaw" indicates that this question falls into the category of Fallacy Questions, one of the 11 different categories of questions in the logical reasoning section of the test. Fallacy questions directly ask you to spot a flaw in an argument-i.e. to identify a "fallacy," or particular type of invalid logical reasoning. Now I've only been studying for the LSATs for a few weeks, but I know that with a fallacy question, the correct answer will do more than just describe the argument; it must describe an error in reasoning.
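To make that pattern-spotting a little more concrete, here's a toy sketch in Python of the kind of keyword-to-category mapping I'm describing. The phrases and category names are just illustrative examples from my own prep notes, not an official LSAT taxonomy:

```python
# A toy illustration of LSAT pattern recognition: the wording of the prompt
# usually gives away the question category before you even read the answers.
# The phrase list and categories below are illustrative, not official.
PROMPT_PATTERNS = {
    "statement of the flaw": "Fallacy",
    "most strengthens the argument": "Strengthen",
    "most seriously weakens": "Weaken",
    "main conclusion of the argument": "Main Point",
}

def classify(prompt: str) -> str:
    """Guess the question category from telltale phrases in the prompt."""
    prompt = prompt.lower()
    for phrase, category in PROMPT_PATTERNS.items():
        if phrase in prompt:
            return category
    return "Unknown"

print(classify("Which one of the following is the best statement "
               "of the flaw in the argument?"))  # -> Fallacy
```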

With just a little experience and prior knowledge, I've transformed my perception of this question. Unless you've been studying for the LSAT, or have taken it before, your initial reaction to this question was probably one of confusion, boredom, or if you actually tried to answer it, concentration. However, since I've seen a lot of fallacy questions before, and know a bit about how LSAT writers formulate questions, my reaction was a bit different. I first read the passage, taking careful note of absolute statements, definitions and qualifiers (e.g. "No one who has a sore throat...", "epiglottitis is..." and "It is highly advisable..."). Then I read the prompt, immediately realized it was a fallacy question, and started crossing out answers. Now I won't go through them one by one, but it's fairly clear to me that A, B, C and E are all incorrect. They're clever tricks, meant to divert you away from the right answer, but since I know most of the playbook that the question writers use, I can spot these tricks easily. For example, answer A (The author draws a general conclusion on the basis of evidence of a particular instance) defines an actual logical fallacy, the generalization fallacy. However, a correct definition does not make a right answer, and it's clear upon reading the passage that the author never actually draws "a general conclusion on the basis of evidence of a particular instance."

Odds are, if you tried answering this question yourself, you got it right. It's one of the easier practice questions I've come across, designed as a simple check against those who don't read carefully. It's clear that the claims "no one who has a sore throat need consult a doctor" and "it is highly advisable that sufferers [of epiglottitis, a condition that begins with sore throat,] seek medical attention when the first symptoms occur" are contradictory, and therefore D is the correct response. However, the odds are also good that if this is your first time looking at a sample LSAT question, you spent a good deal of time figuring out exactly what the question was asking, weeding out the wrong answers and mentally sorting through the importance of various phrases and factors in the question. Most people can, with sufficient time and a proficient grasp of English, reason their way through any of the questions on the LSAT. With sufficient time is the key phrase here, because when you take the test, time is anything but sufficient. You have 35 minutes to answer about 25 questions, which leaves you with just over a minute per question. This is why experience and prior knowledge are so important with standardized tests. You have to know the types of questions you're going to see, the types of answers you're going to see, and be able to figure out why the right answer is right and the wrong ones are wrong. The key to doing really well on tests like the LSAT or the SAT isn't intelligence, it's practice. With enough experience and practice, complicated questions about logical fallacies begin to look more like a simple game than a part of some incomprehensible standardized test. The rules are rigid, they're set, all you have to do is play as well as you can. The best test takers will tell you that they don't have to concentrate very hard when taking a test, because at that point it's mostly instinct.

I bring up standardized testing not just because I'm in the middle of preparing to take the June administration of the LSAT, but because it ties in perfectly to the book I read this week, "You Are Not a Gadget: A Manifesto" by Jaron Lanier. Lanier is a member of geek royalty: he's known as the father of virtual reality technology and has worked with figures like Kevin Kelly, the founding executive editor of Wired magazine, and John Perry Barlow, founder of the Electronic Frontier Foundation. But with "You Are Not a Gadget," Lanier separates himself from his former colleagues. The open internet, he argues, is not nearly as open as we perceive it to be. Some of the most influential decisions in technological history have been made with little foresight as to how they could influence and limit the actions of the future, especially when it comes to the realm of creativity. Lanier, who has a strong background in classical music, uses the invention of MIDI to demonstrate his point:

One day in the early 1980s, a music synthesizer designer named Dave Smith casually made up a way to represent musical notes. It was called MIDI. His approach conceived of music from a keyboard player’s point of view. MIDI was made of digital patterns that represented keyboard events like “key-down” and “key-up.”
That meant it could not describe the curvy, transient expressions a singer or a saxophone player can produce. It could only describe the tile mosaic world of the keyboardist, not the watercolor world of the violin. But there was no reason for MIDI to be concerned with the whole of musical expression, since Dave only wanted to connect some synthesizers together so that he could have a larger palette of sounds while playing a single keyboard.
In spite of its limitations, MIDI became the standard scheme to represent music in software. Music programs and synthesizers were designed to work with it, and it quickly proved impractical to change or dispose of all that software and hardware. MIDI became entrenched, and despite Herculean efforts to reform it on many occasions by a multi-decade-long parade of powerful international commercial, academic, and professional organizations, it remains so.
You've heard MIDI technology before, even if you aren't aware of it. Every song created on a computer utilizes it, the infamous auto-tune is built around it, and programs like iOS' Garage Band are utterly dependent on it. That means all of this music, all of this creative potential, must be funneled through the filters of MIDI. This is exactly what Lanier is warning against, and why his book is not just a book, but a manifesto.
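To see just how little of a musical gesture survives the trip through MIDI, here's a minimal sketch, assuming nothing beyond the standard three-byte Note On / Note Off messages (a real program would use a MIDI library like mido rather than raw bytes):

```python
# A minimal sketch of why MIDI is a "tile mosaic": every note is reduced to
# discrete key-down / key-up events carrying only a note number and a velocity.
# (Illustrative only -- real MIDI I/O would go through a library like mido.)

def note_on(note: int, velocity: int, channel: int = 0) -> bytes:
    """Build a raw 3-byte MIDI Note On message."""
    return bytes([0x90 | channel, note & 0x7F, velocity & 0x7F])

def note_off(note: int, channel: int = 0) -> bytes:
    """Build a raw 3-byte MIDI Note Off message."""
    return bytes([0x80 | channel, note & 0x7F, 0])

# Middle C (note 60) struck hard, then released: that's all MIDI can say.
# A singer's scoop into the note or a violinist's vibrato has no place in
# this representation, which is exactly Lanier's point.
print(note_on(60, 100).hex())   # 903c64
print(note_off(60).hex())       # 803c00
```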

One of the book's underlying themes has to do with this disconnect between the potential of the human mind and the technologies we interact with on a daily basis. When you sit down at a computer, you probably don't think about how your keyboard is limiting you to typing one letter at a time, or how your mouse restricts your ability to manipulate variables on screen to a single horizontal plane. These basic technologies, which we take for granted, have an enormous impact on our ability to create and innovate. Lanier isn't necessarily advocating that we abandon these restricting technologies immediately; he's simply asking us to pause periodically and make sure we can still see the forest for the trees. In particular, he is criticizing "the promotion of the latest techno-political-cultural orthodoxy," in which the Singularity is considered inevitable and "Web 2.0" technologies are categorically heaped with praise (and rarely with criticism).

Facebook is yet another example that Lanier presents of an innovation that we rarely question, but which causes a form of "digital reductionism" that flattens our personalities. On Facebook, you are presented with standard prompts to fill out: relationship status, birthday, residence, profile picture. Your entire personal identity devolves into simple, personality-free entries in a database. Now compared with the other ways in which we represent our identities online, Facebook has been called an improvement. Unlike a webpage, where every bit of HTML is customizable (although HTML is also not without its limitations), Facebook is standardized. When you click through to somebody's Facebook profile, you don't have to read the banner at the top of the page that was formerly adorned with the silhouette of Mark Zuckerberg. You know that there will be a picture in the upper left corner which that person has chosen to represent themselves, you know that you can find their recent activity on their wall right in the center of the page, and if you want to know whether they're in a relationship or not, it's as simple as clicking on the link titled "info" under their name. Before you even go to that person's Facebook page, you know exactly what to expect. Sure, there may be minor variations here and there; some people choose to display fewer pictures and remove the wall, while others seem to have fully devoted their lives to Farmville. But all of these behaviors, pictures, and boxes of text which we use to represent who we are as people are standardized and taken for granted. Gone is the seemingly limitless potential of the individual webpage, which takes serious commitment, creativity and know-how to make. In its place is the Facebook page. Stale, standardized, and predictable, Facebook is about as good at representing individual personality as a standardized test is at representing intelligence. Sure, it makes things easier to sort through, but go look at your own Facebook page and tell me if it accurately represents who you are as a person. Just as the infamous No Child Left Behind Act forced millions of teachers to abandon their passion for inspiring young minds in favor of "teaching to the test," Facebook asks us to disregard our inner creative impulse in favor of making ourselves easier to search for.
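To put that "digital reductionism" in concrete terms, here's a hypothetical sketch of what a person looks like once they've been reduced to database fields. The field names are my own illustration, not Facebook's actual data model:

```python
# A hypothetical sketch of digital reductionism: a fixed schema decides in
# advance which facts about you count. Field names and values are illustrative
# only, not Facebook's real data model.

from dataclasses import dataclass

@dataclass
class Profile:
    name: str
    birthday: str             # a date, not the story of how you celebrate it
    residence: str
    relationship_status: str  # one value from a dropdown, nothing in between
    profile_picture: str      # a single image URL standing in for a person

me = Profile(
    name="Andy",
    birthday="1989-01-01",
    residence="Bethlehem, PA",
    relationship_status="It's complicated",
    profile_picture="https://example.com/me.jpg",
)
# Anything that doesn't fit one of these fields simply isn't part of "you" here.
print(me)
```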

Now this isn't to say that you should go delete your Facebook and Twitter accounts, throw away your mouse and keyboard and completely revert to man's natural state. But there are a lot of valuable warnings in Lanier's book, especially for technology enthusiasts like myself. I've read a lot of books about social media and technology this semester, and I have to admit that "You Are Not A Gadget" is my absolute favorite. That's because unlike the authors of Cluetrain, Clay Shirky, and Kos, Jaron Lanier refuses to accept the tools of social interaction as simple tools. Instead, he forces you to question the evolution of speech, its contexts, and its limitations. Instead of unabashedly praising new innovations, he wonders how they will limit us in the future, as well as how they can be improved. Instead of accepting the common belief that the abilities of the human brain are approaching those of a computer, he challenges computers to better adapt to the abilities of the human brain.

Now if you'll excuse me, I've got an LSAT prep class to go to. Hey, just because I can see the forest for the trees doesn't mean I don't like to go hiking through the woods every now and then.

Monday, April 4, 2011

Music and Politics

Music and Politics.

That phrase looks just as awesome to me today as it did nearly four years ago, when I first saw it as an incoming Lehigh freshman. In the College of Arts & Sciences, every freshman is required to select a "freshman seminar" to take during their first semester at school. Now these "classes" cover a ridiculously wide spectrum of human knowledge, letting freshmen who have yet to experience their first college class choose from subjects like "Intro to Theater," "Prehistoric Religion, Art and Technology," and even "The Drugging of America." In high school, all my classes had dry, academic-sounding names like U.S. History or AP Statistics, so the prospect of taking a class all about the "Lives and Legacies of Great Explorers" was pretty exciting. But there was one class in particular that really caught my eye: Music and Politics. My mother, who majored in music in college, instilled a love for music in both her children at a very young age. And growing up just outside the Beltway (AKA Washington D.C.) meant I was no stranger to politics, at least from an observational standpoint. But I had never really considered that there was any link between the two subjects until I found them juxtaposed so boldly against each other in my freshman orientation packet. So, curiosity piqued, I chose Music and Politics as my first choice for a freshman seminar.

Taught by the award-winning, internationally renowned concert pianist and conductor Eugene Albulescu, Music and Politics proved to be everything it suggested and more. For starters, Professor Albulescu (who will hereafter be referred to as Eugene, as he insisted we call him in class) plays the piano so beautifully that he must have made a bargain with the devil at some point, which made class more of a privilege than a responsibility. And our class discussions were constantly teaching me new things about artists I thought I had known. For example, I found out that Richard Wagner was a virulent anti-Semite, even going so far as to publish an essay called "Das Judenthum in der Musik" (Judaism in Music) to 'helpfully' explain why Jewish composers such as Felix Mendelssohn were biologically incapable of producing song or music (the "Ride of the Valkyries" sounds much more sinister to me now than it used to). I also found out that the Bob Marley classic "Buffalo Soldier" was meant to be a rallying cry against racism in America. But what fascinated me most about the role of music in political discourse was its ability to subvert the rules and norms of society in order to create change. Whether you're talking about Woodstock, The Concert for Bangladesh, or Live Aid, musicians have time and time again proven that they can use their voices to usher in real, positive societal change, in addition to simple entertainment.

So when I read "Taking on the System" by Markos Moulitsas Zúñiga, aka 'Kos,' I was immediately drawn to the idea of using music to bypass, crush, and influence the 'gatekeepers.' Gatekeeper is Kos' derisive term for people in positions of power and control (think Marxism and its fascination with the individuals who control the means of production). Now don't get me wrong here, despite his ambition, popular success and organizational prowess, Kos is certainly no Saul Alinsky. "Taking on the System" reads more like a history of successful progressive community organizing efforts than a manual for radical protest. And I'm sure that while some people would be interested that the writer of the self-proclaimed "most influential political blog in the nation" can cite example after example of successful political campaigns, most were expecting a bit more pragmatism out of what Kos has dubbed the "Rules for Radicals of the digital age."

But the gatekeepers idea is interesting to me, especially since Kos used the music industry as an example to explain his point. Eugene's class was my first educational brush with music and politics, but it wouldn't be my last. In my sophomore year, I was introduced to international hip-hop when I read Heavy Metal Islam: Rock, Resistance, and the Struggle for the Soul of Islam for an Anthropology class. The book is a travelogue of sorts, written by heavy metal musician and Middle Eastern history professor Mark LeVine. The title alone was enough to draw me in, but the stories I found inside provided a unique, first-hand perspective on hip-hop in other countries that I had never seen before. Below, I've included my first impressions of Heavy Metal Islam. They were written for another blog I wrote on international hip-hop when I first read the book, before I was blessed with the wisdom of J325, but I think most of the insights still hold true.

Now despite being a white kid from the suburbs of Washington D.C., I've always loved hip-hop. When I first discovered the genre, I was fascinated by the way artists sampled and blended different bits and pieces of music together to create a mélange of music. I was a big fan of all kinds of hip-hop, and I even hosted my own radio show freshman year for two hours every Friday. So when I read Heavy Metal Islam, LeVine's description of the ways that artists communicate and collaborate in the Middle East amazed me.

"Our whole life is inside," explains a musician from a popular Iranian metal band. "Inside you don't need to wear your veil, you can blast your music, dance… and otherwise feel free" (176). This is a common sentiment among artists. In Iran, musicians are restricted mostly by the Islamic government. All music must be approved by the Ministry of Culture, which screens for anything from rebellious lyrics to guitar riffs that are just a bit too edgy. Cassette tapes, which were banned by Khomeini after the Iranian revolution, are bought and sold by fans like illegal drugs. Public concerts are regularly shut down and heavy metal artists are reduced to playing "metal theatre," where patrons must be seated, and cannot headbang or dance. So in repressive cultures like this, how do artists collaborate to put out decent music, and create communities where they can share this music and their culture? One word: the internet.

"Hanging around the internet has become the equivalent of hanging out on the street corner,” LeVine declares (18). And it’s true, the internet has proliferated most countries in the Middle East to the extent where almost any city dweller can afford an hour at an internet café to link up and share information across national and cultural boundaries. Indeed, the very existence of LeVine’s text owes thanks to the internet, without which he would have never met all the musicians and politicians that he did. He met Reda Zine, one of the most influential figures in Heavy Metal Islam, through the internet. In LeVine’s own words, “It was just a matter of time before [they] went from chat rooms to rehearsal rooms, the recording booth, and ultimately to sharing the stage” (25). What is even more important is when the power of the internet to unite people goes beyond these transnational bridges, and establishes local bonds. People who live under oppressive governments, where finding a public space to perform and share non-traditional music is difficult, have turned to the internet to create local communities based around musical movements. The web magazine Tehran Avenue has succeeded in creating a “means of bringing the vibrant underground scene of Tehran aboveground” (192). The magazine hosts regular web based music competitions where local artists can share and discuss their music. This allows musicians and fans who would otherwise be oblivious of each other a chance to communicate and establish a common community. These communities are even beginning to gain the courage to speak out against the government. LeVine talks extensively about the flourishing blogger scene in Egypt, and how it has become one of the primary outlets for activists who want to voice their opinion. They are even training other activists in technical matters and blogging techniques. The members-only metal community metalgigsforum.com (no longer exists) is another place where artists and fans alike can speak out about politics with lessened fear of government backlash. The ability to have anonymity on the internet and garner widespread report even allowed the web site Marock Sans Frontiers (Morocco without Borders) to post an open letter challenging the Moroccan King, which is a definite no-no in Morocco. The internet has provided a ground-breaking forum for citizens to assemble and create public communities, complete with a public group identity.

In Egypt an artist’s popularity is heavily dependent on their MySpace page. In Lebanon, the rap duo Soap Kills has chosen to distribute their music online to avoid corporate restrictions and reach a broader audience. In Iran, rapper Peyman-Chet uploads his raps to the internet to be downloaded by thousands of people worldwide. Does any of this sound familiar? It should, because all over the world musicians are using the internet to disseminate their music and create virtual communities. Here in the United States, the artist Girl Talk and the band Radiohead both chose to release their most recent albums online for free. Hip hop in America is heavily reliant on mix tape sites like datpiff.com where artists, beat makers, DJs and lyricists can collaborate across the world. The internet is one of the musician’s greatest tools, and it allows music to travel to places where it can attract brand new audiences and bring people together despite cultural boundaries. Slowly, cultural revolution is coming to countries which are in desperate need of change. Iranian rappers are beginning to reclaim public space through internet advertised rap battles in the park. Young people are traveling through the cities of the Middle East blasting hip hop and heavy metal they downloaded off the internet through their car speakers and iPods, while tagging their favorite artists’ names on the walls of the subways and buildings. Hopefully, if the movement continues, the internet will allow those people in the Middle East who have been forced to build online societies to enact real change, and eventually, bring their public, group identities back into the real world.
In hindsight, I'm pretty proud of this post. I, like Kos, cited Radiohead's free "In Rainbows" release as significant because the band subverted the typical gatekeepers by giving their album to the fans, rather than the record companies (although I now realize that Girl Talk only released his album for free because you can't legally profit off an album composed entirely of samples under copyright laws). And in February, LeVine wrote an excellent editorial for Al-Jazeera about how young people in the Arab world have ignored the typical gatekeepers of the western world in favor of a people-powered revolution.

But music, much like Twitter or Facebook, is not responsible for causing revolution or change (despite what CNN says). Music is simply a tool, or maybe even less than that; music is an issue. It's something that people can gather around, a simple, concrete goal that effects change (COUGHSUCCESs modelCOUGH). All it takes to create a great song is an idea, a couple voices and some enthusiasm. Add in about $3000 and you can even create an international music sensation overnight. And that's why "Taking on the System" isn't a total waste of time: it's about the stories, not the rules. While I wouldn't give Kos the "most influential" title, it would be foolish to say that he hasn't inspired some pretty big political movements. He reinvigorated "people power" in a country that has all but given up on it. And I'm certainly a fan of that.

Monday, March 21, 2011

Snopes was "Made to Stick"

I have a big family. I know a lot of people say that they have a big family, but my family makes the Brady Bunch look small-time. I've got two parents, two step-parents, a sister, four step-siblings, six sets of grandparents, thirty-one aunts and uncles, and nearly a hundred cousins (I've tried to count the actual number of cousins I have several times, but that number is expanding faster than I can keep an accurate count). I'm not even including my innumerable great aunts and uncles, second cousins, second cousins once removed, third cousins, honorary aunts and uncles, and a dauntingly long list of other genealogical titles which I don't fully understand and which make my brain spin in circles.


This is just one small branch of my gigantic family at a reunion last year

And here's the kicker: I'm pretty close to the whole lot of them. I rarely go a full year without seeing the majority of my extended family, and usually it's not even that long between visits. Furthermore, if you know me at all, you won't be the least bit surprised to hear that my family is packed with talkers. So when e-mail became widespread in the early 90's, my family pounced on the opportunity to talk to each other even more than they did already. Many of my family members are especially big fans of chain letters; shoot, we were even sending them around back when you had to do it by snail mail [insert hipster joke here]. My email inbox was constantly packed with messages like "FWD:fwd:Fwd:FWD:re:fwd:RE: SOO FUNNY READ ASAP!!!1" Back then, I was probably the biggest geek in my family (who am I kidding, I'm still probably the biggest geek in my family), and I took great relish in disproving the countless myths and urban legends that my relatives would pass around. I'd plug a few words from the email into Google Ask Jeeves, skim through the results for a minute or two until I found a link with proof that the message was a hoax, then gleefully press the Reply-All button and type out a message that usually began with an obnoxious phrase like "Well ACTUALLY." (It's probably no coincidence that I picked up the nickname Andy "I Know" Flowe around this time as well; my sincere apologies to anyone who had to deal with the 7-year-old me on a regular basis).

My annoying know-it-all-ness aside, I began to notice one site in particular popping up again and again in the search results: Snopes. Launched in 1995, Snopes is like the original version of the Mythbusters. They would index every urban legend and potentially false chain letter they received, then painstakingly research the story until they could definitively declare it True or False. Snopes is so accurate and contains a catalog of myths and legends so extensive that pretty soon I just started going straight to the site instead of starting with a search engine. I spent hours poring over the different sections of the site, sharpening my know-it-all skills to a fine point. I even once had a conversation over email with Barbara Mikkelson, who created Snopes along with her husband David, when I was having trouble finding this myth on the site. My family, too, began to notice that I was linking them to the same site over and over. Pretty soon, my aunts and uncles were beating me to the punch, and before I could even send my snarky email someone else had already sent one beginning "Actually, I checked Snopes and it says..." Eventually, to "Snope" something became a verb amongst my relatives, and the false email forwards started slowing to a crawl. Furthermore, the few that persisted always began with a link to Snopes assuring their veracity. "Snopes" had "stuck," at least with my family.

In researching this blog post, I learned that it wasn't just my family that Snopes had struck a chord with. It quickly gained a reputation as the number one debunker of lies on the net. The Mikkelsons were good writers, and even better fact checkers. They chastised countless media outlets for reporting urban legends as breaking news, and broke down the truths and lies in Michael Moore's infamous Fahrenheit 9/11. Indeed, 9/11 was a turning point for Snopes, as their site was flooded with new myths about the terrorist attacks. The Mikkelsons monetized the site with the addition of ads, quit their day jobs, and today it remains their only source of income.

So what exactly was it that made Snopes so "sticky?" After reading Chip and Dan Heath's "Made To Stick," I think I've got a pretty clear idea. The Heath brothers lay out a simple list of qualities that allows some ideas to succeed where others fail. Using the acronym SUCCESs, "Made to Stick" outlines six principles that sticky ideas often have in common, using numerous anecdotes to prove their worth. Since the Heaths opened their book with a classic urban legend, I immediately thought of Snopes. As I read, I tried to apply the Heaths' model to the growth of Snopes as a fact-checking website and found that it followed almost perfectly. But I'll let you be the judge, starting with the first principle:
Simplicity. Simplicity is the most important factor in making an idea stick, and it definitely applies to Snopes. The Heaths compare their concept of simplicity to the idea of "Commander's Intent" in the military. The idea is to structure a goal broad and simple enough that commanders can adjust their plans on the fly to meet the goal. The original intent of Snopes was to catalog myths, folklore and urban legends, and they have hardly strayed from that intent in the past 16 years. Furthermore, by keeping their goal simple, the Mikkelsons were able to expand their reach into political myths, which today is by far their most popular section. Snopes also capitalized on the second principle of sticky ideas:

Unexpectedness. When I first began receiving chain letters from aunts and uncles, I often questioned the veracity of their claims but often struggled to find proof against them. With Snopes, I had a simple, straightforward counterpoint to every bogus email I received, usually with the text of the original message included. My relatives had no choice but to accept that they had been duped, and quickly learned that Snopes was an invaluable resource to avoid looking like a fool in the future. By including multiple versions of the original message, as well as all the sources that they had gleaned the myth from, the language in the Mikkelsons' articles was so Concrete that it could not be doubted. Additionally, the idea was Credible, in that it capitalized on the doubts that many myths and urban legends presented. The articles almost always included numerous sources, and Snopes refuses to assert a myth as true or false until they are absolutely sure (the site has multiple labels for myths, including an "Undetermined" label for stories still under investigation).

Emotion is the next principle of sticky ideas, and I can personally vouch for the emotional qualities of Snopes. Being able to maximize my know-it-all skills with such a simple, yet extensive site was a huge relief. I no longer had to argue incessantly about whether or not a story was true; I had Snopes to make the argument for me. The last principle of sticky ideas, Stories, applies so perfectly to Snopes that it seems like the authors had the site in mind. The site's purpose is its stories, and they aptly demonstrate the amount of information there is to be debunked. My family members also shared countless stories of "snopesing" others, or being "snopes'd," to the point that they wouldn't dream of passing an email on nowadays without "snopesing" it themselves.

The funny thing about Snopes is that like most "sticky ideas," it wasn't designed to stick. The Mikkelsons created the site as a hobby. They were extremely active in urban legend and folklore newsgroups, and they simply wanted a place to catalog all of their findings.

As the Heaths say, "All they had were ideas. And that's the great thing about the world of ideas: any of us, with the right insight and the right message, can make an idea stick."

Monday, February 21, 2011

Sorry Malcolm, had to do it


Last Friday night, I went to an open mic night in Lehigh University's Lamberton Hall. The event was being sponsored by a new student club called "The Music Box." The club, created by senior Ben Singer, aimed to unite all of Lehigh's musical talent in one place. Singer had been telling me about the club and their planned open mic nights for weeks, and while I didn't have the musical skills or the self-confidence to perform myself, I was excited to see a bunch of my talented friends performing on stage. I started talking to all of my musically inclined friends about the show, asking them if they were performing, and if not, encouraging them to do so. And apparently, I wasn't the only person Singer had told about it. By the time Friday night rolled around, the Facebook event had over 120 people marked as "attending," and over 15 bands and artists had signed up to perform (if you go to Lehigh, you know that it's hard to even get 20 people to commit to an event, much less over 100). And while the show was originally planned to last from 7 to 10, the massive interest pushed the event another 2 hours later, and it was rescheduled to last until midnight. During the day on Friday, everyone I talked to seemed to be buzzing about the show: which of their friends were playing, when they were scheduled to go on, which artists you were absolutely going to be blown away by. It seemed like everyone was going. And sure enough, when I got there at 8, the place was more packed than I had ever seen it (and that's including a performance by The Cool Kids my freshman year where the attendance was approximately 75 people, which helps prove my point that Lehigh kids don't show up for anything). The show ended up being a total hit, with all the musicians receiving raucous, encouraging ovations, and I even think I saw a little bit of crowd-surfing going on at one point. Side note: if you're interested, the Music Box is having another show this Thursday at Godfrey Daniels on 4th St in Bethlehem. BYOB.

So what, if I may borrow the terminology from Malcolm Gladwell, caused the show to "tip"? Why did the open mic succeed where other student events had failed? Was it because Ben Singer is a "connector", or a "salesman"? Was it because of other students like me, who were acting as "connectors" and "mavens", telling as many people as possible about the show, hyping it up and letting people know crucial details like what time performers were scheduled to go on, or how early to get there to ensure you got a good seat? Maybe it was due to the "power of context," and people were just piling on the bandwagon, going because that's where everyone else was going on Friday night. Or possibly it was "the stickiness factor" that had to do with it, and everyone remembered the event because they got a well-worded Facebook invite from a friend.

Most likely, it was a combination of any and all of these aspects, working to some extent or another, that caused the success of Friday's open mic. The fact that The Peeled Labels, a band that has already had some success at Lehigh, were performing certainly couldn't have hurt. Also, the news that Robbie Sherrard, a fairly well-known Lehigh comedian, would be playing his hit song Rathbone only added to the hype. (By the way, if you haven't checked out Robbie's blog yet, you're definitely missing out; I've never even met the kid, but I just started reading it this weekend and it had me in tears from laughing so hard.) In fact, every band and performer that came up on stage seemed to have a substantial backing of friends and supporters cheering for them. The more I looked around at the audience, the more I started recognizing these relationships. Every artist and every band seemed to have a dedicated section of supporters, to the point where by the end of the show I couldn't have picked out a single person from the audience who didn't personally know someone performing on stage.

Looking back on it now, I'm nearly 100% certain that these relationships were what caused the success of Friday's event. I tried to figure out if I, or Ben Singer, had acted as a "connector", "salesman," or a "maven" in this scenario. I re-read Gladwell's chapter on "The Law of the Few" to refresh my understanding of these terms, and what I found... well frankly, it left me unsatisfied. When defining what makes financial planner Tom Gau a "salesman," Gladwell says:
"He seems to have some kind of indefinable trait, something powerful and contagious and irresistible that goes beyond what comes out of his mouth, that makes people who meet him want to agree with him. It's energy. It's enthusiasm. It's charm. It's likability. It's all those things and yet something more."
Really, Gladwell? The definition of salesmen, one of your three key "agents of change" in the tipping point of epidemics, is "indefinable?" This was the point where I realized that Malcolm Gladwell is little more than a journalist attempting to formulate some sort of sociological theory out of witty anecdotes and faulty inductive reasoning. Now before I tear the guy a new one, I must say that he writes incredibly well. His ability to paint a story with numbers and statistics is matched only by authors like Steven Levitt (I know you've seen the business students carrying around their copies of Freakonomics like it's the Bible). But his detective work would make Sherlock Holmes and Dr. Watson cringe. He uses specific examples to prove his larger point, often implying that correlation equals causation and that because an example applies this time, it must always apply. The more I read, the less I found the book useful as a social theory and the more I found it useful as a collection of success stories from business and social epidemics. The books blog Brick and Rope gives a great example of this faulty reasoning as it relates to the anecdote about crime in New York City:

Steven Levitt, in Freakonomics, takes on this same crime rate data and draws a much more believable inference in my mind. Levitt links the reduction in crime in NY (and LA and many other major cities that saw similar reductions in that time frame) to the Roe vs Wade supreme court decision. Making abortions legal, in his view, dramatically reduced teenage childbirth and unwanted, single parent children at the margins of society. This is what reduced crime twenty years later when that generation entered adulthood. A much more believable theory, and more consistent with all the facts in the last (like the reduction in LA crime, where there was no 'zero tolerance' policy by the police). Gladwell, you feel again, has crafted a theory and stuck with it, facts be damned.

Personally, I like to rely on the idea of close social relationships which drive individuals to action. A few weeks ago, I announced proudly in my J325 class that I was the only person in the room without a Twitter account. Secretly though, I had been considering creating one since the class had begun. Many of our conversations in class revolved around what was trending at Lehigh, or who had tweeted what to whom. I realized that the main reason I didn't have a Twitter account was because nobody else I knew did. And after spending ever-increasing time reading the hashtag #j325, and clicking from account to account in an attempt to keep up with what was going on in Lehigh's journalism department, I finally realized the value of Twitter (and subsequently caved to the pressure by creating an account).

Like the attendees of the open mic night last Friday in Lamberton Hall, I had been persuaded to join in not because of the "stickiness factor" involved with Twitter, nor because a compelling "connector" or "salesman" had sold me on the idea. I finally joined because I realized that the people in my existing networks were already heavily involved. And if I didn't get involved soon, I might miss out on the conversation.

By the way, if anyone is interested in laughing some more at Gladwell's expense, check out the Malcolm Gladwell Book Generator.

Monday, February 7, 2011

In defense of gray



Throughout my short life, I've encountered millions of choices. Many of these choices have been presented as dichotomies: do you like chocolate or vanilla? Are you a religious person, or an atheist? A Democrat or a Republican? Free Market Capitalist or Welfare Socialist? I've always hated these choices because there's always seemed to be something very divisive about them. Forcing people to pick sides, like the world is black and white and it is really easy to figure out which team you're on. And I've often felt alone in this. "There are two sides to every argument", they say. But I can't be the only one who sees the world in shades of gray... can I?

Hell no. This past month, the Arab world has stood up in defense of grayness, and I'm excited. Looking past the arguments about the degree to which social media has effected change in Tunisia, Egypt, Yemen and Jordan, I see people. And not just any people, I see people like me. Men my age, taking to the streets to protest day after day. Among the huge outflow of citizen journalism coming from Egypt last week, this video (yes, that video) and the picture above caught my eye. When I look at them, I see people similar to myself. Stuck in a dichotomy, between a rock and a hard place, they've been told for too long that they only have two choices. Settle for the stable autocracy, or take your chances with a turbulent theocracy; either way, the government is not your government. But on January 25th, like the Tunisians before them, the Egyptian people politely stood up from that hole between the rock and the hard place, dusted themselves off and told the world "we'll take it from here, thanks."

Maybe it's just the young idealist in me, but this got me PUMPED UP. I grew up in the suburbs of Washington D.C., and I've never been too far away from the capital. Both of my parents have worked for the government, and many of my friends growing up had parents who were government employees as well. But when Washingtonians talk about the government, it's always in those terms: the government, the fed, the president, the senate, the house. Whatever happened to our government? Our constitution begins with the phrase "We the people of the United States, in order to form a more perfect Union." It never dawned on me until this past week that I have never thought of the government as MY government, one that truly represents my interests and beliefs. And to be frank, I was never completely sure it should be my government. I've always kind of thought of the government as what old people keep themselves busy with while the rest of us are actually doing things. I've had this lingering sense that the government as I knew it has always been a little outdated, struggling to keep up with a society living on the cutting edge.

Okay, so maybe I got a little carried away with the hyperbole there. I did vote in the last two presidential elections, and I know our government isn't actually a retirement home. I like that President Obama seems to be a president who talks with the American public and not at them (Bill O'Reilly interview on Super Bowl Sunday anyone?), and I do talk about what our government is doing on a semi-regular basis with family and friends. But that doesn't fix the disconnect that seems to exist between what the people want from their government and what the government is actually doing. For that, we're going to need Clay Shirky's "Everybody."

In Shirky's book "Here Comes Everybody," he explains the institutional dilemma- a collective action problem. "Because the minimum costs of being an organization the first place are relatively high, certain activities may have some value but not enough to make them worth pursuing in any organized way. New social tools are altering this equation by lowering the costs of coordinating group action. The easiest place to see this change is in activities that are too difficult to be pursued with traditional management but that have become possible with new forms of coordination” (31).
Now let's put this in terms of government. Say your town has just authorized a contractor to install 35 closed-circuit security cameras all about town. There is a huge public uproar in response: some citizens are appalled by the invasion of privacy, while others think 35 cameras is not nearly enough to watch the whole town. So the town government decides to hold a town hall meeting, where people can come and voice their opinions about the new security camera contract. The town hall discussion will determine the agenda for the next town government meeting, which will be postponed to one week later in order to allow more time for the town hall. When all is said and done, the process has taken weeks longer than originally planned, with potentially hundreds of people spending countless hours taking transport to and from the town hall and city hall, arranging for sitters for their children, and preparing speeches and counter-speeches that cost hundreds of dollars in paper, staples, and man-hours.

But you didn't go to the town hall. On the first day, when the local paper announced the news of the security camera contract, you posted a link on your Facebook with the caption "too much?" A couple of friends "liked" it, a few people chimed in with comments, and you all agreed that you'd rather live in a town without security cameras if it were up to you. One of your friends, a city councilwoman, sees this on her news feed and reads it. Two weeks later, at the city hall meeting, she thinks about what you and your friends said, what you have contributed to the conversation, and she considers your opinions as she casts her vote.

Now I know this doesn't sound like revolution, but it damn well feels like it. Yes, it's mostly just a change in tools, but look how quickly the world has shrunk. Your simple action of posting that link, with the sparse, two-word caption, carries as much weight as any of your fellow citizens' letters, with a fraction of the effort. This is government at the speed of you, and the barriers to entry in the conversation are nearly non-existent. You no longer have to choose between excess action and inaction, between black and white. Now, you can demand your right to be gray.

Long live the slacktivists.

Wednesday, January 26, 2011

The digital masses



In 2008, this video surfaced on the internet after a NYC Critical Mass cycling event. It immediately went viral, with thousands of internet vigilantes demanding the name of the officer responsible (and his head on a plate). The video was posted on a Friday; by Monday morning, Officer Patrick Pogan had been stripped of his badge and gun.

Would Pogan have been caught, and properly punished, without the power of YouTube and the internet? Possibly, but with much less public vitriol and shaming. YouTube, and the widespread nature of the camera phone, has enabled citizens worldwide to become "internet vigilantes." When injustice arises, often conveyed through an inflammatory YouTube video of humans behaving badly (which are becoming increasingly common these days), these vigilantes use whatever information is available to them to seek out the real identities of the people involved and make sure the victim is safely taken care of. Sounds great, right? Sort of like a global neighborhood watch, people watching out for each other.

Except if anybody has ever been involved in a neighborhood watch, they know the system is far from perfect. There's always that guy, or maybe a few guys, who are really into keeping the neighborhood safe. He's constantly reporting minor infractions to the local police, who start to ignore him as the reports become more frequent. He starts to get fed up that the police still haven't arrested that damn 13-year-old with the paintball gun who keeps vandalizing the stop signs. One night, while dutifully patrolling the neighborhood for troublemakers and criminals, he spots little Jimmy up to some mischief and hastens to make a citizen's arrest. In his anger, he tackles Jimmy a bit too hard and accidentally breaks the kid's hand. And when the police show up, it is the neighborhood watch guy, not Jimmy, who ends up in the back of the squad car, screaming at the top of his lungs that he's not the threat to public safety, that kid with the toy gun is!

Internet vigilantes have garnered a reputation for taking things too far as well. This past summer, 11-year-old Jessica Leonhardt (better known as Jessi Slaughter) uploaded a YouTube video of herself and her father. In the video, she is crying while her dad yells angrily at the internet trolls who have been harassing her. Yelling things like "I dun backtraced it," "I called the cyber police," and "consequences will never be the same," Jessi's dad, and the video itself, soon became an internet phenomenon. The internet was eager to teach this misinformed Floridian family about the power of a digital mob. Jessi's real name, address, and phone number were posted to countless internet forums, and the mob set to work. Pizzas were ordered to the Leonhardt household, and the phone began ringing off the hook with prank calls. But it didn't take long for the pranks to get more insidious. The prank calls soon turned to death threats. People were calling and showing up at the house at all hours of the day. The family began to fear for both their daughter's safety and their own. The events culminated in Jessi being taken away from her family for nearly a week and placed in police protective custody. Eventually, the digital mob calmed down, and Jessi was able to return home.

So what sparked such widespread internet outrage? Why did Jessi's father feel the need to get involved in the first place? Well, apparently Jessi had been posting videos online for a while. One of a growing number of young girls learning that the internet is a great place for attention, Jessi had found a small following among people who wanted to hear about her day or her opinions on emo music. Others simply wanted to poke fun at her, trolling with comments to try to make her angry, much like a pesky little brother would do. However, Jessi ignored a crucial internet rule (don't feed the trolls!) and responded to these comments with a foul-mouthed video in which she tells all the "haters" to "suck her non-existent penis" and "get AIDS and die." Gleefully, devious internet users began reposting her obscenity-laden rant, encouraging further trolling to the point that her father felt forced to involve himself (which simply provided more food for the trolls).

Okay, so what does all this have to do with Dan Gillmor's "We the Media"? A lot, actually. While reading the book, I was struck by how outdated it seemed, even though it is only about six years old. When Gillmor wrote "We the Media" in 2004, YouTube didn't even exist yet (hard to believe, I know, but it's true!). Nick Denton, founder of Gawker Media, had not yet created The Consumerist, a blog that publicly shames companies for treating consumers badly (remember the Facebook TOS scandal?). And Wikileaks had not yet replaced Memoryhole and the Drudge Report as the leading provider of confidential scoops. It seemed like Gillmor's favorite concept was the blog, an idea that has become just one of many ways people publish their ideas online these days.

However, the deeper I read into the text, the more impressed I became with Gillmor's foresight. Even though YouTube did not exist, Gillmor realized the incredible value of video in journalism. "News organizations should issue a camera phone and digital camera to every member of the staff and urge people to shoot anything that even resembles news," he said. He predicted a rise in the use and value of camera phones, and of video in general, anticipating that as a consequence "keeping secrets... will be more difficult for businesses and governments." Enter The Consumerist, a website dedicated to opening up corporations to the public eye. Gillmor predicted that Nick Denton's style of micro-blogging would catch on, and catch on it did. Today, Denton's company Gawker Media owns or originally created a significant portion of the most influential blogs online, including the gadget blog Gizmodo, the internet gossip blog Gawker, the life-tip guide Lifehacker, the popular female-oriented blog Jezebel, and the gaming blog Kotaku, among others. Gawker blogs embrace conversation on their sites, encouraging users to post comments and tagging everything with a #hashtag to make it easier to find similar articles. I've mentioned The Consumerist in particular a few times because of the way it has torn down walls of corporate secrecy. Among its most valuable features is the Grocery Shrink Ray, which exposes the corporate practice of shrinking the size of products while keeping them at the same price. Nearly all of The Consumerist's stories are user-supplied, and the site's staff use these tips to live up to the site's motto: "Shoppers bite back."

So what's the point of all this nonsense about internet vigilantes, The Consumerist, and Wikileaks? I guess I'm eating my words from last week's assignment (I more or less committed the sin of being a Cluetrain skeptic!). I focused a little too heavily on the idea of markets, and not enough on the conversation. The internet puts billions of people at your fingertips, a power I severely underestimated in the past. These digital masses can help you shop, or they can get you fired. They can build incredible open-source software, or they can harass a family into police protective custody. They can even get you your own TV show. The only thing they can't do is be ignored.

Tuesday, January 18, 2011

Markets are conversations: that's great for them. Now shut up so I can go buy stuff


"The Cluetrain Manifesto" started off as a simple list. 95 theses, like those that Martin Luther nailed to a catholic church in 1517, written to herald an end to "business as usual". The first thesis was a simple, yet bold proclamation:

1. Markets are conversations

Now this is not an entirely new idea. Karl Polanyi wrote about markets as conversations in 1944 in "The Great Transformation." He argued that consumers rarely adhere to the strict economic rationality that someone like Milton Friedman might suggest. Instead, consumers take all sorts of factors into account when making a purchase, including social and cultural ones.

To put this in simpler terms, imagine you are buying green peppers. You want to purchase two green peppers, and you need them in time for tonight's dinner. Now most people would simply go to the nearest grocery store and buy the first green peppers available. But imagine you are different from those people. Imagine you are a foodie, or a professional chef. You won't just go to the nearest grocery store for your green peppers; you're going to drive an extra 20 minutes to hit up that farmers market you love. Bill and Marge, the husband and wife who own the place, greet you as you come in (you are a regular, after all) and let you know what's fresh. Maybe you walk out with your two green peppers, maybe you opt for something similar instead, but you'll probably incur much more cost for those peppers than if you had just gone down to the local grocery. You had to pay for the gas to get to the farmers market, you had to spend the time building a relationship with Bill and Marge, and you probably spent some time investigating other markets before you found this one. But that doesn't matter, because your green peppers are worth way more to you than the green peppers down at the local grocery. You enjoy finding the ingredients that best suit your needs, and you don't mind taking the time to do it.

These are the people the Cluetrain Manifesto was written for. Yes, the internet as you use it now satisfies your needs, but look how much better it could be! What the authors fail to fully appreciate is that the vast majority of consumers aren't searching for the perfect solution; they're simply searching for a solution. They were right in asserting that "markets are conversations," but some people prefer to have these conversations in real life rather than online. Some people simply want to get in, make their purchase, and get back to their lives.

Let me try re-framing this argument using something more technologically relevant today: smartphones. Now, I'll be the first to admit that I'm a bit of a geek. I've been fascinated with smartphones since the Palm Pilot came out back in the late '90s. Today's smartphones are leaps and bounds ahead of the Palm Pilot, essentially hand-held computers. The main competitors in the industry are Google, developer of the Android operating system, and Apple. After doing plenty of research, I finally chose to buy an Android phone instead of an iPhone. Android is open-source software, and its marketplace is open too: unlike Apple's App Store, you don't have to have an app approved by a mysterious board of reviewers with a vague set of standards. A programmer can develop an Android application, post it to his website or the Android Market, and it will immediately be available for download. To me, this is a valuable feature, because it allows me to use experimental applications written by aspiring developers to customize my phone's capabilities in ways that the iPhone simply can't. However, it also means I run a higher risk of downloading an app that carries a virus, makes my phone freeze, or simply does not function as described (all of which have happened). If I wanted to, I could take it a step further and "root" my Android phone. This would allow me to install custom operating systems, pushing my phone's customizability to its limits. However, rooting my phone would void my warranty and take up a lot of my time, time that I simply don't feel like spending just so I can make my phone automatically turn on the lights when I walk into a room. I am content with my Android phone, un-rooted and limited as it is.

In the same way, iPhone users aren't demanding infinite customization; they want trust. All applications must be reviewed and approved by Apple before they are available for download. When you download an app from the App Store, you know that it will work on your iPhone, you know that it won't carry a virus, and you can be reasonably assured it will do what it says it does. Now, you can jailbreak an iPhone, much like you can root an Android phone, but most users are using their iPhones right out of the box. They download apps that are ranked highly, have many downloads, or that their friends are using. This is not because they don't have an alternative; it's because they've looked at the alternatives and decided that they'd prefer to know what they're getting without spending lots of time in marketplace "conversations."

In the updated version of Cluetrain, Rick Levine says, "we never said it in so many words, but we strongly implied that conversation is an easy thing." Right you are, Rick, and while there are many consumers using the internet to improve their conversational skills, there are many more who simply want to remain silent. Take, for example, the vending machine. Its entire purpose is simplicity. Money goes in, candy comes out. Now, if you're using a vending machine, you know that the candy will cost more than it does at the store, and there's a small chance it might get stuck on its way down, but 99 times out of 100 you're going to get exactly what you expected from it. And if you don't, you can always go to the vending machine's owner to get your money refunded.

A huge portion of internet consumers are just like vending machine users. They want to get in, get their answer or their product, then get out. If they need to buy new earbuds for their iPod, they're going to go to Google, type in "earbuds," then purchase the cheapest pair they can find. They probably won't pore over product reviews beyond looking at numerical rankings (4 out of 5 stars, sounds good to me!). Now, a technophile would probably skip Google and go straight to Newegg or Amazon, where they know they can find cheap, honestly reviewed earbuds. They'd read the reviews for the earbuds in their price range, think over the purchase for a little while, maybe ask a friend or two for their opinions, then ultimately buy the earbuds that worked best for them.

The authors of Cluetrain would probably argue that the first consumer would engage in more marketplace "conversation" if he only had the opportunity. Yet he does! Consumers again and again are deciding against conversation in favor of simplicity and efficiency. The massive success of the iPhone compared to Android phones is one indicator, but there are indicators on a larger scale.

Searls and Weinberger heralded the rise of eBay as an incredible "virtual flea market." In its formative years, eBay seemed poised to completely revolutionize the way customers and vendors interact. Direct interaction meant that sellers could match buyers' needs more accurately, and since "markets are conversations," consumers would embrace this new way of shopping.

However, a quick look at eBay's history reveals a different story. In the beginning, listing fees for vendors were very low, and all sellers were treated equally. However, consumers were reluctant to buy from a vendor with a low number of sales, since that vendor's rating was based on such a small sample size. This was reflected in the rise of the big sellers: large companies started listing their products on eBay en masse, quickly racking up near-100% reputations with large scores next to them. In 2008, eBay struck a huge blow against small vendors. It raised listing fees while simultaneously increasing discounts for large vendors, changed the way search results were displayed to favor large vendors, and removed sellers' ability to give buyers negative feedback. Small vendors have now been stripped of their ability to compete with larger ones. Marketplace "conversation" has been abandoned in favor of efficiency and trust for the lowest common denominator of consumer.

Chris Locke's latest addition to Cluetrain, a chapter entitled "Obedient Poodles for God and Country," made me wonder if he hasn't spent the past 10 years cooped up in his basement cooking up conspiracy theories. His writing is a bit eccentric, but he does make a very interesting point about the internet and authority that relates to the idea of markets as conversations. In the original Cluetrain, Locke speculated on the internet's power to undermine authority. In the updated version, however, he declares that "ironically, authority seems to be making a comeback, using the very medium we naively thought might defeat it." Anyone who has witnessed the debate surrounding net neutrality, or who has studied the Great Firewall of China, is no doubt schooled in authority's so-called "comeback," aided by the internet.

Locke's ideal world is free of this authority, just as in the ideal world of Cluetrain's authors (and probably the majority of its readers), markets are conversations. However, maybe this isn't the ideal world of all consumers. Maybe, despite Weinberger's wishes, in the ideal world of many consumers, "the Internet [is] as easy to use and as high-quality as cable TV." Maybe, just maybe, these ideal worlds are able to exist simultaneously. I can envision a day when those who want to can engage in unlimited conversation in the marketplace, while those who don't can continue being coerced and controlled by vendors.

Actually, now that I think about it, that day may have already passed.