Ten Commands For A Digital Age

Economists are fond of the old adage that “there is no such thing as a free lunch”.  Yet, there are some who think that things like Google Search and Facebook really are “free”.  After all, you don’t have to pay money to use them, right?  If you’re not paying for something financially, however, it’s probably more expensive than you realize.  Here, then, is the rub: “If you are not paying for it, you’re not the customer; you’re the product being sold.” (Andrew Lewis on MetaFilter) So how does the process of online data collection and monetization work? Check out the video below for a nice explanation.

Anyway, since I recently reread Program or Be Programmed by Douglas Rushkoff, I thought I’d share his ten commands for a digital age.  For this post, I’ll spell out what these ten commands are along with some brief thoughts.

  1. Time: Do Not Always Be On — I think most of us can relate to this one.  Phone calls, texts, emails, blog posts, status updates, and tweets will rob us of our humanity, if we let them.  The trouble is that answering them more efficiently only exacerbates the problem: the more efficiently we respond, the more incoming information comes our way.  The human nervous system wasn’t meant to be constantly on call.  If you’ve experienced Phantom Vibration Syndrome, it might be time to put away the gadgets for an extended period.
  2. Place: Live in Person — Suppose it’s New Year’s Eve and you’re out with some friends.  One of them, always with his smartphone in hand, is looking for the best party.  He’s not content until he finds it, so he drags you and the rest of your friends from place to place chasing the best scene.  He’s able to do this because he gets continual updates on his phone (in case you’re wondering, this has actually happened to me).  We can only really live in one place at a time, yet the Foursquare check-ins, tweets, and status updates constantly draw people’s attention away from their immediate surroundings.  Instead of actually enjoying a dinner out, some people are more worried about making others think they are enjoying a dinner out.  Vanity becomes more important than happiness when we surrender our presence to the digital world.
  3. Choice: You May Always Choose None of the Above — We may not always be aware of it, but computing is biased towards urging us to make a choice.  Are you a libertarian?  Yes or no.  Notice that there isn’t any room to express nuanced beliefs when these questions are presented as if two answers were the only options.  Computers are built on bits and consequently create a world of binary choices that may mislead us (see the sketch after this list).
  4. Complexity: You Are Never Completely Right — Computers are biased towards a reduction of complexity.  In the words of D.H. Lawrence, “the map appears to us more real than the land”.  Let’s use Facebook as an example.  Facebook reduces the complexity of the varying types of friendship that exist in reality.  I may want to share certain things with some friends, but not others.  Facebook, however, continually forces us to sort people into buckets: groups we share some information with, but not all of it.  This is not a faithful map of our complex social lives.
  5. Scale: One Size Does Not Fit All — There is a bias on the Net towards abstraction.  Instead of creating a valuable resource, there is an economic incentive to aggregate the creative work of others as a form of nebulous value creation.  Instead of writing your own blog, why not just aggregate the work of the best bloggers?  A race to become the most meta site out there often ensues.  But this just adds layers of abstraction, which demeans the value that actual creators provide.  The financial world makes for a great analogy: first there were asset-backed securities (ABS), then CDOs, then CDOs-squared.  When should the abstraction stop?  Every level of abstraction seems more profitable than the one before it, but is any real value being created?
  6. Identity: Be Yourself — Anonymity on the Web has a tendency to bring out people’s inner trolls.  In some extenuating circumstances, anonymity is understandable.  Most of the time, though, anonymity removes the human element from our interactions and degrades the relationships we build with others online.  The beauty of the Web is that we have time to think carefully and review things before publishing them.  Being yourself forces you to own your words and should encourage you to be civil.
  7. Social: Don’t Sell Your Friends — The Internet is an inherently social tool.  Accordingly, it’s become increasingly difficult to separate some people’s professional identity from their personal identity.  This, of course, can be good or bad, depending on how we look at it.  Marketing is a powerful force, and the Web makes it even more powerful.  It’s important to remember that we shouldn’t misrepresent ourselves and exploit our friends for financial gain.  If you truly believe in a product or service, that is one thing, but peddling things just for the affiliate cash is shady.
  8. Fact: Tell the Truth — To put it simply, what you write online can be hard to erase.  Before you leave a comment, you should contemplate its permanence.
  9. Openness: Share, Don’t Steal — Creators spend a lot of time and energy on their work.  It’s easy to rationalize stealing another person’s work in such an open system. Here’s an example.
  10. Program or Be Programmed — This is obviously a false dichotomy (which is why I think it was a poorly chosen name for the book); however, I think it’s important to at least understand the tools and models we use.  Do you need to learn how to program?  Probably not.  Do you need to understand what programming is and that it exists?  Absolutely.
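
A minimal sketch of command 3 in code may help (my own illustration, not Rushkoff’s; the `Stance` type and function names are hypothetical). The point is that a data model can either forbid or permit “none of the above”:

```python
from enum import Enum
from typing import Optional

class Stance(Enum):
    YES = "yes"
    NO = "no"

def forced_choice(answer: Stance) -> str:
    # A form field typed this way bakes the binary bias in: every respondent
    # must be a YES or a NO, whatever their actual beliefs.
    return "libertarian" if answer is Stance.YES else "not a libertarian"

def free_choice(answer: Optional[Stance]) -> str:
    # Making "none of the above" a first-class value restores the option
    # that the binary form quietly removed.
    if answer is None:
        return "declined to be binned"
    return forced_choice(answer)

print(forced_choice(Stance.NO))  # the only two answers the binary form allows
print(free_choice(None))         # the answer Rushkoff says we may always choose
```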

Overall, I like these commands, although I did take issue with some of Rushkoff’s arguments in the book (especially the second time around).  Anyway, we live in a world with an increasing reliance on digital technology that many of us simply don’t understand.  Is this dangerous?  Well, it depends.  If we fetishize our digital tools, it certainly is dangerous.  However, digital technologies can also be a huge boon to humanity.


Is the Quantified Life Worth Living?

If for some strange reason you, my dear reader, wanted to know how many hours I slept or how many yards I swam on any given day in 2008, I would be able to tell you with fair precision. This is no joke. At the risk of revealing my obsessive-compulsive tendencies, I must admit that at one point in my mid-twenties I was a triathlete who kept track of such things. I was by no means an elite, even amongst the age-groupers, but I had a strong desire to become one. More important to me than my relative status, however, was my curiosity to see exactly how good (in this case, how “fast”) I could become in the sport of triathlon. I often wondered where, given my genetics and athletic background, my personal limits lay. The only way to find out, I thought, was to focus on measuring my performance and doing everything in my power to improve it.

There is a belief, popularized by Malcolm Gladwell in his book Outliers, that becoming excellent, or even world-class, at something requires 10,000 hours of practice; it’s called “the 10,000-hour rule”. I was always an athlete growing up, but I had virtually no experience in running long distances, swimming, or biking. If it was going to take 10,000 hours to see how good I could be at triathlon, I was hell-bent on making it happen. As such, I started obsessively measuring everything I could think of that pertained to my physical fitness. And I mean everything.

I kept a log of things like daily bodyweight, calories consumed, body-fat percentage, hours slept, hours and distances spent running, biking, and swimming, heart rate statistics, and more for well over a year. At the time, I was working in the world of high finance in Chicago and my days were spent poring over and analyzing data. Crunching numbers in finance is what gets you results, so I figured I might as well apply the same approach to my personal life. Aside from being able to brag about how many yards I swam in 2008, I was able to quantify, graph, and look for correlations amongst all of these metrics as they related to my performance (with hopes of improving it).
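
To give a flavor of what that correlation-hunting looked like, here is a minimal sketch; the numbers are invented stand-ins for my real 2008 log, and the variable names are hypothetical:

```python
import math

# Hypothetical excerpts from a training log: hours slept the night before
# and a 100-yard swim split (in seconds) the next day.
hours_slept = [6.0, 7.5, 8.0, 5.5, 7.0, 8.5, 6.5]
swim_split = [82.0, 79.5, 78.0, 84.0, 80.0, 77.5, 81.5]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A strongly negative value suggests more sleep went with faster splits --
# though, as the rest of this essay argues, a number in a spreadsheet is
# only a pale reflection of what actually makes a season go well.
print(f"sleep vs. swim split: r = {pearson(hours_slept, swim_split):+.2f}")
```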

Similarly, both my undergraduate and my current graduate studies of economics have seemed to focus almost exclusively on numbers and graphs. Many economists, in fact, suffer from a compulsion for the issues they deem quantifiable (the problem being that most issues of importance are not quantifiable). Without a doubt, many economists are guilty of trying to quantify the unquantifiable, and I must admit that at one point I was a member of this camp.

Reflecting back on it, I was well on my way to becoming what Kevin Kelly calls a “lifelogger”, a term for people who attempt to record and archive all the information in their lives. In my triathlon days, I had mistakenly become entranced by a silly doctrine that is still rampantly promoted at business schools around the globe, i.e., Peter Drucker’s management philosophy of “what gets measured gets managed”. In some ways, I blame my economics background and the world of finance for clouding my thinking (perhaps it’s still clouded). Finance, although not necessarily economics, is, in essence, all about the quantifiable. I suppose, however, it is only fair that I take some personal responsibility for being a fool in the past as well.

Recently, the marketing guru Tim Ferriss published a new book called The 4-Hour Body (disclaimer: I have not read it) that is all about body hacking and, from what I can gather, seems very similar to what I was doing in the past. The origins of body hacking, however, can be traced back to Benjamin Franklin, who famously kept a list of 13 virtues and put a check mark next to each one he violated. Franklin believed that collecting this type of data motivated him to refine his moral compass. Mr. Ferriss, again from what I understand, believes collecting this type of data can turn us into “super-humans”.

There is also an increasingly popular movement, which I was unaware of until the past year or so, called the Quantified Self movement; its tagline is “self knowledge through numbers”.[1] In a recent comment, a reader guided me to The Bulletproof Executive, which is run by Dave Asprey, a “bio-hacker” and a big player in the Quantified Self movement, who claims to have shaved 20 years off his biochemistry and increased his IQ by as much as 40 points through “smart pills”, diet, and biology-enhancing gadgets.

In a blog post about a Quantified Self conference he attended in May 2011, Kevin Kelly writes:

Through technology we are engineering our lives and bodies to be more quantifiable. We are embedding sensors in our bodies and in our environment in order to be able to quantify all kinds of functions. Just as science has been a matter of quantification — something doesn’t count unless we can measure it — now our personal lives are becoming a matter of quantification. So the next century will be one ongoing march toward making nearly every aspect of our personal lives — from exterior to interior — more quantifiable. There is both a whole new industry in that shift as well as a whole new science, as well as a whole new lifestyle. There will be new money, new tools, and new philosophy stemming from measuring your whole life. Lifelogging some call it. It will be the new normal.

The Financial Times also recently ran a very interesting piece titled “The Invasion of Body Hackers” that referenced both Mr. Ferriss and the Quantified Self movement. Reading about body hackers brought back some interesting memories (a few years ago I would have been gung-ho about quantifying even more bodily data points) and caused me to reflect philosophically on the implications of our increasingly measurement-obsessed society. Does quantifying ourselves make us better and healthier human beings?

Although the body is distinctly different from, say, finance or economics, I think there are some similarities; namely, that not everything that matters can or should be quantified. As it relates to body hacking, I’ll pontificate that an obsessive focus on measuring everything often detracts from the ability to actually live and enjoy life. In his book Enough, John Bogle (the founder and retired CEO of the Vanguard Group) writes about trying to quantify economic and financial issues, but I think what he says can be applied to the body as well. He writes:

Today, in our society, in economics, and in finance, we place far too much trust in numbers. Numbers are not reality. At best, they are a pale reflection of reality. At worst, they’re a gross distortion of the truths we seek to measure. But the damage doesn’t stop there. Not only do we rely too heavily on historic economic and market data; our optimistic bias also leads us to misinterpret the data and give them credence that they rarely merit. By worshipping at the altar of numbers and by discounting the immeasurable, we have in effect created a numeric economy that can easily undermine the real one.

As Bogle implies, the more we try to measure what’s important, the more it seems to escape us. I don’t, however, mean to imply that measuring is entirely useless or necessarily destructive; in some domains, I believe, it is incredibly valuable. A quote often attributed to Albert Einstein makes the point well: “Not everything that can be counted counts, and not everything that counts can be counted.” It’s a great point, but some things can be counted and do count in the long run; I think it’s important not to ignore that reality either.

I think even one of my favorite ancient philosophers would agree that reflecting on and analyzing our lives is important (although not necessarily quantitatively). The Stoic philosopher Seneca the Younger wrote: “Every day, we must call upon our soul to give an account of itself. This is what Sextius did. When the day was over and he had withdrawn to his room for his nightly rest, he questioned his soul: ‘What evils have you cured yourself of today? What vices have you fought? In what sense are you better?’ Is there anything better than to examine a whole day’s conduct?” As things like “lifelogging” become more and more popular, however, I believe there is a real danger of being deluded into thinking that the absolutely quantified life is worth living. From my experience, even a semi-quantified life can be destructive. The most beautiful parts of life are not quantifiable; there is, after all, more to life than data.

Notes:
[1] Here’s an interesting TED Talk by Quantified Self co-founder Gary Wolf.


The End of Human Labor

Strangely, I’ve often wondered if it would ever be possible to domesticate monkeys. Recently, there was an interesting article in National Geographic titled “Animal Domestication” that has me thinking about monkey labor again. Imagine if monkeys replaced humans in factories or if a chimpanzee showed up to clean your house. I could spell out countless humorous examples of monkey labor, but I don’t want to digress from a key point. Why does the thought of monkey labor make us feel uncomfortable? I think there is one fundamental reason and it’s the same reason we feel uncomfortable about technology becoming really good. Both threaten to make human labor obsolete.

The End of Work

Back in the 1930s, in his essay “Economic Possibilities for our Grandchildren”, John Maynard Keynes described “technological unemployment” as follows:

For the moment, the very rapidity of these changes is hurting us and bringing difficult problems to solve. Those countries are suffering relatively which are not in the vanguard of progress. We are being afflicted with a new disease of which some readers may not yet have heard the name, but of which they will hear a great deal in the years to come—namely, technological unemployment. This means unemployment due to our discovery of means of economising the use of labor outrunning the pace at which we can find new uses for labor.

But this is only a temporary phase of maladjustment. All this means in the long run is that mankind is solving its economic problem. I would predict that the standard of life in progressive countries one hundred years hence will be between four and eight times as high as it is today. There would be nothing surprising in this even in the light of our present knowledge. It would not be foolish to contemplate the possibility of far greater progress still.

Was Professor Keynes right? Imagine a world in which technology becomes so sophisticated that robots can do virtually any task. If this sounds a bit far-fetched, consider the Roomba, which has already intruded into the world of domestic chores like vacuuming. When robot labor becomes cheaper and more efficient than human labor, what, then, will be left for humans to do? One possibility for a society that didn’t require human labor would be the one the communists envisioned. Consider Leon Trotsky’s idealistic vision of communism:

All the arts – literature, drama, painting, music and architecture – will lend this process beautiful form. More correctly, the shell in which the cultural construction and self-education of Communist man will be enclosed, will develop all the vital elements of contemporary art to the highest point. Man will become immeasurably stronger, wiser and subtler; his body will become more harmonized, his movements more rhythmic, his voice more musical. The forms of life will become dynamically dramatic. The average human type will rise to the heights of an Aristotle, a Goethe, or a Marx. And above this ridge new peaks will rise.

One need only read a bit of history to learn about the disasters of communism. The utopia painted by the mind of Trotsky didn’t come to fruition; the realities of communism were far different and deplorable. Trotsky wanted people to believe that humans would be left to explore their creativity and ingenuity with abundant free time and that they’d spread the wealth evenly. There is, however, one distinct difference between the future I’m describing and communism: in the technology-dominated world, human labor isn’t needed to survive.

What, then, happens when humans no longer need human labor to survive? I think this idea is incredibly difficult for most people to fathom, but it ought to be one of the premier issues for political theorists. Those who don’t own any of the means of production would have no way to sell their labor, and thus no way to buy goods, no matter how cheap those goods became. It would become nearly impossible to acquire anything if you didn’t already own some means of production. The robots (or even the domesticated monkeys) would essentially create a new source of slave labor, owned by some individual or company, that would starve out the middle class. This, of course, brings up a few interesting ethical questions. Do we have an ethical obligation to robots? Will they ever deserve to be treated like humans? We, as a society, rarely talk about these questions in public discourse, but perhaps we ought to.

You might be skeptical of the claim that any of this could ever happen, but it might be more probable than you think. Driverless cars already exist, yet I have yet to hear anyone ask: what happens to all the cab drivers now? A hypothetical cab driver who makes films on the side must now go out into a market where it is becoming increasingly difficult to find a job that pays. He will struggle to get paid for his creative work, and, thanks to robots, he will struggle to get paid for physical labor. Paradoxically, as technology gets better, our society has (mostly unknowingly) gravitated towards a communist-like world in which people work for free for the collective. This has been a disaster in the past and it’s going to be a disaster in the future.

The End of the Middle Class

A healthy middle class is the key to a thriving democracy, but we have set ourselves up for plutocratic rule. The economic architecture of the Internet has created a “culture of free”: human-created content is free, and companies with search engines profit from it by selling advertising. Jaron Lanier calls this digital Maoism, or “cybernetic totalitarianism”, and I agree with him that it’s very dangerous.

The problem is that most things that are uniquely human creations are free on the Internet today. When you think about it, this is strange. At no time in the past did we expect all books, magazines, newspapers, photographs, movies, and music to be free, but for some reason we expect them to be free online. This speaks to a very important point: nothing a search engine does is valuable unless there is human-created content for it to query. Humans make search engines valuable, but search engines don’t make humans any more valuable. To believe otherwise is to destroy the concept of personhood. As machines (or monkey labor) become able to replace physical human labor, there will likely be no way left for humans to make a living. Unless you already have money, there will be no way to make any more of it.

One way this could change would be for us to pay to access each other’s creative content online. Many people are opposed to this, but I don’t really see how else there could be a middle class. How are musicians, artists, writers, journalists, and filmmakers supposed to make a living from their craft? For now, they can still sell their labor on the market as restaurant servers or cab drivers, but as I’ve discussed, those types of jobs may soon be obsolete for humans. Perhaps the most troubling aspect of the current digital economic structure is that just as the physical jobs are disappearing, the “culture of free” means no one is willing to pay for content online. We are welcoming, with open arms, a society that is part plutocratic and part communist.

Without a doubt, the machines are getting really good, and the “culture of free” is destroying the possibility of earning a living in ways that are generically human. Is it possible that our own human intelligence will eventually make human labor obsolete? Are humans capable of surviving in a world like this? More importantly, just because we can create a world like this someday, does that mean we should? Glorious civilizations have been destroyed before by unintended consequences, and technology will not necessarily save us from a similar fate. Whatever our answers to the questions I’ve posed in this essay may be, I think we need to spend more time thinking seriously about the potential ramifications of our digital economy.


Digital Ambivalence

I’ve read all of the responses to the 2010 Edge Question: Is the Internet Changing the Way You Think? I thought it would be fun to answer the question with an essay of my own.  Without further ado, here is that essay.

***

Before I can offer any speculation as to “how” the Internet is changing the way I think, I must first wax epistemic and answer a seemingly simpler question: is it possible for me to know if the Internet is changing the way I think? It’s not likely, but then again, how would I know? Now back to the original question. I don’t believe the Internet is, on any fundamental physiological level, changing the way I think. What it is doing, however, is changing what I think of it, in both positive and negative ways. I suppose I feel a sense of digital ambivalence towards the Internet. Allow me to explain why.

The Internet is, for the most part, a wonderful tool for sharing. Never before have the masses had access to the world’s knowledge at their fingertips, nor have they been able to add to the world’s knowledge base so easily. An unfortunate by-product of the unrestricted ability to share knowledge is the similar ability to share faux-knowledge. An important question thus arises: does the knowledge now available on the Web benefit our world more than the cost of all the faux-knowledge? Time will tell.

In my opinion, social media is certainly one of the more interesting phenomena on the Web. Again, it offers many wonderful advantages that I cannot deny; however, it comes with many ills I can’t ignore either. Through social media I’ve been able to communicate with brilliant and interesting people from all over the world, some of whom I’ve met face-to-face as a direct result.  My experiences in doing so have been nothing but positive. There are, however, aspects of social media I dislike; namely, that there is too much “noise”. Anyone who uses Facebook or Twitter has experienced the frustration of seeing their feed clogged up with nonsense. Interestingly, I’ve found that most of the noise is created by people I actually knew in the past but have not seen in almost a decade. Perhaps there was a good reason why I was no longer connected to them before the advent of social media. What I’ve come to realize is that one of the most important skills in the Digital Age is learning how to efficiently navigate through the “noise” on the Web. I’m continuing to work on and hone my own strategies.

Addressing the issue of our very humanity, I must ask: What does it mean to be human in the Digital Age? I think anyone with even an inkling of an addictive personality has discovered that information can be an addictive substance. I know I sure have. I don’t, however, necessarily consider the fact that I like to read academic research, articles, essays, and blogs a vice, although within certain contexts it certainly can be. Like other addictive substances, information can be dangerous in toxic doses. This, however, seems to be a problem with the human users and not the technology itself. I think Aristotle, if he were alive, would agree that the skill of phronēsis (practical wisdom) is more important than ever in today’s digital world.

Many people have argued that the Internet is robbing us of our human essence. In some ways, I think there is at least a grain of truth to that claim. Thanks to the ubiquity of smartphones, many people are expected to be available for work calls and email at all times of the day. I think this is largely a cultural issue, but nonetheless, it is an ugly abuse of technology.

Many people are also spooked by the idea of the Singularity. This fear, in my opinion, amounts to technological hubris. The Internet is a powerful tool, but it can’t think for itself. What makes the Internet powerful are the people who use it, and that’s easy to forget.

I’ll leave it to the reader to ruminate further over their own digital ambivalence.


Why Information Overload Matters

Scott Berkun recently proposed the following hypothesis about information overload: “It doesn’t make the world any worse to add more information to it, since we can’t be/feel more overloaded than we already do.”  This short essay is in response to his hypothesis.

***

Is all information created equal?  The simple answer is ‘no’: some information is simply better than other information.  The world is indeed worse off when “junk” is posted on the Web, because I must use part of my capacity for consuming information to sort through irrelevant junk.

Consider the following example.  Suppose you wish to get from X to Y in a Euclidean world and are seeking out information (directions) for the trip. It is possible that there are many different ways to get there; it is also conceivable that there are many routes that won’t get you there. Does any of this matter? Clearly, I think the answer is ‘yes’: there is a well-defined objective goal of getting from X to Y, so it’s fair to assume that directions that don’t get you there are worth less than those that do.

Let’s examine the assumption that information overload is indeed a constant. I take that to mean that at some threshold I can no longer physically consume any more information. To simplify my argument, let’s say an individual’s threshold is 10 articles a day. Suppose that I must sift through three articles containing directions to Y, carefully analyzing them in order to find the “correct” directions that will actually get me there. I end up wasting two articles’ worth of my information capacity on irrelevant noise. This is an example of how the world can be worse off with more information.
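
Here is a back-of-the-envelope sketch of that arithmetic, under the assumed threshold of 10 articles a day: if capacity is fixed and junk has to be read before it can be recognized as junk, the useful share of a day’s reading falls in direct proportion to the noise.

```python
DAILY_CAPACITY = 10  # articles I can physically consume per day (assumed threshold)

def useful_articles(junk_fraction: float) -> float:
    # Junk still consumes capacity (it must be read to be identified as junk),
    # so only the non-junk remainder of the fixed budget delivers relevant information.
    return DAILY_CAPACITY * (1.0 - junk_fraction)

for junk in (0.0, 0.3, 0.6, 0.9):
    print(f"{junk:.0%} junk -> {useful_articles(junk):.0f} useful articles/day")
```

Total consumption never changes in this toy model; only its quality does, which is exactly the sense in which more junk makes the world worse off.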

The more noise that is out there, the harder it is to find the best and most accurate information. Sure, there is a limit to what I can consume, but the quality of the information I consume can vary. Again, the more noise out there, the more of my fixed capacity I must spend sorting through “junk” to get to the relevant stuff; thus, it is very possible for the world to be worse off with more “junk” information on the Web.