You Are Not A Gadget

James Murphy


[ame="http://www.amazon.com/You-Are-Not-Gadget-Manifesto/dp/0307269647"]http://www.amazon.com/You-Are-Not-Gadget-Manifesto/dp/0307269647[/ame]


Jaron Lanier, a Silicon Valley visionary since the 1980s, was among the first to predict the revolutionary changes the World Wide Web would bring to commerce and culture. Now, in his first book, written more than two decades after the web was created, Lanier offers this provocative and cautionary look at the way it is transforming our lives for better and for worse.

The current design and function of the web have become so familiar that it is easy to forget that they grew out of programming decisions made decades ago. The web’s first designers made crucial choices (such as making one’s presence anonymous) that have had enormous—and often unintended—consequences. What’s more, these designs quickly became “locked in,” a permanent part of the web’s very structure.

Lanier discusses the technical and cultural problems that can grow out of poorly considered digital design and warns that our financial markets and sites like Wikipedia, Facebook, and Twitter are elevating the “wisdom” of mobs and computer algorithms over the intelligence and judgment of individuals.

Lanier also shows:

• How 1960s antigovernment paranoia influenced the design of the online world and enabled trolling and trivialization in online discourse.
• How file sharing is killing the artistic middle class.
• How a belief in a technological “rapture” motivates some of the most influential technologists.
• Why a new humanistic technology is necessary.

Controversial and fascinating, You Are Not a Gadget is a deeply felt defense of the individual from an author uniquely qualified to comment on the way technology interacts with our culture.


A Q&A with Author Jaron Lanier

Question: As one of the first visionaries in Silicon Valley, you saw the initial promise the internet held. Two decades later, how has the internet transformed our lives for the better?

Jaron Lanier: The answer is different in different parts of the world. In the industrialized world, the rise of the Web has happily demonstrated that vast numbers of people are interested in being expressive to each other and the world at large. This is something that I and my colleagues used to boldly predict, but we were often shouted down, as the mainstream opinion during the age of television’s dominance was that people were mostly passive consumers who could not be expected to express themselves. In the developing world, the Internet, along with mobile phones, has had an even more dramatic effect, empowering vast classes of people in new ways by allowing them to coordinate with each other. That has been a very good thing for the most part, though it has also enabled militants and other bad actors.

Question: You argue the web isn’t living up to its initial promise. How has the internet transformed our lives for the worse?

Jaron Lanier: The problem is not inherent in the Internet or the Web. Deterioration only began around the turn of the century with the rise of so-called "Web 2.0" designs. These designs valued the information content of the web over individuals. It became fashionable to aggregate the expressions of people into dehumanized data. There are so many things wrong with this that it takes a whole book to summarize them. Here’s just one problem: It screws the middle class. Only the aggregator (like Google, for instance) gets rich, while the actual producers of content get poor. This is why newspapers are dying. It might sound like it is only a problem for creative people, like musicians or writers, but eventually it will be a problem for everyone. When robots can repair roads someday, will people have jobs programming those robots, or will the human programmers be so aggregated that they essentially work for free, like today’s recording musicians? Web 2.0 is a formula to kill the middle class and undo centuries of social progress.

Question: You say that we’ve devalued intellectual achievement. How?

Jaron Lanier: On one level, the Internet has become anti-intellectual because Web 2.0 collectivism has killed the individual voice. It is increasingly disheartening to write about any topic in depth these days, because people will only read what the first link from a search engine directs them to, and that will typically be the collective expression of the Wikipedia. Or, if the issue is contentious, people will congregate into partisan online bubbles in which their views are reinforced. I don’t think a collective voice can be effective for many topics, such as history--and neither can a partisan mob. Collectives have a power to distort history in a way that damages minority viewpoints and calcifies the art of interpretation. Only the quirkiness of considered individual expression can cut through the nonsense of the mob--and that is the reason intellectual activity is important.

On another level, when someone does try to be expressive in a collective, Web 2.0 context, she must prioritize standing out from the crowd. To do anything else is to be invisible. Therefore, people become artificially caustic, flattering, or otherwise manipulative.

Web 2.0 adherents might respond to these objections by claiming that I have confused individual expression with intellectual achievement. This is where we find our greatest point of disagreement. I am amazed by the power of the collective to enthrall people to the point of blindness. Collectivists adore a computer operating system called Linux, for instance, but it is really only one example of a descendant of a 1970s technology called UNIX. If it weren’t produced by a collective, there would be nothing remarkable about it at all.

Meanwhile, the truly remarkable designs that couldn’t have existed 30 years ago, like the iPhone, all come out of "closed" shops where individuals create something and polish it before it is released to the public. Collectivists confuse ideology with achievement.

Question: Why has the idea that "the content wants to be free" (and the unrelenting embrace of the concept) been such a setback? What dangers do you see this leading to?

Jaron Lanier: The original turn of phrase was "Information wants to be free." And the problem with that is that it anthropomorphizes information. Information doesn’t deserve to be free. It is an abstract tool; a useful fantasy, a nothing. It is nonexistent until and unless a person experiences it in a useful way. What we have done in the last decade is give information more rights than are given to people. If you express yourself on the internet, what you say will be copied, mashed up, anonymized, analyzed, and turned into bricks in someone else’s fortress to support an advertising scheme. However, the information, the abstraction, that represents you is protected within that fortress and is absolutely sacrosanct, the new holy of holies. You never see it and are not allowed to touch it. This is exactly the wrong set of values.

The idea that information is alive in its own right is a metaphysical claim made by people who hope to become immortal by being uploaded into a computer someday. It is part of what should be understood as a new religion. That might sound like an extreme claim, but go visit any computer science lab and you’ll find books about "the Singularity," which is the supposed future event when the blessed uploading is to take place. A weird cult in the world of technology has done damage to culture at large.

Question: In You Are Not a Gadget, you argue that the idea that the collective is smarter than the individual is wrong. Why is this?

Jaron Lanier: There are some cases where a group of people can do a better job of solving certain kinds of problems than individuals. One example is setting a price in a marketplace. Another example is an election process to choose a politician. All such examples involve what can be called optimization, where the concerns of many individuals are reconciled. There are other cases that involve creativity and imagination. A crowd process generally fails in these cases. The phrase "Design by Committee" is treated as derogatory for good reason. That is why a collective of programmers can copy UNIX but cannot invent the iPhone.

In the book, I go into considerably more detail about the differences between the two types of problem solving. Creativity requires periodic, temporary "encapsulation" as opposed to the kind of constant global openness suggested by the slogan "information wants to be free." Biological cells have walls, academics employ temporary secrecy before they publish, and real authors with real voices might want to polish a text before releasing it. In all these cases, encapsulation is what allows for the possibility of testing and feedback that enables a quest for excellence. To be constantly diffused in a global mush is to embrace mundanity.


Discuss...
 
I've always found it funny how people think that "open-source" is the absolute be-all end-all best way for software to be, when it seems like it could very easily lead to massive fragmentation and lack of standardization.

Also, I'm not entirely convinced that individual voices are being lost; I know I always go to Cnet and Phonedog for my gadget reviews, Roger Ebert for my movie reviews, and Gamespot for my VG reviews, for example, because I've come to trust the viewpoints of the specific authors (who I know by name) on those websites; I'm sure that many other people could say the same for the news reports, scientific articles, economic analyses, etc. etc. that are important to them.
 
I will say his comment on the singularity is a bit of a misrepresentation.

it's not really... i've read The Singularity Is Near, and i think he's pretty dead on in his characterization of it, even if it is a bit simplified. i'm not at all sure that his conclusions about Kurzweil's work are sound, but the basic characterization of it is spot-on, IMO.
 
I read this a while back, pretty damn interesting. It's fairly easy to say people will always misuse the potential of new technology and go down roads that are arbitrarily chosen by a few and then followed by many. The internet has such amazing power and potential but we mostly get spam, porn and facebook. :lol:
 
it's not really... i've read The Singularity Is Near, and i think he's pretty dead on in his characterization of it, even if it is a bit simplified. i'm not at all sure that his conclusions about Kurzweil's work are sound, but the basic characterization of it is spot-on, IMO.

I've read it too. And I'd say his characterization of it is very opposite to the intentions of the book.
 
then you read a different book than i did Drew.... because the one i read proposes a time when technology will have advanced far enough to essentially encode one's experiences, mind, and somehow consciousness, into computers and basically become immortal in this way. As i said, it's a very simplified, condensed version of Kurzweil's thesis, but it is what's in the book, and that is what Lanier related. hardly the opposite.

granted, the "evolution" of humans via GNR (Genetics, Nanotechnology & Robotics) that he extrapolates from is much more complex than simply saying "encoded into a computer", but the net result is the same; your mind, your thoughts, etc., have to be encoded and stored, and essentially you "transcend biology", as he puts it. that's why i said before that what Lanier said is just a simplified version of Kurzweil's thesis... and definitely not the opposite.
 
Thanks for this post. I think I'll have to read this. It sounds very interesting indeed, and somewhat inline with some of my thoughts.
 
then you read a different book than i did Drew.... because the one i read proposes a time when technology will have advanced far enough to essentially encode one's experiences, mind, and somehow consciousness, into computers and basically become immortal in this way. As i said, it's a very simplified, condensed version of Kurzweil's thesis, but it is what's in the book, and that is what Lanier related. hardly the opposite.

granted, the "evolution" of humans via GNR (Genetics, Nanotechnology & Robotics) that he extrapolates from is much more complex than simply saying "encoded into a computer", but the net result is the same; your mind, your thoughts, etc., have to be encoded and stored, and essentially you "transcend biology", as he puts it. that's why i said before that what Lanier said is just a simplified version of Kurzweil's thesis... and definitely not the opposite.

What I took from it was the notion that we could create a hybridisation of the human form with technological advancements. Nanobots to control our heart rate, to repair damage done to our organs, etc., which would then lead on to networking - a literal interpretation of a collective unconscious, if you will.

Which is a very different thing to downloading thoughts into a computer.

Your interpretation is a very simplified one, and thus I don't think it is quite in line with Kurzweil's thesis.

Transcending biology is basically just a way of saying we should correct our biological faults, extend our lives, and live happier existences. He even talks about getting rid of death.

The problem with Kurzweil, as I understand it, is that his thesis is very blue-sky thinking. He doesn't make much of an argument as to why these things couldn't work. He doesn't address mankind's innate greed, and innate drive to connect with something higher than himself.

That's kind of where a lot of futuristic thought falls down. Bit too solipsistic.

But yeah... I just think the simplification of his ideas presented doesn't really represent the ideas too well. Seems like a lazy interpretation to me.
 
a slight correction on Jaron Lanier: the singularity has nothing to do with human consciousness being uploaded to a computer, and i have no idea how he reached such a conclusion.

the singularity is a term from AI computer science: the point where a true AI is created that can make a faster, better version of itself, which in turn can make an even faster, better version of itself, and so on, with exponential growth.

that is all.
 
no drew.... it was not MY characterization of Kurzweil's thesis.. it was Lanier's... i simply stated that it's a simplified version of part of what Ray Kurzweil posits in The Singularity is Near... i did not make a judgement, one way or the other, about whether or not i think his theories hold any water. Thanks though, for pointing out what i already pointed out: that Lanier's take on Kurzweil's Singularity was a very simplified version.... right after saying it was the opposite thing, lol.... and thus not "very different" at all.. just simplified. lazy, probably yeah... but his point is valid regardless, whether or not you agree with it.

anyway, quite aside from the book, Kurzweil has done numerous video interviews wherein he definitely posits the continuation of consciousness, beyond the human body, existing as part of highly advanced computer systems and networks... able to interact with and control robots, etc.... it's all science fiction at this point, regardless.

and no, dcdanman... you are focusing on "Moore's Law", which is a primary component of Kurzweil's thesis, but certainly not the totality of it... the end result is a merging of human consciousness into a non-biological medium... "computers", albeit something far more advanced and universally networked than what we know of computers today.

so again... Lanier did get it "right".. just very simplified.... for his purpose, since he really wasn't debating Kurzweil, but rather just warning of the dangers of open systems where the collective always trumps the individual, it was close enough.... the end result being the same... a collective.

FTR, i haven't once said whether or not i agree with either of them.... just posting a link to a book i think is a worthy read, to spark discussion.

i do however, agree with much of what Lanier has to say in his own book... which is not about Ray Kurzweil or his theories, nor is this thread... it's about the book that's linked in the OP.
 
The thought of human consciousness becoming one with computers is a scary one.
 
I came to an identical conclusion about Kurzweil as Lanier did, and I believe I even compared the singularity to a religion in an earlier thread on this forum, just like Lanier. I will have to read some of his material. I don't agree completely with everything he says here, but I find this very interesting as I've never heard of him until now.

I'm pretty sure it was drew drummer who argued with me for several hours about Kurzweil as well :lol: good times. And good to know I'm not the only one who feels this way!
 
no James, Moore's law relates to the number of transistors that can fit on a computer die, and whilst it's held true for a long time (doubling every 18-24 months), it's tapering off, as we're reaching the limits of how small silicon-based transistors can go. I have no doubt that some alternative technology will come along to alleviate this issue, but even if a transistor could be shrunk to a single atom, that's still an upper limit on how many transistors you can fit.
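
just to put a rough number on that ceiling, here's a quick back-of-envelope in Python - the 50 nm and 0.2 nm figures are my own illustrative assumptions, not real process numbers:

[code]
import math

# Back-of-envelope sketch; the 50 nm and 0.2 nm figures below are illustrative
# assumptions, not measurements.
feature_nm = 50.0   # assumed current transistor feature size
atom_nm = 0.2       # rough diameter of a silicon atom

# Doubling the transistor count on a fixed die area halves the area per transistor,
# i.e. shrinks the linear feature size by a factor of sqrt(2) per doubling.
doublings_left = math.log2((feature_nm / atom_nm) ** 2)

print(f"~{doublings_left:.0f} doublings left before single-atom transistors")
print(f"~{1.5 * doublings_left:.0f}-{2 * doublings_left:.0f} years at 18-24 months per doubling")
[/code]

whatever the exact figures, the point stands: the exponential run has a hard physical floor.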

the singularity (or, more generally, the technological singularity) is primarily concerned with AI improving itself faster and faster each time, with exponential growth occurring.

well, either that, or i'm in trouble when i come to do my finals in june.

thanks,
 
There are a lot of different interpretations of the singularity, but Kurzweil's definition involves the creation of a superintelligence that would surpass humans as the primary shapers of world history.
 
yes, that is correct.

it's the point at which AI is able to improve itself; each time it does, it gets faster/better, until the creation of a super-intelligence (whether growth could be infinite or not is an unanswered question), after which it is impossible to predict what will happen.
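
to make the "faster each time" bit concrete, here's a toy sketch in Python - the growth rule is a made-up assumption for illustration, not anything taken from Kurzweil or Lanier:

[code]
# Toy model of recursive self-improvement; the "1/c time per doubling" rule is a
# made-up assumption for illustration.
capability = 1.0   # relative capability of the starting system
elapsed = 0.0      # time spent so far, in arbitrary units

for generation in range(1, 11):
    elapsed += 1.0 / capability   # a more capable system improves itself sooner
    capability *= 2.0             # each cycle doubles capability
    print(f"gen {generation:2d}: capability x{capability:6.0f} at t = {elapsed:.3f}")

# elapsed approaches 2.0 while capability grows without bound: the improvements
# pile up inside a finite window, which is the "runaway" intuition.
[/code]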

again, it has nothing to do with the uploading of human consciousness to a machine.

personally, i think that it's possible for a human mind to be copied onto a machine (and for a live individual to exist in such a state), but i'm not sure about the transfer of consciousness...

thanks,
 
Well... perhaps the singularity *is* like a religion in that it purports to be a universal truth, but fuckwits like us still interpret it in many different ways! :lol:

In all seriousness, I think the comparison to a religion is flawed. There are no supernatural elements, there are no gods being mentioned, there are no airy-fairy concepts that are difficult to latch on to, and there is no reliance on concepts of good, evil, spirituality, and immortality.

If anything it is science taken to its ultimate logical conclusion.

Kurzweil promotes improvement, not replacement. He talks about adding to and modifying our existing framework. Downloading consciousness is not his agenda, and a direct quote where he says that would be good to see. His agenda is this: we have natural flaws, and we can improve them.

Now *my* problem with it is that it is a bit of a utopian pipe dream. But that's a different criticism to the one mentioned. I feel about Kurzweil's singularity theory as I do about the Venus Project, et al.

At any rate - this book sounds like a good read.