Philosophy topic: the global society

Well, the example stated hardly disproves "absolute power corrupts absolutely." What if the biggest of said corps bought out the other two? Then we would be fucked so hard our colons came out.
 
Competition in the free market would not be enough on its own to rein in multinational conglomerates.

How are the raw materials controlled? Besides, your statement is a bold assertion, since we have never seen a market without government protections, subsidies, and grants.

@SS: Then start a competitor.
 
No one starts a competitor in order to improve people's freedoms, so all the corp has to do is make sure it's not very profitable to try, or else buy them out.
 
If it's not very profitable to try, that's good for consumers, and a corporation cannot buy out the competition forever. This is why there are so many barriers to starting a new business: the existing industry leaders don't want competition.
 
I'm sort of surprised by Grant's seeming (it could be him playing devil's advocate) support of laissez-faire capitalism. I support free markets, but I feel that regulation is necessary, especially from the top down. It's my opinion that starting a small business is too difficult, what with all the licensing and taxation that is required. However, in the world of big business, anti-trust regulations (as an example) have basically lost their teeth.

Well, the example stated hardly disproves "absolute power corrupts absolutely." What if the biggest of said corps bought out the other two? Then we would be fucked so hard our colons came out.

I probably should not have gotten started on the tired old "public vs. private sector" debate; it's not really the direction I was going for. That's what I get for not being sober when I started the thread, though :p

A lot of people say that since the world "opened up" around 1989-1991, people have clung to their ethnic, religious, and national identities like a child to his teddy bear on a dark night. Quite a few catastrophic massacres took place just after that period, and at the time they were understood as ethnic conflicts that were part of long, intractable rivalries. Now, the point here is that this was how the average participant in those conflicts (Rwanda, Bosnia, Central Asia, etc.) saw it. The people actually behind those conflicts made tactical decisions to consolidate their power and for financial gain. How quickly people took the bait of these supposed ancient hatreds and rivalries demonstrated certain human tendencies, though.

Anyway, on the subject of the internet, I think people like Rupert Murdoch have kind of been fucked by it. People, starting with my generation I imagine, will get sick of media that is not interactive, which undercuts propaganda to a degree.

Yeah, strong feelings about culture and heritage certainly make people more easily persuaded by propaganda or rhetoric that presents their country's domestic problems as the result of an external threat.

The internet is helping to break these barriers down in many cases, perhaps most notably in the middle east now with the Arab Spring going on. I think this trend will continue, and as people from different 'worlds' learn more about each other they will come to respect each other more and care less for divisive government rhetoric. The breakdown of language barriers (i.e. more people in developing countries learning English) is still pretty crucial to this, though.

That still leaves the problem of "unifying" government rhetoric, however -- i.e. proposals to promote cultural/economic/political consolidation in the name of "let's all get along".
 
The Arab Spring is the result of CIA/corporate instigation, not genuine populist uprisings. The new regimes taking the place of the old are quite repressive. Of course, the media isn't mentioning that. Egypt, for instance, was essentially a military coup d'état.
 
zabu of nΩd;10043457 said:
Many would say that we now live in a "global society" thanks to our communication networks and information technologies and cool shit like that. Do you think this global society is making the traditional geographically bound societies and institutions (i.e. ethnicities, nationalities, educational institutions, religious institutions, or even governments) obsolete?

Short answer to this question: yes, but as long as class warfare plays a role in modern society, you won't see a decrease in public "awareness" (for lack of a better word) of different ethnicities or nationalities.

As of today, society uses diversity as a way to distract itself from class inequality, which in turn only bolsters ethnic distinction and, to put it bluntly, racism. So we still have a big hurdle to overcome in that area.

zabu of nΩd;10043457 said:
Could technology be said to have put a sort of anarchist system in place that allows people some measure of economic independence and a way of getting around all the problems in politics and the "sphere of institutions"?

Honestly, I don't think so. If we're specifically addressing the internet, then there's no doubt it's provided a new kind of forum for individuals to speak their minds; but then again, only for individuals who can afford a computer and know how to use one. Most technology requires some level of expertise to operate, and there is a significantly large group of people who lack the knowledge needed.

In the long run, however, when technology becomes self-sufficient to the point of not needing human assistance, that problem will be overcome. Of course, a whole new one presents itself as well.

zabu of nΩd;10043457 said:
Also, would you say technology is working more "for" or "against" the current economic crisis (i.e. debt and unemployment problems)? Could it potentially save us from the crisis? Will the bulk of society be able to escape disaster under a "service-oriented market economy"?

Discuss!

Again, I feel that most jobs created by new technologies are going to be looking for educated and technologically experienced individuals as employees, which still excludes a large portion of the population.

zabu of nΩd;10043509 said:
Can corporations take over the role that governments have traditionally served in maintaining social order (i.e. by regulating the flow of capital)? In many ways they have become the new "agents of worldwide change" in managing the infrastructure and economies of the world, and even if there's a lot that sucks about them, having them around is possibly preferable to leaving 'sovereign' governments to run us into the ground. I am assuming, of course, that this question has significance from a political orientation / public policy perspective.

I think if we see technology continue in the way that it has been, the nature of corporations and financial institutions will change drastically. Perhaps one of the best fictional sources that offers a startlingly fresh look at what corporations will be like in the future is Charles Stross's Accelerando.

However, if we buy into Stross's vision in that novel (which is heavily influenced by Ray Kurzweil, although Stross is poking fun at him a bit), then corporations of an advanced nature will be equally as totalitarian as the worst form of governments are now.
 
The giant multinationals (and big governments for that matter) love Kurzweil's vision of the future, and are going to dump as much capital as possible into making it happen. His hopes of machines passing the Turing test and full integration of genetics/nanotechnology/robotics by 2030 are a bit overly optimistic, however.
 
For some reason, Kurzweil's theories seem to embody (for many) a kind of culmination of global capital and information technologies. I'm not sure why, because from what I can see the advent of the technological singularity will result in the obsolescence of free market economics and liberalism as we know it.
 
The singularity can only happen with an elite to direct/enforce/encode/control it, so it is definitely happening specifically to end the possibility of freedom in all aspects. I find Kurzweil's predictions to be as revolting as they are likely, should the apocalypse not predate the singularity.
 
You're assuming that whatever sentient AI brings about the singularity is developed under the control of an elite (as opposed to some university lab, hacker group, etc), and that this elite understands enough about the AI's behavior to be able to keep it under their control without the AI figuring out some way to escape them and propagate itself to the wider world.
 
It's still arguable whether "full" AI could be achieved. The singularity, as in the human/machine meld, would require a massive amount of capital and brute force from the state/elite to bring about absolute compliance.

I, for one, would not comply even if it meant death, since to become inhuman is essentially death anyway, imo.
 
That's why the elite class would never embrace the technological singularity. They might be willing to take neural mapping to its logical end ("uploading" the complete contents of a human brain into a computer), which would make immortality possible, but full AI is too much of a threat to their power. I'm also skeptical that full AI could ever be achieved, at least without self-replication (which would also be harmful to the elite's control). Humans could never program a computer, in a top-down fashion, to have the emotional capacity, intuition, and pattern recognition that we have. Self-replication would be necessary for the computer to "evolve" these characteristics. I think at most we'll see 90% of full AI in computers, along with widespread cybernetics and genetic engineering in humans.
 
zabu of nΩd;10056278 said:
You're assuming that whatever sentient AI brings about the singularity is developed under the control of an elite (as opposed to some university lab, hacker group, etc), and that this elite understands enough about the AI's behavior to be able to keep it under their control without the AI figuring out some way to escape them and propagate itself to the wider world.

True; and honestly, I think that, even if some corporation or financial entity somewhere is funding the research and development for a full AI project, if computers actually achieved sentience, it would have little to do with any final flick of a switch. There's no control when dealing with this sort of technology, because it's beyond technology as we understand it.

It's still arguable whether "full" AI could be achieved. The singularity, as in the human/machine meld, would require a massive amount of capital and brute force from the state/elite to bring about absolute compliance.

I, for one, would not comply even if it meant death, since to become inhuman is essentially death anyway, imo.

I don't think of the singularity as a "human/machine" meld; it's the moment when technology is able to advance at sentient levels without the input/regulation of human monitors. Posthuman might certainly become a form of life after the singularity; but the singularity itself is pure AI, that is: wires and circuits.

That's why the elite class would never embrace the technological singularity. They might be willing to take neural mapping to its logical end ("uploading" the complete contents of a human brain into a computer), which would make immortality possible, but full AI is too much of a threat to their power. I'm also skeptical that full AI could ever be achieved, at least without self-replication (which would also be harmful to the elite's control). Humans could never program a computer, in a top-down fashion, to have the emotional capacity, intuition, and pattern recognition that we have. Self-replication would be necessary for the computer to "evolve" these characteristics. I think at most we'll see 90% of full AI in computers, along with widespread cybernetics and genetic engineering in humans.

We're already seeing more of the ruling/elite class rejecting the technological singularity through attempting regulation of the internet.
 
I don't think of the singularity as a "human/machine" meld; it's the moment when technology is able to advance at sentient levels without the input/regulation of human monitors. Posthuman might certainly become a form of life after the singularity; but the singularity itself is pure AI, that is: wires and circuits.

Transhumanism then? I believe this is also one of Kurzweil's keystones.

We're already seeing more of the ruling/elite class rejecting the technological singularity through attempting regulation of the internet.

I disagree on the rejection aspect. They merely seek to control it and have exclusive use of its benefits. DARPA, among other lesser-known groups, is as hard at work on the singularity as it was on the internet.
 
Transhumanism then? I believe this is also one of Kurzweil's keystones.

I think transhumanism's more appropriate; but it all gets jumbled together. I think of posthuman as a biomechanical adaptation.

I disagree on the rejection aspect. They merely seek to control it and have exclusive use of its benefits. DARPA, among other lesser-known groups, is as hard at work on the singularity as it was on the internet.

I may have jumped the gun with that; I would agree that corporations and firms are certainly pursuing technological advancement. However, if there are any individuals on those boards who have even the slightest notion of what the singularity might mean, I can't imagine they want to cross that final threshold.

The regulation of information seems, in my opinion, to be directly at odds with progress in the field of artificial intelligence. Perhaps those involved with DARPA are unaware of this, but the singularity, as the potential next stage of information and artificial-consciousness technologies, requires the liberation of information, not its containment.
 
I just wanna say fuck you Keynesians for fucking up the economy

and fuck you Greece for fucking up the euro.