I’ve been reading quite a bit of transhumanist literature recently, both arguments for and against it. I must say, I’m beginning to think that the biggest hurdle to any kind of transhumanist or technological-singularity idea is the shallow naivety of the transhumanism/singularity proponents themselves.
Technology will not magically fix the ailments of the world, and the nature of intelligence and consciousness will take much longer to understand fully; at best, we will be capable of simulating such characteristics in artificial media. Electric networks certainly catalyzed some great changes in the system of the world, but in the end they merely catalyzed potential that was already there. The human network and its corresponding complex system of human nodes and social-economic-cultural links were put in place long ago, to the extent that we classify such traits as a fundamental part of humanity as organisms. This also means that a simple increase in technological capacity will not be enough to surpass the nature of the human network itself; it will only speed up a process already underway.
Mind you, I am very enthusiastic about the future potential of humanity. And I do certainly believe that some sort of chapter-opening change in human civilization will take place sometime soon, not necessarily while I’m alive (I’m 21, by the way) but definitely soon when viewed on the scale of world history. I am simply becoming increasingly skeptical of the kind of change expected by the transhumanist community at large (if there can be such a thing). Massive information processing and storage capacity does not translate into intellectual capacity without human input. There simply isn’t enough scientific evidence to support such a claim. The very idea that some sort of external intelligence engine would be able to fix the world’s problems is a vague notion that makes me question how well some of the more radical supporters of transhumanism actually understand intelligence, brain physiology, and complex system dynamics. A certain degree of performance boost in brain capacity will definitely change the face of human civilization. Artificial intelligence in its ideal form will transform everyone’s lives. There is no doubt about that. I am just very irked by the underlying notion that such advances would be the singular answer to the singular problem of the world. Does anyone remember the concept of legacy anymore? I suggest you find and read Jaron Lanier’s essay on irreducible complexity (I read it in a book) if you don’t know what I am talking about.
I believe in a singularity-esque future, and all the good things it will bring. I also believe in reasonable ideas and a sound scientific basis for reality, something some people seem to forget in their rush to live forever.