Saturday, May 30, 2009

Ida, the “missing” link


“Ida”, a 47-million-year-old primate skeleton, has been unveiled amid much fanfare and a flurry of well-orchestrated media announcements. The unveiling coincided with the release of a book and a television documentary, both prepared under a cloak of secrecy.

Without doubt, the fossil is a paleontologist’s dream come true. It belongs to a species named Darwinius masillae (after Darwin and Messel, Germany, the place of its discovery) and it has been exquisitely preserved. Parts of its last meal were found inside its stomach. We thus know that she (for the specimen is probably a female) was an herbivore who feasted on fruits, seeds and leaves. She was overcome by carbon dioxide gas whilst drinking from the Messel lake: the still waters of the lake were often covered by a low-lying blanket of the gas, a result of the volcanic forces that formed the lake and were still active. X-rays revealed a broken wrist which may have contributed to her demise. Hampered by the injury, Ida slipped into unconsciousness, was washed into the lake, and sank to the bottom, where she was preserved for posterity. When she died she was approximately nine months old and she measured almost three feet.

Ida lived at a critical period in Earth’s history called the Eocene. Earth was just beginning to take the form that we recognize today: the Himalayas were forming and modern flora and fauna were evolving. Following the extinction of the dinosaurs, the ancestors of modern mammals, including primates, lived amid vast jungle. Until now, scientists’ most-valued primate fossils from that era have consisted mostly of teeth. It is therefore easy to imagine the scientific excitement about Ida. But is she the common ancestor of humans and apes? Is she really the “missing” link?

Behind the media hubbub lies a bitter scientific trench war over human evolution. Follow the tree of human evolution backwards and, around 6 million years into the past, you meet the common ancestor of humans and chimpanzees. Go back several more million years and you arrive at a big enigma: when did the earliest “anthropoid” primates (who ended up as apes, monkeys and humans) split from their even earlier, lemur-like ancestors? The question is paramount to scientists because lemurs and anthropoids differ in many significant ways. For example, lemurs have claws whereas anthropoids have fingernails.

There are three factions battling it out with their respective theories. First, the discoverers of Ida, who think our ultimate ancestor is their find, Ida the Darwinius, which belongs to a lemur-like group called the “adapids”. Then there are those who maintain that the anthropoid line arose from the “omomyids”, an extinct group that looked like tarsiers; and, lastly, there are those who contend that our great-great-great grandfathers were sweet-looking, wide-eyed primal tarsiers (whose descendants are still around today). The science team behind Ida has received considerable criticism for trying to “steal the show” by claiming that their specimen resolved the matter forever.

The way scientists build a case for one species being the ancestor of another is by looking at certain anatomical characteristics that are common to the two species to the exclusion of others. Darwinius is linked to anthropoids on the basis of the absence of two common lemur characteristics: a tooth comb (a set of forward-facing incisors) and a grooming claw (a special claw on the foot). Since anthropoids lack these traits too, the scientists surmise that Ida is closely connected to them. And yet, the analysis published in their paper leaves many questions unanswered. Press releases subtly claim that 95% of the evidence points to the scientists being right. But this claim is ludicrously unscientific. The same percentage of evidence - and more - was once used to confirm the Ptolemaic theory that the Sun revolved around Earth; and yet the theory was completely wrong. Some critics contend that Darwinius is not an adapid at all, but a convergent relative of the tarsiers. Anyway, whatever she may turn out to be after a more scholarly analysis, could Ida be the “missing link”?

Although the scientists who studied Darwinius deny making such statements, the promotion machine of the History Channel, which produced the documentary, and Little, Brown, the publisher that released the accompanying book, are making the most out of this angle, calling their respective products “The Link” and the relevant website “Revealing the Link”. This has exasperated evolutionary biologists the world over. Why? Because it chimes with the agenda of creationists who doubt the colossal corpus of evidence supporting evolution and demand to see “missing fossilized links”. And because thinking of evolution as an unbreakable chain made of links is woefully wrong. Species evolve from previous species through great numbers of seamless generations of gradually accumulated characteristics. There are no distinct “breaks” and therefore no “links”.

Presumably, the marketing gurus who sold the story to the media figured that this was an excellent way to bypass the arcane scientific debates of human evolution, extract Ida from the obscurity of science journals and science meetings, and communicate her story to the public. At first glance, this may appear imaginative, commendable even. Moreover, one might argue that using a bad cliché to talk good science is sometimes de rigueur. As Jørn H. Hurum, the scientist at the University of Oslo who acquired the fossil and assembled the team that studied it, put it: “Any pop band is doing the same thing. Any athlete is doing the same thing. We have to start thinking the same way in science.”

And yet, the sloppiness with which the scientists examined this very significant fossil, seemingly only to support their theory, their hurry to meet a publication deadline timed to the History Channel’s premiere of the show, and the ridiculous framing of Ida as the “missing link” cast all good intentions into doubt. Perhaps, cynical as it may seem, the scientists who analyzed Ida in such haste were competing for scarce funding. Funding is undoubtedly a serious issue in today’s economic crisis. One should wonder, however, whether the backlash the Ida science team is currently receiving will do them any good in the long run. The bad precedent of Hwang Woo-suk, the Korean geneticist “superstar” who claimed to have cloned human embryos and derived stem cells from them, only to be exposed as a fraud in 2006, should have taught them a lesson. Hwang was ridiculed, discredited, and following the debacle his research got no further funding.

The only ones sure to gain something out of all this are the publishers and television networks milking cash out of the so-called “link” by treating a primate fossil as a spectacle. However, what makes science different from sport, or pop music, or Paris Hilton, is that science matters much more to society in the long term; and that, unlike rock stars, scientists earn recognition through genuinely valuable discoveries, not through whatever claim happens to grab attention. The scientists who unveiled Darwinius should have known that sacrificing good science for the sake of media sensationalism does little service to science and society alike.

Published in the Athens News on May 30th 2009

Friday, May 15, 2009

Eternity

A conversation with Nikos Prantzos

When trying to convey the feeling of eternity Nikos Prantzos likes to quote an ancient Nordic myth. “In a distant country stands an enormous rock in the shape of a cube, each of its sides measuring one hundred kilometers. Once every ten thousand years a small bird flies over the rock and for a few moments rubs its beak on it. When the rock has disappeared, completely worn away by the rubbing of the beak, one day of eternity will have passed.” Prantzos has calculated how long this will take: 10^30 years (1 with 30 zeros following). But as the future of our universe goes, this mind-bending timescale is but a mere moment. Protons, the subatomic particles at the nuclei of every chemical element, will disappear in 10^33 years. Black holes, having gulped whatever matter remains, will evaporate too, in 10^66 years. For an astrophysicist such as Prantzos the cosmological future is eternity raised to the power of eternity many times over.
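For the curious, the myth’s arithmetic can be checked on the back of an envelope. The sketch below is mine, not Prantzos’s: the wear rate per visit is not given in the myth, so a figure of roughly a hundredth of a cubic millimetre per rub is assumed purely to show how the orders of magnitude stack up.

```python
# Back-of-the-envelope check of the "one day of eternity" figure.
# The wear per visit is NOT part of the myth; 0.01 mm^3 per rub is an
# assumption chosen only to illustrate the orders of magnitude.

rock_side_mm = 100 * 1_000 * 1_000      # 100 km expressed in millimetres (1e8 mm)
rock_volume_mm3 = rock_side_mm ** 3     # about 1e24 mm^3

wear_per_visit_mm3 = 0.01               # hypothetical wear per rub
years_between_visits = 10_000

visits_needed = rock_volume_mm3 / wear_per_visit_mm3
total_years = visits_needed * years_between_visits

print(f"about {total_years:.0e} years") # ~1e30 years, the figure quoted above
```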

Born in Volos in 1956, Prantzos is one of the most prominent European astrophysicists. Currently a Director of Research at the Astrophysical Institute in Paris and a professor at the University Paris VI, he sits on the editorial boards of several significant scientific journals and consults for the European Space Agency. His main scientific interests focus on the evolution of the universe, and he has published pioneering work on the natural processes that take place inside stars and galaxies. He is also passionate about communicating his science to the wider public. His popular science book “Voyages in the Future” was awarded the Jean Rostand prize in France and has been translated into a number of languages, including English and Greek. In it he tackles a major philosophical dilemma that has troubled western thinkers ever since the nineteenth century, when scientists realized that the corollary of thermodynamics was the ultimate “heat death” of the universe. “Twelve years ago”, adds Prantzos, “astronomers discovered that the expansion of the Universe has become more rapid in the past few billion years. If this accelerated expansion of the Universe continues into the far future, all stars will run out of energy and die. Matter will ultimately decay to elementary particles and radiation, which will be diluted in a vast and cold space.”

So since the universe is destined to end with a pathetic whimper, what could be the meaning of life? If the ultimate future is dark nothingness, why bother with anything? “The vastness of the timescales involved,” says Prantzos, “is such that it leaves plenty of time to us - or to any future civilization - to consider and construct not one but literally millions of interesting futures. Along the way, meaning and purpose will be redefined time and again”.

Of course, in an ever-expanding Universe, with temperatures dropping everywhere to nearly absolute zero and matter decaying into elementary particles and diluted radiation, indefinite survival appears impossible. And this is true not only for biological creatures such as us, made of flesh and blood, but also for robots made of nuts and bolts. Nevertheless, Prantzos is agnostically optimistic. He contends that one should not forget that our present understanding of the Universe, based on the physics of the 20th century, is incomplete. “New theories will certainly emerge in the decades and centuries to come, perhaps offering us different and more optimistic perspectives for the far future of intelligence in the Cosmos”.

And if we earthlings don’t manage to figure things out someone else might. Which brings in the centuries-old, and ever-fascinating, question of life and intelligence elsewhere in the Universe.

Science has firmly established that the laws of physics - and therefore chemistry - are the same everywhere in the Universe. However, even if biology is essentially chemistry, biological evolution depends on many unpredictable conditions: it was the disappearance of the dinosaurs, following an asteroid hitting Earth 65 million years ago, that paved the way for mammals and thereby allowed the ascent of humans. Prantzos points out that evolution from bacteria towards high intelligence, and further on to a technological civilization, looks like an extremely improbable event. He is not the first to doubt the existence of ETs. In 1950, the famous Italian physicist Enrico Fermi highlighted the fact that no convincing trace of an extraterrestrial visit to Earth has ever been found, despite the fact that there are billions of stars much older than the Sun in our Galaxy. Prantzos notes that “Fermi interpreted that absence of evidence as evidence that civilizations undergo a nuclear holocaust just before mastering space travel.” He contends that if there were indeed hundreds of civilizations in our Galaxy, it is improbable that all of them blew up or failed to reach us for some reason. “We should therefore get familiar with the idea that we are probably alone in the Galaxy”, he adds. And yet he remains a supporter of SETI (the Search for Extraterrestrial Intelligence), which looks for extraterrestrial signals from space. “Despite its small chances of success, the rewards would be enormous if a single such signal is ever detected”, he says.

Aliens may not be on Prantzos’s mind when he looks at the stars, but space travel is. As a young boy marveling at the night sky over the Pagasitikos Gulf and Mount Pelion, he dreamt of becoming an astronaut. Growing up in the 60s, with the Moon as humanity’s next frontier, was inspiring to a whole generation of kids like him, and Prantzos regrets that young people today do not have that same opportunity. Although man once conquered the Moon after travelling 380,000 kilometres from Earth, no astronaut has gone further than 500 kilometres from our planet since 1972. “Progress has been terribly slow both because astronaut safety is a more important concern and because funding is considerably less,” he says. “The Cold War and the competition between the United States and the Soviet Union hastened progress in space matters enormously”.

What appeared an easy feat in the 60s looks much harder today. The harshness of the space environment makes all plans for long-term human exploration of space problematic. The construction of adequate protective shelters in space will require considerable effort and resources. Furthermore, access to space is very costly because of the gravitational attraction of our planet. To this day, rockets, using chemical propulsion and design principles invented by the Chinese centuries ago, remain humanity’s only means of escaping Earth. Prantzos hopes that a new generation of propulsion technologies, such as nuclear, ion, antimatter, or solar sails pushed by the Sun’s radiation, will provide alternative and faster ways to visit our nearest worlds, such as the Moon and Mars.

Prantzos is particularly fond of manned missions into space and believes that investing in them is a sound decision for the future of humanity. “The situation of our planet obviously requires all of our attention today”, he says. “We should try to heal the wounds of Mother Earth before embarking on ambitious programs of space travel and colonization of space. However, this does not preclude a reasonable step-by-step, long-term program of space exploration. The Moon, Mars and the asteroids are the obvious targets of such a program, at least for the 21st century. All major space agencies have interesting projects for those targets. I don’t think that funding issues are prohibitive. When compared to the cost of the Iraq war, or to that of the recent financial crisis, the annual cost of a long-term project of human space exploration is substantially smaller”.

Published in the Athens News on May 15th

Friday, May 1, 2009

A traveler’s companion to Mars


In 1894 a wealthy Bostonian by the name of Percival Lowell built an astronomical observatory in Arizona dedicated to learning more about enigmatic Mars. It was the time of canal-building on Earth, with the French having completed Suez a few decades earlier and the Americans getting busy cutting across the Isthmus of Panama. Lowell saw canals on Mars too. Inspired by Darwin, he imagined life on Mars spawning and evolving over time into intelligent creatures who built planet-wide aqueducts to bring water from the poles to the equatorial deserts. He saw lush areas of cultivation, and he imagined cities and people not very much unlike us. Lowell’s ideas, published in his widely read book Mars as the Abode of Life, brought humanity’s timeless fascination with the red planet to a profound conclusion: that we are not alone in the universe. Indeed, the intelligent beings who presumably constructed those irrigation marvels were but a relatively small leap across the dark sea of space. H.G. Wells penned The War of the Worlds borrowing heavily from Lowell’s ideas, if only to illustrate what bad neighbors can do to each other. At the dawn of the 20th century Earth was abuzz with Mars excitement.

Lowell had put together a wonderful theory by drawing on the latest science. Alas, the theory was completely wrong. There are no Martians. Several fly-bys by spacecraft, beginning with Mariner 4 in 1965, and not a few landings by robotic rovers, have ascertained that. There are no canals either. But there are plenty of fascinating and intriguing features to explore. There are dried riverbeds and meandering streams, wide landforms that resemble lakes, gullies that slope down mountainsides as if sculpted by torrential rains, volcanoes and underground ice. There are telltale signs aplenty that once upon a time Mars was a completely different world. In fact, many scientists believe that Mars used to be covered with oceans and had an atmosphere and a benign climate similar to Earth’s. Then, about 3.5 billion years ago, a catastrophic event turned the planet into a cold and barren desert. That is why the quest for Martian life - albeit of a more humble, bacterial nature - has not ceased. It is possible that unicellular organisms still exist, buried deep under the surface. Their discovery would be of tremendous significance, for they would provide conclusive evidence that life is not a local, earthly phenomenon but something ubiquitous that might permeate the cosmos.
The list of enigmas that scientists draw up each time they push the Mars exploration agenda is long. Our planetary neighbor is similar to Earth in many more interesting ways. Although half Earth’s size, it has roughly the same land area. Martian days (called “sols”) are only about 37 minutes longer than our 24 hours. A tilted axis of rotation creates seasons; summers, falls, winters and springs follow each other in a year that is nearly twice as long as ours, as Mars takes about 669 “sols” to journey once around the Sun. The Sun rises in the east and sets in the west. Both planetary siblings have polar icecaps. Why then has the Red Planet met such a disastrous fate? What catastrophe happened all those billions of years ago? What can we learn about its climate that can help us understand the climate here on Earth?

Mars may be a must-go destination for other reasons too. On March 31st six volunteers locked themselves inside a hermetically sealed living space in a laboratory in Moscow. For the ensuing 105 days, they will eat dehydrated food and breathe recycled air. Their communication with the outside world will have a twenty-minute delay. Simulated emergencies (e.g. equipment failure), as well as real ones, will keep them on their toes, as will a number of scientific experiments that they will have to perform. Their every move and vital sign will be monitored around the clock. It sounds like the ultimate version of “Big Brother” but is in fact a simulation experiment for a human mission to Mars. If a real mission ever sets off, the astronauts will have to deal with the physiological and psychological strain of being confined in a tin can hurled into space, bombarded by cosmic radiation, and sailing towards another world at a pace very unlike the zapping speeds sci-fi films have accustomed us to. The trip to Mars may take up to eight long months.

Once those future space pioneers get there, however, they will have little to celebrate. Mars is not a friendly place. Temperatures may vary from -87°C at night to a “balmy” -25°C in the afternoon. The atmosphere is mostly carbon dioxide and the air pressure is less than 1% of Earth’s. A space suit must be worn when walking about. But walking will be easy because the gravity is only about a third of our world’s. The sky would look bright pinkish because of the fine reddish dust blown aloft by Martian winds. Dust devils roam the surface like wandering phantoms and kick up so much dust that visibility often gets close to zero. At sunset, with the winds subsiding, there may be some scattered clouds in the sky, and as our astronauts prepare for their night’s rest they may momentarily look up to the bright, twinkling stars and easily recognize the same constellations, with one difference. Somewhere over the horizon a tiny bright blue ball of light will be gleaming: Earth.

There are many who doubt the usefulness, let alone wisdom, of such a risky and costly undertaking as a manned mission to Mars. To sustain a permanent base on Mars would be even more perilous. But as the Moscow experiment shows, danger and discomfort seem to have a strange appeal to human nature, one not to be underestimated. There will always be people ready to take the challenge, no matter how impossible it may seem. The benefits are almost impossible to estimate. We will never know what there is to gain from landing humans on Mars unless we do it. For now we have at least two good reasons to attempt it. One has to do with pushing the technological envelope of space flight further. For the past sixty years very little progress has been made in space propulsion. A human mission to Mars will require a new generation of rockets and fuels; an eight-month trip is just too long and has to be cut back drastically. New technologies will have to be developed to ensure the safety of the mission. The second reason to send people instead of robots is space colonization, an agenda pushed forward by the Mars Society, a non-profit organization which has already designed the “planetary flag” of Mars while undertaking some serious scientific work too. Establishing permanent human settlements on Mars will be a step-wise project which may take several centuries, unless an as-yet unimagined innovation comes along and revolutionizes space travel. And if living in a pressurized building and walking around in a spacesuit is not your idea of a good life, there is always terraforming. Scientists believe that, given enough knowledge and time, we could seed the Martian atmosphere and soil with oxygen-producing organisms which would make Mars a more hospitable planet. For romantics and visionaries, it would be the “unwinding” of the ancient catastrophe and the making of a space Utopia, with canals and all. Perhaps, then, as Lowell peered into his telescope under the Arizona sky, he saw a revelation of the future. Perhaps, in that visionary future, the Martians will be our descendants.

Published in the Athens News on 24th April 2009

Monday, April 27, 2009

The AI Singularity

The AI Singularity has been defined as a future point in history when machine intelligence will surpass human intelligence. A technological transition event of this magnitude has been compared to a cosmological "black hole" with an "event horizon" beyond which it is impossible to know anything.

The idea of the AI Singularity fails philosophically because it rests on the following clause: “I am not smart enough to know what smarter is”. This clause is implicit in “recognizing” a hyper-intelligence. The acid test of passing through the AI “event horizon” is not that you or I are flabbergasted by the amazing smartness of machines, or of human-machine fusion. The acid test is that we are unable to comprehend anything at all! Smartness beyond the AI Singularity event horizon is defined as so vast that no ordinary human mind can even realize that it is there. It is like asking a chimp to realize how much smarter a human is. The chimp cannot possibly know that. The question (for the chimp) is meaningless. And equally meaningless would be the question for the human of the future crossing the event horizon of the AI Singularity. For all we know, there might exist today - or have existed for centuries, or millennia - intelligences higher than ours. Perhaps we live in a “Matrix” world created by intelligences smarter than us. Perhaps we crossed the AI Singularity event horizon many centuries ago, or last year. But we can never possibly know that.

The AI Singularity thus reduces to the “brain-in-a-vat” argument. However, a brain-in-a-vat cannot possibly know that it is a brain-in-a-vat, because to state that you are a brain-in-a-vat is a self-refuting statement. Therefore, to claim such a thing is nonsense.

The AI Singularity also fails scientifically. In order to have any sensible discussion on intelligence - human, machine or otherwise - we need a theory of intelligence; which we do not currently have. Major questions are looming, the most significant of which is the relationship between a mind and a brain. Until such scientific problems have been clearly defined, researched and reduced to some testable causal explanatory model, we cannot even begin to imagine “machine intelligence”. We can of course (and we do) design and develop machines that perform complex tasks in uncertain environments. But it would be a leap of extreme faith to even compare these machines with “minds”.

The AI Singularity fails sociologically too. It is a version of transhumanism, which is based on a feeble and debatable model of human progress. It ignores an enormous corpus of ideas and data relating to human psychological and cultural development. It assumes a value system of ever better, faster, stronger, longer, i.e. a series of superlatives which reflect a social system of intense competition. However, not all social systems are systems of competition. Indeed, the most successful ones are systems of collaboration. In the social context of collaborative reciprocity, superlatives act contrary to the common good. To imagine a society willing to invest resources into building up the intelligence of its individual members would be to imagine a society bent on self-annihilation. To put it in simple terms: if I am the smartest person in the world, if I can solve any problem that comes my way, if I can be happy by myself - why should I need you?

The Precautionary Principle

Stavros Dimas, the European Union’s Environment Commissioner, ignoring the opinion of the European Food Safety Authority (EFSA), recently [nb. 2008] indicated that the Commission will ban two genetically engineered varieties of corn because of the potential harm they may cause to certain beneficial insects. At present, only one transgenic crop can be cultivated in Europe: Monsanto’s MON810 insect-resistant maize, which now comprises nearly 2% of the maize grown in Europe.

In the ongoing debate about Genetically Modified Organisms (GMOs) on Europeans’ plates the mantra is the so-called “precautionary principle”: the idea that regulation should prevent or limit actions that raise even conjectural risks, particularly when the scientific evidence is inconclusive. Add to this the widespread feeling in society that science is moving too far ahead and becoming more and more incomprehensible - and should therefore be made to slow down, or even stop - and you get a neo-luddite backlash against anything biotechnology has to offer.

But is this the way to go? If the world rejects GMOs, what other options do we have in order to feed ourselves? Traditional farming, through its intensive use of pesticides and fertilizers, is known to be linked to serious health hazards such as cancer, and it uses great amounts of fossil fuels which contribute to climate change. Organic farming is often quoted as a valid alternative, but the figures simply do not add up. We are six billion people on this planet, two billion of whom live on the edge of starvation. Put in this wider context, organic farming, with all its splendid benefits, begins to look more like a luxury to be afforded by rich westerners only.

GMOs have gotten a bad name. They are considered an environmental risk. Releasing mutated organisms in nature, say their opponents, could spread havoc to natural evolution and cause untold damage to ecosystems. What one usually does not hear is that most traditional plant-breeding techniques are simply imprecise forms of genetic engineering. Cross-fertilization and cross-breeding are the most obvious ones but there is also mutagen breeding, whereby plants are bombarded by X-rays, gamma rays, fast neutrons and a variety of toxic elements in an attempt to induce favorable chromosomal changes and genetic mutations. The difference is that genetic engineering is a more targeted and precise method, which has the potential to avoid large scale environmental contamination.

The second big fear is the impact on health. I have often been told that “mutant organisms cause cancer, everyone knows that!” The truth may in fact be the very opposite. Traditional farming causes cancer, and numerous epidemiological studies confirm this. GMO farming may even prevent cancer, as in the case of the transgenic corn that Dimas wants banned. The particular corn releases a protein that is toxic to insects but harmless to mammals (such as humans). By preventing the corn from being invaded by insects, it protects the crop from a very dangerous fungal toxin called fumonisin, which is a known carcinogen and a cause of neural tube defects in newborns. The transgenic corn has been shown to contain levels of fungal toxins many times lower than those of the non-GMO corn varieties grown by traditional and organic farmers.

Nevertheless, Europe says no. Maybe because we Europeans suspect that big American-based multinational crop companies, such as Monsanto and Syngenta, want to monopolize the agro-food business, to the detriment of our environment and health, no matter what scientists say. After all, scientists could be on their payroll too. I can agree that one should not trust corporations with the public good, and that government regulations as well as alert and well-informed citizens are society’s best defenses. However, the current market of big monopolies selling traditional - as well as transgenic - crops to farmers may be about to change, and change rapidly too.

Synthetic biology is nowadays taught at undergraduate level, and any biology student can create her own artificial organism in a Petri dish. Techno-optimists, such as physics professor Freeman Dyson, are heralding a new era where “domesticated biotechnology, once it gets to the hands of housewives and children, will give us an explosion of diversity of new living creatures, rather than the monoculture crops that big corporations prefer.” Just imagine that future: you can grow whatever you like in your back yard, creating your own varieties of plants and animals, free from the monopolizing corporations. On the other end of the spectrum, technophobes see this as a nightmare scenario where bioterrorism runs rampant and Earth’s ecology is disrupted beyond control. GMOs are the tip of a great big iceberg floating toward our shores, whether we welcome it, ban it, or not.

The risk of being wrong

However, let me focus on the infamous precautionary principle. The first, and perhaps most obvious, problem with the precautionary principle is that it takes no account of the cost of inaction. For example, let us say that I have a serious heart problem and my doctor thinks I should be given an artificial heart. Unfortunately, no one can absolutely guarantee that the artificial heart will not fail, thus resulting in my death. By applying the precautionary principle I should refuse the treatment. However, if I do not get the treatment my death is certain. The principle, in this case, should be overridden. And yet, when one moves away from managing the risk of a certain technology on one’s own life or well-being, and arrives at decisions made by governments or by the European Union, the Precautionary Principle of risk management has increasing appeal to politicians and policy-makers. It is a completely different thing for me to decide what to do with my health than to decide what should be done with everyone else’s health.

In the latter case the appeal of the Precautionary Principle is irresistible to politicians and policy-makers, as well as to various advocates of the public good. They are certain to win favor with society, because citizens tend to support precautionary policies: human instinct prevails.

Reality vs. experiments:

                              Reality, idealistically
Experimental results          - (not harmful)          + (harmful)
- (not harmful)               True                     False negative
+ (harmful)                   False positive           True

Evolution has programmed us to avoid false negatives at all costs (an error made when we fail to connect A to B, when A is truly connected to B – for example, I did not flee upon hearing a noise, when in fact the noise was a lion coming to get me!). In the case of GMOs a false negative is when the experiment shows that something is harmless when in fact it is harmful (see table above). By the same token evolution has made us more relaxed with respect to false positives (an error made when A is falsely connected to B, for example I hear something which may be a lion about to attack me, and I flee – but there was no lion). In the case of GMOs a false positive is when the experiment shows that something is harmful when in fact it is harmless (see table above). Our survival has depended over eons upon evaluating false negatives as being more risky than false positives. In other words, since we are not absolutely certain that GMOs are safe, let us ban them.
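To make the bookkeeping of the table concrete, here is a minimal sketch in code; the function and its labels are mine, purely for illustration, since in practice “reality” is exactly what an experiment cannot access directly.

```python
# Minimal sketch of the table above: compare an experimental verdict with an
# idealized "reality" and name the outcome. Illustrative only; in practice we
# never observe "really_harmful" directly.

def classify(experiment_says_harmful: bool, really_harmful: bool) -> str:
    if experiment_says_harmful == really_harmful:
        return "true result"
    if really_harmful and not experiment_says_harmful:
        return "false negative"   # the error evolution taught us to dread
    return "false positive"       # the "better safe than sorry" error

print(classify(experiment_says_harmful=False, really_harmful=True))   # false negative
print(classify(experiment_says_harmful=True,  really_harmful=False))  # false positive
```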

The Gap

In the case of technology, however, the possibility of false negatives will always be with us. Owing to the intrinsic nature of statistical error, as well as the philosophical impossibility of knowing for certain the link between cause and effect, there can be no absolute knowledge of experimental consequences. Science is a systematic, logical and experimental method of probing into an “ideal” realm that we hypothesize exists and call “reality”. We will never know if “reality” really exists. Therefore, the sum total of all our experiments does not exclude the possibility of a false negative.

In the case of science a false negative is a blessing, as it may falsify a given theory. This is another reason for the usual lack of understanding between science and society. Scientists love false negatives, but society does not. Scientists are usually more ready to take risks with new technologies than citizens.

If we do not wish to return to pre-industrial times, there can be no way forward other than through technology. One may argue that this is a recipe for humankind’s eventual doom. I would, however, like to recall Malthus and his predictions of an unsustainable world that was not supposed to be able to feed its billions of people. Only technology can beat demographics, as the “green revolution” of intensive farming proved. In a projected world of 9 billion people, everyone should be given an equal chance of economic and social development. We cannot hope to achieve this by banning or over-regulating technological progress. The only thing we can do is develop better channels of communication between science and society, to bridge the gap mentioned above, and prepare society for the changes ahead.

When I see a tree what do I, really, see?

The question of what comprises an external reality is ancient. The frustrated physicist cries “shut up and measure”; and he is right, for all that we can know is only that which we can measure. The rest we cannot know. The rest is the unknowable. We can only describe the knowable, and so we do through science, and sometimes through art. Optimists amongst us contend that we could also describe the relationship between the knowable and the unknowable. I think that we can do so only as a conjecture. Let’s call it “the objective world conjecture”. Our favorite tool here is logical abstraction, that is, thinking about non-objects. In the abstract, therefore, we can assume that there exists U, the unknown. And K - the known - is, somehow (via the mysterious chance mechanisms of evolution perhaps), configured within U. This is a conjecture that, alas, we can never prove. The objective world will be forever unknowable. The reason for this should be obvious from the definition we give to U: K is always a subset of U; K+K1, where K1 is a new discovery, is also a subset of U; and so on, for any Ki, ad infinitum.
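For readers who prefer symbols, the conjecture as I read it can be restated compactly in set notation; the notation below is mine and not part of the original argument.

```latex
% The "objective world conjecture" in set notation: every accumulation of
% knowledge K remains a proper subset of the unknown U, however far it grows.
\[
K \subsetneq U, \qquad
K \cup K_1 \subsetneq U, \qquad \dots, \qquad
\bigcup_{i \ge 0} K_i \subsetneq U \quad (K_0 = K).
\]
```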
From this U-neverland arise the ghosts of our measurements, the contents of our consciousness, and the K-objects of our senses, emotions and feelings. When I look at a tree I see the only thing that can be seen. I call it a “tree” and, if I utter the word in any language, or draw a “tree”, the overwhelming majority of my species will intuitively imagine a “tree”, different in its details but similar in its essence. Our “essential tree” is an object of K made up of things we decide to call “cells” and “atoms” and so on.
Interestingly, as we expand through scientific observation the limits of K into the (conjecturally) vast expanse of U, we arrive at logical paradoxes. The objective world conjecture is paradoxical in itself, since we assume an ever-increasing, infinite progression of knowledge (ΣΚi) which forever remains a subset of U.
The more abstract our reasoning the more paradoxical it becomes. And we thus arrive at the ultimate walls of K. When classical “objects” fade into quantum ghosts, our “K-trees” become U-trees, things of the unknowable; and we are fenced inside the realm of subjectivity, the only reality knowable.

Social Echo and Rational Economics

In the construction of economic theories, from Plato and Aristotle to Adam Smith, Karl Marx, John Maynard Keynes and beyond, assumptions about human nature have taken their toll, to the detriment of the effective predictability of markets and systems. The need to predict, plan and reason about systemic behaviour, too strong to admit postponement, overran the obvious lack of knowledge about the human animal. Economics thus framed new ideologies, such as Marxism or capitalism, which in turn guided scientific exploration. An example of ideological framing causing a sociological paradox is eugenics. Since Galton and Darwin, eugenics has been the logical projection of evolutionary theory applied to humans. But although the idea was accepted at first by liberals and leftists, it became an abomination following its corruption by the Nazis. Since the end of WWII, whenever it resurfaces it is slammed down by an almost hysterical reaction from academia and the press (James Watson being a recent victim of this). And yet the idea persists, albeit dressed in other guises: in socialist-inspired models of egalitarianism, in laws that regulate abortions to the detriment of the middle classes, in genetics research (cloning in particular), and in educational systems. Eugenics makes one huge assumption: that by selecting for higher intelligence humanity will become, in a few generations, less violent, more altruistic and wiser. And yet the correlation between higher intelligence and the desired attributes of peace, social cohesion and wisdom is a weak one. It smacks of wishful thinking and cultural bias. Human beings are highly intelligent animals, not angels, not creatures that stand apart from the rest. We are apes, and when push comes to shove we act like apes too, regardless of our intelligence or good manners. War is the most obvious testament to our innate cruelty. If anything, our ingenuity seems to have been invested more in engineering war machines than in anything else.
Recent discoveries in neuroscience promise to shed much-needed light on what constitutes a human being. Deciphering the unconscious circuitry would be a task of monumental proportions, dwarfing the Human Genome Project by several orders of magnitude. Alas, the result, if successful, will be of limited use. As in the case of genes, interactions in the whole seem to play a role more important than the interacting parts alone. The infusion of neuroscience into sociology and economics is a welcome development. Nevertheless, owing to the infancy of neuroscience, it should be expected that social framing will once again take the upper hand, leading to new “realistic” economic theories that reason about the animal inside us. Once more, experts will reason about social systems forgetting that human activity is diffused and dominated by unconscious, autonomic, neuropsychological systems that enable people to function effectively without always calling upon the brain's scarcest resource: attentional and reasoning circuitry. To reason about non-reason is a paradox that we cannot escape. We can only understand an echo of what human society is truly all about. The dilemma will always be how much we want to believe in it.

Whence narrative?

Recently, I happened to be coordinating a public discussion on literature at the National Research Foundation. As the organizer of the discussion, my objective was to explore, with the aid of an English Literature professor and a writer/critic, fictional narratives as inroads to humanness. Indeed, what could have been more profound than that! Still, my “ulterior” motive was to compare literary narratives to scientific ones.
Psychologists routinely use personal narrative as a way to explore their patients’ personalities, but I was more interested to see if there was some connection to non-personal narratives, such as the Big Bang theory or evolutionary theory. Then the question came from a member of the audience: why does the brain produce narratives? In a way, the question relates, at a higher level, to the structure of memory. And yet memory has a biological foundation. What could that foundation be?
I am no expert, but the only way I could answer the question (“whence narrative?”) was to point to our “sense of time”. Why do we have a perception of time? Obviously there have been evolutionary reasons for it. The interchange of light and darkness and the resulting circadian rhythm endow us with the sense of “before”, as distinct from “now” and “after”. And yet, of all the things that modern physics tells us about the Universe, of all the quantum paradoxes of electrons being in many places at the same time, the most unimaginable of all is that time does not flow but is still. That time is an “illusion”. This seems not only counter-intuitive (so much of modern physics is) but unimaginable. A universe where time is like length or width, a dimension along which points stretch and exist regardless of us being there - a universe where time stands still, in other words - is a universe without narrative, and therefore a universe without before, now and after. Our minds have evolved to produce narratives, and therefore the only way we can comprehend anything succinctly and effectively is by incorporating it somehow into a narrative. The more explicit the narrative the more comprehensible the object, and vice versa. Abstraction, the loss of narrative, equates with artistic amnesia: it freezes time to a standstill, reduces space to a point and communication to a silent pause.

Doomsday Narratives

When discussing the formation of ideas for the Society of the Future, a most important element of the synthesis is prophecy. Prophecy has been an important contributor to the development of western thought. Although elements of ritualistic foretelling can be found across many cultures, it is in the West that foretelling was historically institutionalized, whether one refers to the Oracle of Delphi or the Prophets of the Bible. The underlying theme of prophecy is the juxtaposition of teleology with moral conditioning. In other words, do what you must do, for if you do otherwise you will be lost in the whirl of upcoming events. Prophecy thus frames the ethical debate of today by stressing the daunting onslaught of a frightful tomorrow. Redemption is offered only to those who fall back into line.
The idea of prophecy was transformed into the idea of predictability as the western world moved away from a mostly metaphysical explanatory model towards a materialistic one. Science and engineering succeeded in the social arena because they offered consistently predictable results. Indeed, it is at the core of scientific ideology that experimental results must be verified by independent repetition. This means that a prediction made by a theory - a mini-prophecy in disguise, if you will - must be verified in various labs to hold any water.
Prophecy in the pre-scientific Christian world was dominated by the Book of Revelation, which in turn expressed pre-existing ideas of “telos”, a word meaning both “end” in the temporal sense and “end” in the sense of purpose. As science takes over, the prophetic narrative transforms and gradually finds its way into science fiction. In turn, science fiction not only nourishes the imagination and aspirations of scientists-to-be (most scientists were sci-fi fans when they were children), but also fuels the media debate whenever ethical issues about science and technology are raised. The latter happens because contemporary media is primarily a narrative-transformation machine that recycles stories and threads of stories by adding sensationalism in order to attract attention.
I think that there are three distinctive “doomsday stories” that haunt us today. The first I label “post-apocalyptic primitivism”. It implies an ecological catastrophe. This could happen either as a result of a nuclear war, or a change in the climate, or a runaway virus, or a hit by a meteorite, etc. The result is prophesied as a collapse of civilization and the regression of the human race (provided anyone survives) to a primitive state. Post-apocalyptic primitivism is the logical extension of the Book of Revelation (or of the Nordic myth of the twilight of the gods, if you prefer another context).
The second doomsday narrative for the future I will label the “AI Singularity”. This is the assumed point in the future when machine intelligence surpasses the human one. At this point the narrative is broken suddenly. Nothing can be further predicted. An impenetrable discontinuity appears. The “event horizon” of the AI Singularity implies the end of the power of prediction, the nullification of prophecy and, to my mind, suggests the absolute negation of science.
The third doomsday narrative may be referred to as the “post-human scenario”. This implies a more controlled process for history, where technology fuses with humans and transforms the world and society. Humans become cyborgs, either as independent units incorporating a variety of mechano-electronic and biochemical paraphernalia, or as interdependent units hooked up in a grid, a kind of super-organism that fuels progress. This third narrative is the most optimistic of the three, mainly because it is inspired by utopian (or dystopian, depending upon your emotional inclination) ideas.
But I must return to these narratives later and analyze each in turn.

Climate, Apocalypse

Our planet has been warming up since the Industrial Revolution, mainly due to the accumulation in the atmosphere of carbon dioxide and methane, gases that result from the burning of the fossil fuels that spur and sustain our economic development. This is a scientific fact that no one doubts. Paleoclimatologists place the current trend in a wider context by comparing it with past periods of Earth’s history when warming occurred due to natural processes. We also know of the Milankovitch 100,000-year cycles that determine the phasing of ice ages with interglacials. According to those cycles we ought to be entering an ice age. Temperatures ought to be dropping and ice on the polar caps ought to be thickening. Ironically, our pumping of greenhouse gases into the atmosphere seems to compensate for all that.
Science can describe with relative accuracy the past and the processes by which we got where we are today. Logic, and some rather crude models that run on computers, predict that if we continue with business-as-usual many nasty things will happen to our environment. Indeed many of those things are happening already. Earth is sick; there can be no question about it. And, very probably - why, almost certainly - it has been made sick by human industrial activity.
Two questions follow:
Question 1: Can we do something about it?
Question 2: Provided we can, what should we do?

The Kyoto Protocol, as well as the recent international discussion at Bali for the successor agreement, emphatically answer “Yes” to the first question and “Curb carbon emissions” to the second. There seems to be worldwide consensus on those answers and, with the exception of the US government and a handful of die-hard skeptics, the rest of the world seems willing to bite the bullet and proceed with a more responsible, equitable and collective stewardship of the planet, at a nominal cost. Sounds all right, doesn’t it?
Well, it does, and perhaps it is. However, I will examine the nature of the two questions posed in order to argue that the climate change debate is not really about climate science, or economics. Inexactness is in the method and nature of both, and one could argue ad nauseam about the merits of different approaches, analyses and the like. Evidently, the issue is not an academic one, although scientists are being involved in an unprecedented way and are being awarded not the Nobel Prize for Physics or Economics but the Nobel Prize for Peace, a distinction usually reserved for politicians or activists.
Additionally, I would argue that the debate is not even a political one. Of course, many heads of state envy Al Gore and would like a piece of his action and fame. Climate change seems to galvanize electorates across the globe, even when no one really bothers to explain the details to them. But let us leave the cunning politicos aside, for their perfidiousness is well documented. Even honest, well-wishing politicians fall back on inexact science in order to validate their decisions, and by doing so fall into the abyss of folly. Their decisions usually invoke the insurance argument: curb emissions at a premium now in order to avoid dire consequences in the future. But the insurance argument is a weak one. Its weakness lies firstly in selecting and prioritizing future risks, and secondly in defining the premium. Climate change is not the only thing threatening future generations. If the world wants to buy insurance on its future safety then funds need to be established in order to deal with rogue asteroids, supervolcanic eruptions, supernova explosions in the vicinity of our solar system, pandemics, plume explosions and - why not - an invasion of Earth by an alien civilization. The list of possible threats can go on almost forever. And how about that premium? How much does it cost? How can one be certain that forgoing A% of the world’s output is optimal and not A+B%, when the science of prediction is so inexact? If one is talking about the future of the planet shouldn’t one be generous with premiums? If the alternative is life on a scorched planet without wildlife, a barren rock in space, shouldn’t we consider eliminating greenhouse gases altogether as soon as possible? And if this sounds illogical, where is the logic in defining an optimal premium?

All in all, I will argue that the debate on climate change is principally and foremost an emotional one.

Both questions I asked above are scientifically unanswerable because they demand certainties. The fact that many scientists have become evangelical in their predictions is unassailable evidence of emotionalism blurring their better judgment. Healthy skepticism has been replaced by much-applauded scientific fundamentalism, which is being rewarded with the Nobel. There have been reports of “tears” during the Bali meeting. And the manner in which the meeting proceeded reminds one of an operetta. Why so much passion? The emotional charge that permeates all debates on climate needs to be analyzed by sociologists and psychologists. By being “alarmist” and “apocalyptic”, cunning politicians join the chorus of scientists-turned-Bible-prophets in a replay of a very old story, namely the herding of the human flock under an ideological banner, this time the banner being “eco-friendly”. The threat is nature’s revenge on sinful humankind. God has been replaced by mystical natural forces, by a cybernetic Gaia. Often, the high moral ground is hijacked by atheists who rediscover faith dressed up as computer simulations of looming Apocalypse. The end is at hand, ladies and gentlemen! Repent! Shut the factories down! Shun your riches! Be poor in body and mind! Love thy neighbor! And redemption will surely come!

There is obviously a positive side to all this that needs to be mentioned. Al Gore has been explicit about it too. I will rephrase it as the dawning of a new era in international politics where leaders adopt a common, environment-centered, agenda that, ultimately, can lead only to cooperation and peace. In a world that is about to fall apart there is no point fighting. Or isn’t there?
Well, you see, when the debate is so emotionally charged, when logic and the inexactness of scientific argument have been replaced by certainties, feelings can swing either way, unpredictably. You could have Al Gore’s fantastic vision of world cooperation and mutual support but, alas, you could also have war and mutual annihilation. And this is precisely the danger the world faces as we are herded into taking decisions about the future, deluding ourselves that we are powerful enough to engineer the climate by adopting an equitable sharing of greenhouse gas quotas.

Simulation and non-local realism

Realism is the viewpoint according to which an external reality exists independent of observation. According to Bell’s theorem, any theory based on the joint assumption of realism and locality (meaning that local events cannot be affected by actions in space-like separated regions - something that Einstein would not swallow) clashes with many quantum predictions. In such cases, “spooky action at a distance” is necessarily assumed in order to explain phenomena such as quantum entanglement. This is called non-local realism. A recent paper by Simon Groblacher et al (An experimental test of non-local realism, Nature, Vol. 446, 19th April 2007, pp. 871-5) showed that giving up the concept of locality is not sufficient to be consistent with quantum experiments, unless certain intuitive features of realism are also abandoned. Let us see how these results may correspond to assumptions made in our simulation-based New Narrative.
I would like to take an example from cosmology, indeed the very simulation of the standard cosmological model. The logic of the simulation goes like this. Assumption 1: the universe maps onto an external reality which, somehow (i.e. via known, or as-yet-unknown, natural laws), also maps onto our coupled media of detection instruments plus consciousness. This may be called “the perceived universe”. It usually takes the form of an image of the cosmos, galaxies and gas clusters spreading in all directions and in all magnificence. A mathematical model is then developed, based on the prevailing cosmological theory, that aims to explain the “perceived universe”. This model runs on a computer, which is the external-reality substrate of the simulation. In other words, there is, or so we assume, a “reality” of hardware that runs our simulation (assumption 2). The result of the simulation is also an image of the cosmos. Comparing the two images we refine the model further until the two images appear identical. When we achieve an identical pair of images we conclude that our mathematical model has been a successful one, i.e. a valid description of external reality.
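Schematically, the refinement procedure just described looks like the loop below. This is my own toy sketch: the “observed image” and the one-parameter “model” are invented stand-ins, and a real cosmological simulation is of course vastly more elaborate.

```python
# Schematic of the model-refinement loop described above. The "observed
# image" and one-parameter "model" are stand-ins invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
observed_image = rng.random((64, 64))          # assumption 1: the "perceived universe"

def simulate(parameter: float) -> np.ndarray:
    """Toy model running on hardware we take to be real (assumption 2)."""
    rng_local = np.random.default_rng(0)
    return parameter * rng_local.random((64, 64))

parameter = 0.5
for _ in range(100):
    simulated_image = simulate(parameter)
    mismatch = np.mean(simulated_image - observed_image)
    if abs(mismatch) < 1e-6:                   # the two images look "identical"
        break
    parameter -= mismatch                      # refine the model and try again

print(f"best-fit parameter: {parameter:.3f}")  # the model we then call "valid"
```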
It is obvious that our conclusion may be flawed, given our two main assumptions. Furthermore, our assumptions call upon the quantum nature of the cosmos which, as the aforementioned paper has demonstrated, seems to reject non-local realism. Thus, we are left with a revision of our assumptions about realism.
These assumptions happen to be inherent in the New Narrative, the most poignant revision of reality being the decoherence of Selfhood.
By deconstructing the Self, by rejecting the narcissism of psychoanalysis, by re-introducing a mystical layer of dualism in the nature of consciousness, we abandon counterfactual definiteness and arrive at a world that is not completely deterministic. The result may be a chorus of out-of-tune artwork, but it is also a result closer to what our best validated experiments show. If “external reality” is a wonderland of curious objects and events, then our revised “inner reality” of the New Narrative is an equally exotic place. But should we take such a phantasmagoric correspondence as a sign of progress? Should we convince ourselves that we have, at last, by entering our self-simulated world, arrived serendipitously at the Holy Grail of “reality”? Ironically perhaps, the very dynamics of decoherence prevent us from answering such questions. When determinism is thrown out of the window, all one can do is lean on the ledge and peer outside, in wonder and astonishment, at the changing view of a perplexing world beyond our wildest imagination. And that is exactly what the New Narrative tries, and forever fails, to describe.

The Idea Delusion

Recently, a postgraduate student asked me, as part of his thesis (http://vasilis-thesis.blogspot.com/), to comment on the following claims by Keynes and Galbraith.
Keynes claimed: "The ideas of economists and political philosophers (etc), both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else.” Galbraith, on the other hand, argued: "Ideas would be powerful only in a static world because they are inherently conservative."
The student’s question was: These statements were made some 45 or more years ago. In your opinion, which of them seems valid in the modern world?
My answer was: “I would definitely side with Galbraith. Of course ideas appear to be powerful and they seem to exercise enormous influence on the way we perceive and interpret the phenomena of the world. I believe that Pol Pot is a case in point, as are the hapless millions that were killed because of the political ideas that he carried inside his head. But were those millions killed because of political ideology, or because this ideology "happened" to take root in Pol Pot's head, who "happened", through an amazing series of coincidences, to live long enough and become powerful enough to enact those ideas against the poor people of Cambodia? Alas, our world is not shaped by decisions but by happenstance, by the dynamic confluence of sometimes interconnected and sometimes disparate factors that lead one way or the other, by virtue of systemic forces beyond our control - or comprehension. Only in hindsight do economists and political philosophers apply ideas to the facts and develop their "theories". The feeling of "power" ascribed to their ideas is therefore an illusion, and it has been so not only today but always and forever.”

I would like to expand somewhat on my answer. Firstly, let me point out that I am referring to economic and political philosophy, restricting my treatment of the term “Idea” - at least for now - to these fields.

I believe the ultimate acid test that exposes the delusion I am claiming is very simple. Economic and political theorists develop ideas which explain (or try to explain) the past. Even those theorists who claim that they speak about “today” or the “present” are in fact speaking only about the past; the recent past, but the past nevertheless. The “present” is physically and intellectually unattainable, as any Zen story will easily convince you. All economic and political theory, every “big idea” – Marxism, anarchism, liberalism, whatever - fails to various degrees when it comes to predicting the future. Why is this so? Why can’t anyone tell us how the world or the economy will be in five or ten years’ time or, in fact, tomorrow morning?

Two reasons: firstly, economic and political theories do not have – not yet, anyway – a solid scientific basis in the natural sciences. Secondly, economic and political theorists do not really believe that they need such a basis in order to develop and debate their ideas. They inherently accept that the economy and society are governed by non-natural forces and laws, and that these are “human-made” – whatever that may mean. Therefore, they strongly contend that their theories and ideas are, or potentially could be, the driving forces of economic and social phenomena. That, for example, “believing” in market economics, “accepting” the theory and developing instruments that support it (e.g. the IMF, the World Bank, etc.) will surely transform the world into a free market economy. That invading Iraq and installing a western-type democracy will transform Iraq into a western-type democracy. These notions are delusions, of course. It should be very obvious to any rationally thinking person that human beings and societies are not controlled, “closed” environments of deterministic interactions. We are a class of social animals partaking in the evolution of the cosmos whether we want it, think it, or not.
I suggest that the roots of this “Idea Delusion” are Platonic and rest with Plato’s Republic. It is, however, about time that such a delusion was revised and that economists and sociologists began to approach the workings of society in the same way as natural philosophers and natural scientists do. We need a new paradigm of social and economic theory, based on biology and systems theory, with predictive powers. Ideologies belong to the past.

Literary Constructivism

Immanuel Kant held that there is a world of “things in themselves” but that, owing to its radical independence from human thought, it is a world we can know nothing about; this is the most famous version of constructivism. Thomas Kuhn took the point further by stating that the world described by science is a world partly constituted by cognition, which changes as science changes.
Constructivism as an idea becomes very obscure when one tries to determine the connection between “things in themselves”, e.g. the “reality” of, say, electrons, and the “scientific concept” of an electron. My take on the problem is that science is a human endeavor and therefore inherently and implicitly bound by our brain’s capabilities; nevertheless, we do not “imagine” the world. The world exists; it is just not catalogued.
Things become a lot less murky when we shift from science to literature. Here, the author does not pretend to understand “things in themselves”. The literary agenda differs from the scientific one because it is assumed to be human-made. Books and book-worlds are of the imagination and for the imagination. Whenever I read about a flower I know that it is not the flower “out there”, in the “real” garden, but an imaginary flower, a rose inside the imagination of the writer, given to me to sample and behold in my own imagination. Thus, free of scientific intentions and false pretensions about realities, literature is constructivist by its own nature. The writer constructs her own world. There are no electrons in that world, only thoughts of electrons, sometimes, perhaps – but words nevertheless.
But what is the real difference between a literary and a scientific narrative? There is much difference, one would argue. The Big Bang can be described in words, and therefore constitutes a narrative in a literary sense; however, that narrative is underpinned by observation, experimentation and – most importantly perhaps – mathematization. Yes, math seems to make the big difference. The mathematical correspondence between scientific theories and the ticking of natural events is the definitive factor that appears to differentiate a scientific narrative from a literary one.
But does it, really? Can’t one “translate” maths into narratives too? I would suggest that it is possible. Even the most obscure of mathematical entities can be described in words, and indeed it must be described in words if it is to be understood. If one draws a parallel between math and music, then yes, music is beyond words; but so is everything else among the Kantian “things in themselves”, and only when music is descriptively articulated (as an “experience”) does it become part of the communal pool of ideas. Therefore, one arrives at a scientific narrative that is multi-layered and has many subplots, but starts to look more and more like a literary narrative. Could we then take the point one step further and suggest that, as literary criticism is the evolutionary force of literature, guiding it towards new directions, scientific peer review does similar wonders in determining the directions of scientific research, shifting paradigms, and revolutionizing theories (i.e. narratives)?

Utopias and Dystopias

Thomas More published his book Utopia in 1516, based on the Republic of Plato. The Greek origins of the term, as well as the inspiration, are rather telling. The word means the place that does not exist. Plato, in his original work, which aims to provide a context for what he defines as “virtue” (αρετή), positions his thesis on the unattainable. True virtue can only be realized in the world of ideas “out there”; not in our coarse world of shadows. The Republic is by definition unattainable. For the mind of Plato’s contemporaries, for his listeners if you will, “utopianism” did not hold the same meaning as it did in the 16th century, or indeed in our present time. In other words, the unattainable did not imply the will to attain; the interpretation must have been more literal, the way mathematics is, or even better perhaps geometry. There is no such thing as a perfect sphere. A perfect (platonic) sphere is unattainable, but that does not mean that having less-than-perfect spheres in the real world is a letdown. The Republic is a “perfect” society (at least according to its writer) in the same sense. That is exactly the difference in the narrative that I am trying to underline: the utopian of ancient times was a different animal from the utopian of modern times. The modern version is someone who has not given up on the idea of attainment. In fact, rather the contrary: the attainment of a utopia is a goal in itself. This is profoundly evident in the totalitarian dreams and nightmares of fascism, Nazism and Marxism: the Perfect society and the Perfect man (and woman). This is a major shift in the understanding of utopias between the ancient and the modern, and I cannot stress enough the importance and repercussions of this shift. When Aldous Huxley published “Brave New World” in 1932, the foundations of a utopian/dystopian narrative had already been laid. Indeed they had replaced liberal realism – if ever such a thing existed. Utopias or dystopias are attainable, either by action or inaction. Climate change is a case in point, as is the idealist-utopian perception of a westernized Middle East that led President Bush to invade Iraq. The New Narrative at work is aiming to fulfil its unattainable prophecies.

Downloading Consciousness

Frank Tipler, in his new book “The Physics of Christianity”, makes a number of interesting – some would even say amusing – claims with regard to the “end of days” as he sees it. I would like to focus on two of those claims, predictions in fact, which Tipler estimates will occur by the year 2050. The claims are:

1. Intelligent machines [will exist] more intelligent than humans
2. Human downloads [will exist], effectively invulnerable, and far more capable than human beings

I shall go very quickly through the first claim, suggesting that, depending on how one measures “intelligence”, computers far outsmart humans even today. The few remaining computational problems, which deal mostly with handling uncertainty and comprehending speech, I expect to be effectively sorted out sooner than the date Tipler suggests. I see no trouble with that. Artificial Intelligence (AI) is an engineering discipline solving an engineering problem, i.e. how to furnish machines with adequate intelligence to perform executive tasks in situations and/or environments that humans had better avoid.
The second claim, however, is truly fascinating. To suggest a human download is equivalent to suggesting the codification of a person’s personhood into a digital (or other, but digital should suffice) format. Once a “digital file” of a person exists, then downloading, copying, and transmitting the file are trivial problems. But should we really expect to download human consciousness by the year 2050 – or ever? There are four possible answers to this question: Yes (Tipler, or the techno-optimist), No (the absolute negativist), Don’t know (the agnostic crypto-techno-optimist), and Cannot know even if it happened (the platonic negativist).
Let us now take these four responses in turn, in the context of the loosely defined term “consciousness” as the sum total of those facets that, when acting together, produce the feeling of being “somebody” in the I of each and every one of us.

1. The Techno-optimist (Yes). This view treats life as an engineering problem and thus falls back on the AI premise of solving it. The big trouble with this view is that if consciousness is indeed an engineering problem (i.e. a tractable, solvable problem), then it is also very likely a hard problem indeed. “Hard” in engineering can be defined in relation to the resources one needs in order to solve a problem. Say, for example, that I would like to build a bridge from here to the moon. Perhaps I could design such a bridge, but when I got down to developing the implementation plan I would probably find out that the resources I need are simply not available. Similarly, with consciousness, one may discover that in order to codify consciousness in any meaningful way one might need computing resources unavailable in this universe. This may not be as far-fetched as it sounds. For example, if we discover that the brain evokes the feeling of I by means of complex interactions between individual neurons and groups of neurons (which seems a reasonable scenario to expect), then the computational problem becomes exponentially explosive with each interaction. To dynamically “store” such a web of interactions one would need a storage capacity far exceeding the available matter in our universe (see the back-of-envelope sketch after this list). But let us not reject the techno-optimist simply on these grounds. What we know today may be overturned tomorrow. So let us, for the time being, try to keep our options open and say that the “Yes” party appears to have a chance of being proven right.

2. The absolute negativist (No). Negativists tend to see the glass as half-empty. In the case of Tipler’s claim, the “No” party would suggest that the engineering problem is insurmountable. Further, they would probably take issue with the definition of consciousness, claiming that you cannot even start solving a problem which you cannot clearly define. I would say that both these arguments fall short. Engineering problems are very often ill-defined and yet solutions are found. And as for the “impossibility” of finding adequate memory to code someone’s mind, we will have to wait and see whether it truly turns out that way. The negativists, in this case, may also include die-hard dualists.

3. The agnostic (crypto-techno-optimist) responds “skeptically” and is a subdivision of the techno-optimist. She is a materialist at heart but not so gung-ho as the true variety of techno-optimist.

4. The platonic negativist (cannot know even if it happened). Now here is a very interesting position, the true opposite of the techno-optimist. The platonic negativist refuses to buy Tipler’s claim on fundamental grounds. She claims that it is not possible to tell whether such a thing has occurred. In other words, the engineer of 2050 may be able to demonstrate the downloading of someone’s consciousness, but she, the platonic negativist, will stand up and question the truth of the demonstration. How will she do such a thing? I will have to expand on this premise – which is, in fact, the neo-dualist attack on scientific positivism. Suffice it to say that she will base her antithesis on the following: any test to confirm that someone’s consciousness has been downloaded will always be inadequate, based on Gödel’s theorem.
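As a back-of-envelope illustration of the techno-optimist’s storage worry in point 1 above, here is a short Python sketch. The figures are rough, commonly quoted estimates (about 8.6 × 10^10 neurons in a human brain, about 10^80 atoms in the observable universe), and the “groups of neurons” are counted crudely as subsets; the numbers are purely indicative, not a claim about how the brain actually works.

# A back-of-envelope sketch of the "exponentially explosive" storage argument.
# The inputs are rough literature estimates; only the orders of magnitude matter.

from math import log10

neurons = 8.6e10      # approximate neuron count of a human brain
atoms_log10 = 80      # the observable universe holds roughly 10^80 atoms

# Counting every possible group (subset) of neurons gives 2^neurons groups.
# The base-10 logarithm of that count is neurons * log10(2).
groups_log10 = neurons * log10(2)   # about 2.6e10, i.e. a number with ~26 billion digits

print(f"Possible neuron groupings: about 10^{groups_log10:.3g}")
print(f"Atoms in the observable universe: about 10^{atoms_log10}")
if groups_log10 > atoms_log10:
    print("Even one bit per grouping would exceed the matter available for storage.")

Whether consciousness really requires anything like this exhaustive bookkeeping is, of course, exactly what the “Yes” and “No” parties disagree about.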

Of course, the very essence of the aforementioned debate, i.e. whether or not consciousness can be downloaded, lies at the core of the New Narrative with respect to the revisionist definition of humanness. But this is a matter that needs to be discussed further.

Science as Religion

Science and Religion can easily be compared in the context of narrative. Both are narratives (see also “The Book of the Universe”). Religious narratives always combine an explanation of the world with a set of moral instructions. Within the set of moral instructions we must include ritual, which is by definition a moral instruction too. Science, since Descartes, has excluded itself from advising on rituals or good behavior and has concentrated solely on the business of explanation.
Although much doubt about the explanatory power of Science still exists amongst many of our fellow human beings, I will argue that any well-meaning person with the capability – and courage – for rational thinking ought ultimately to accept Science as the best of all possible explanations of the world. One may even argue that one does not need to “believe” in Science in order to accept its validity, in contrast to a belief system such as Religion. I, on the contrary, will argue that acceptance of Scientific Truth and belief in Religious Truth are not as dissimilar as they appear.
Religion assumes a Higher Intelligence in trying to make sense of the cosmos; this intelligence (a “God”) may be self-conscious (“theism”) or not (“deism”); but without It there can be no complete explanation. Science, based on its methodology and the striking observation that the Universe appears finely tuned for life, has arrived at a very puzzling conclusion: that the Universe is either a product of blind chance (“one of many universes”) or that there is a “fifth element” at work which always “obliges” a Universe to arrive at a life-supporting version. This “fifth element” could be an as-yet-unknown law of nature that shapes a causelessly-created Universe into an anthropic one. The first hypothesis is supported by the “democrats” and the second by the “aristocrats” (see “Spontaneous Dichotomy in scientific debate”).
Arguably, both scientific hypotheses require a great deal of belief, at least for now, since neither can be proved or disproved. String theory, as well as loop quantum gravity, has tried to suggest feasible experiments in order to examine which of the two hypotheses is the true one; but so far neither has succeeded in doing so. One hopes that they soon will. But if Science and Religion both share a certain degree of belief, what of the moral instructions? If I am to compare the two, then I should take issue with the second axis of a religious narrative too, which tells me how – and why – I should behave in a certain way; eat fish and not pork, for example, or face Mecca and bow five times a day, or sacrifice a cock on a full moon, or never tell lies. Lately (see also “The Book of the Universe”) it has been argued by many prominent scientists and philosophers that Science can also do exactly that: teach Humankind a moral code of self-regulation and mutual respect. Science is becoming more and more like Religion.

The New Narrative as a Simulation

The end of WWII may be regarded as a watershed in the political and cultural history of the West. The devastated European hinterland, the hecatombs of the Holocaust, and the radioactive emptiness of Hiroshima and Nagasaki demonstrated adequately enough that there were no limits to what human beings could do when it came to war and systematic killing. History, the narrative of human affairs marked by recurrent outbursts of butchery, came to a sudden halt. What caused the halt were the horrors of WWII.
Since then war, and therefore history (the description of wars), has become sublimated, in a pseudo-Nietzschean sense. This pseudo-sublimation is in fact a simulation. Thus, instead of continuing after WWII with a nuclear war, the US and the Soviet Union went into a “Cold War”, i.e. a non-war, a sublimation of war, a simulated war, a televised war (in Korea, Vietnam, etc.). History became a simulation of history.
The biggest casualty of this pseudo-sublimation was reality itself: its collective rejection is evident in culture and politics. Culture became a simulation of culture and politics a simulation of politics.
By the word simulation I mean the representation of reality in an iconic form, in order to control it. Examples are effigies of gods or the Orthodox icons: with time the object becomes an object of worship, it “attains” a holiness of its own, it becomes a “superobject”.
Naturally, as the simulation becomes better and better, it begins to control its creators. The one who worships the icon gets furious at someone who does not – in this case the iconoclast receives the wrath of the icon-worshiper. Another example is simulated war games. The “adversaries” act as if they are truly fighting inside a computer-simulated environment of a war theatre. Very soon, however, the war becomes “real” in the sense that the adversaries “forget” that this is just a simulation and become emotionally involved in the process. They actually “feel” pain. Of course, they do not die when shot at, but they “die” in every other sense; and in a simulation this is just as good as dying for real.
History is simulated, though not through some supercomputer, of course. We are not plugged into some kind of “Matrix”. I will argue, on the definition of the New Narrative that I have already given, that this simulation is the New Narrative. What do I mean by that? The New Narrative began before WWII, but it became the dominant expression of culture soon after the fall of the Berlin Wall in November 1989. Derrida declared “Il n’y a pas de hors-texte” (“there is nothing outside the text”). I understand this “texte” as the New Narrative, the main characteristics of which are the ubiquity of television and the impact of advertising on the collective consciousness. Of course, literature, the visual arts and cinema are also part of the New Narrative. But I believe that it is television and advertising that support the pseudo-sublimation of the animal instincts in us, which in any other case would have used nuclear weapons without a second thought. Of course, in order for the simulation to work, violence must be there too, which explains why there is so much violence in TV, video games, etc.
It is very interesting here to draw a parallel with the Edo period in feudal Japan, which followed centuries of bloody internecine strife. During the Edo period, war was all but outlawed. The shogun kept the families of the feudal lords in Edo as hostages, thereby securing the lords’ obedience. The samurai, the warrior class, suddenly out of a business (or a way of life), developed the martial arts as a “way of the mind”, incorporating elements of Zen. At the same time the arts flourished, but as a detached approach to life. Zen is the ultimate simulation. It is nothingness. At the beginning of the 21st century, western culture (and with it the rest of the world too) adheres daily to Zen-fascist slogans of the type “Just do it!” (of Nike shoes). It lives for the moment. The future is constructed through computer models that simulate climate change, the orbits of threatening meteorites, the spread of epidemics – in other words, threats to survival that do not exist in the NOW. Why is that? Why is the world so afraid, when the world environment has never been more secure? Again, the reason is that history is simulated so that we retain the existence of fear, so important in order to feel anything at all, whilst at the same time we inherently trust the system to secure us from “real” mutual obliteration.
I will explore the results of this phenomenon further. Suffice it to say, for the time being, that post-humanism is the logical consequence of simulated history. Neuro-prostheses, moral relativism and the refusal of external realities (i.e. the “sinking” of minds into minds) constitute the new social reality, ever more distanced from the real. The problem, of course, is that the simulation has created such an environment that it is virtually impossible to distinguish what is real and what is not. The image has fused with the object.