Monday, April 27, 2009

The AI Singularity

The AI Singularity has been defined as a future point in history when machine intelligence will surpass human intelligence. A technological transition event of this magnitude has been compared to a cosmological “black hole” with an “event horizon” beyond which it is impossible to know anything.

The idea of the AI Singularity fails philosophically because it assumes the following clause: “I am not smart enough to know what smarter is”. This clause is implicit in “recognizing” a hyper-intelligence. The acid test of passing through the AI “event horizon” is not that you or I are flabbergasted by the amazing smartness of machines, or of human-machine fusion. The acid test is to be unable to comprehend anything at all! Smartness beyond the AI Singularity event horizon is defined as so vast that no ordinary human mind can even realize it is there. It is like asking a chimp to realize how much smarter a human is. The chimp cannot possibly know that. The question (for the chimp) is meaningless. And equally meaningless would be the question for the human of the future crossing the event horizon of the AI Singularity. For all we know, there might exist today – or have existed for centuries, or millennia – intelligences higher than ours. Perhaps we live in a “Matrix” world created by intelligences smarter than us. Perhaps we crossed the AI Singularity event horizon many centuries ago, or last year. But we can never possibly know that.

The AI Singularity thus reduces to the “brain-in-a-vat” argument. However, a brain-in-a-vat cannot possibly know that it is a brain-in-a-vat, because the statement “I am a brain-in-a-vat” is self-refuting. Therefore, to claim such a thing is nonsense.

The AI Singularity also fails scientifically. In order to have any sensible discussion on intelligence - human, machine or otherwise - we need a theory of intelligence; which we do not currently have. Major questions are looming, the most significant of which is the relationship between a mind and a brain. Until such scientific problems have been clearly defined, researched and reduced to some testable causal explanatory model, we cannot even begin to imagine “machine intelligence”. We can of course (and we do) design and develop machines that perform complex tasks in uncertain environments. But it would be a leap of extreme faith to even compare these machines with “minds”.

The AI Singularity fails sociologically too. It is a version of transhumanism, which is based on a feeble and debatable model of human progress. It ignores an enormous corpus of ideas and data relating to human psychological and cultural development. It assumes a value system of ever better, faster, stronger, longer, i.e. a series of superlatives which reflect a social system of intense competition. However, not all social systems are systems of competition. Indeed, the most successful ones are systems of collaboration. In the social context of collaborative reciprocity, superlatives act contrary to the common good. To imagine a society willing to invest resources in building the intelligence of its individual members is to imagine a society bent on self-annihilation. To put it in simple terms: if I am the smartest person in the world, if I can solve any problem that comes my way, if I can be happy by myself – why should I need you?

The Precautionary Principle

Stavros Dimas, the European Union’s Environment Commissioner, ignoring the opinion of the European Food Safety Authority (EFSA), recently [nb. 2008] indicated that the Commission will ban two genetically engineered varieties of corn because of the potential harm they may cause to certain beneficial insects. At present, only one transgenic crop can be cultivated in Europe: Monsanto’s MON810 insect-resistant maize, which now comprises nearly 2% of maize grown in Europe.

In the on-going debate about Genetically Modified Organisms (GMOs) on Europeans’ plates, the mantra is the so-called “precautionary principle”: the idea that regulation should prevent or limit actions that raise even conjectural risks, particularly when the scientific evidence is inconclusive. Add to this the widespread feeling in society that science is moving too far ahead and becoming more and more incomprehensible – and should therefore be made to slow down, or even stop – and you get a neo-luddite backlash against anything biotechnology has to offer.

But is this the way to go? If the world rejects GMOs, what other options do we have in order to feed ourselves? Traditional farming, through the intensive use of pesticides and fertilizers, is known to be linked to serious health hazards such as cancer, and uses great amounts of fossil fuels which contribute to climate change. Organic farming is often quoted as a valid alternative, but the figures simply do not add up. We are six billion people on this planet, two billion of whom live on the edge of starvation. Put in this wider context, organic farming, with all its splendid benefits, begins to look more like a luxury to be afforded by rich westerners only.

GMOs have gotten a bad name. They are considered an environmental risk. Releasing mutated organisms in nature, say their opponents, could spread havoc to natural evolution and cause untold damage to ecosystems. What one usually does not hear is that most traditional plant-breeding techniques are simply imprecise forms of genetic engineering. Cross-fertilization and cross-breeding are the most obvious ones but there is also mutagen breeding, whereby plants are bombarded by X-rays, gamma rays, fast neutrons and a variety of toxic elements in an attempt to induce favorable chromosomal changes and genetic mutations. The difference is that genetic engineering is a more targeted and precise method, which has the potential to avoid large scale environmental contamination.

The second big fear is the impact on health. I have often been told that “mutant organisms cause cancer, everyone knows that!” The truth may in fact be the very opposite. Traditional farming causes cancer, and there have been numerous epidemiological studies that confirm this. GMO farming may even prevent cancer, as in the case of the transgenic corn that Dimas wants banned. The particular corn product releases a protein that is toxic to insects but harmless to mammals (such as humans). By preventing the corn from being invaded by insects, it protects the product from a very dangerous fungal toxin called fumonisin, which is a known carcinogen and a cause of neural tube defects in newborns. The transgenic corn has been shown to contain 900 percent fewer fungal toxins than the non-GMO corn variety grown by traditional and organic farmers.

Nevertheless, Europe says no. Maybe because we Europeans suspect that big American-based multinational crop companies, such as Monsanto and Syngenta, want to monopolize the agro-food business, to the detriment of our environment and health, no matter what scientists say. After all, scientists could be on their payroll too. I can agree that one should not trust corporations with the public good, and that government regulations as well as alert and well-informed citizens are society’s best defenses. However, the current market of big monopolies selling traditional - as well as transgenic - crops to farmers may be about to change, and change rapidly too.

Synthetic biology is nowadays taught at undergraduate level, and any biology student can create her own artificial organism on a Petri dish. Techno-optimists, such as physics professor Freeman Dyson, are heralding a new era where “domesticated biotechnology, once it gets to the hands of housewives and children, will give us an explosion of diversity of new living creatures, rather than the monoculture crops that big corporations prefer.” Just imagine that future: you can grow whatever you like in your back yard, creating your own varieties of plants and animals, free from the monopolizing corporations. On the other end of the spectrum, technophobes see this as a nightmare scenario where bioterrorism runs rampant and Earth’s ecology is disrupted beyond control. GMOs are the tip of a great big iceberg that floats to our shores, whether we want it, ban it, or not.

The risk of being wrong

However, let me focus on the infamous precautionary principle. The first, and perhaps most obvious, problem with the precautionary principle is that it takes no account of the cost of non-action. For example, let us say that I have a serious heart problem and my doctor thinks I should be given an artificial heart. Unfortunately, no-one can absolutely guarantee that the artificial heart will not fail, thus resulting in my death. By applying the precautionary principle I should refuse the treatment. However, if I do not get the treatment my death is certain. The principle, in this case, should be overridden. And yet, when one moves away from managing the risk of a certain technology to one’s own life or well-being, and toward decisions taken by governments or by the European Union, the Precautionary Principle of risk management holds increasing appeal for politicians and policy-makers. It is a completely different thing for me to decide what to do with my health than to decide what should be done with everyone else’s health.

In the latter case the appeal of the Precautionary Principle is irresistible to politicians and policy-makers, as well as to various advocates of the public good. They are certain to win favor with society, because citizens, following their instincts, will tend to support a precautionary policy.

Reality vs. experiments:

                          Reality (idealistically)
Experimental result       - (not harmful)        + (harmful)
- (not harmful)           True                   False negative
+ (harmful)               False positive         True
Evolution has programmed us to avoid false negatives at all costs (an error made when we fail to connect A to B, when A is truly connected to B – for example, I did not flee upon hearing a noise, when in fact the noise was a lion coming to get me!). In the case of GMOs a false negative is when the experiment shows that something is harmless when in fact it is harmful (see table above). By the same token evolution has made us more relaxed with respect to false positives (an error made when A is falsely connected to B, for example I hear something which may be a lion about to attack me, and I flee – but there was no lion). In the case of GMOs a false positive is when the experiment shows that something is harmful when in fact it is harmless (see table above). Our survival has depended over eons upon evaluating false negatives as being more risky than false positives. In other words, since we are not absolutely certain that GMOs are safe, let us ban them.
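Read as a decision rule, this asymmetry can be made concrete with a few lines of code. The sketch below is my own illustration rather than part of the original argument, and the cost numbers are arbitrary placeholders; it only shows how weighting false negatives far above false positives pushes a decision-maker toward “when in doubt, ban”.

```python
# Illustrative only: label the four cells of the table above and apply a
# hypothetical, heavily asymmetric cost to each type of error.

def classify(experiment_says_harmful: bool, really_harmful: bool) -> str:
    """Return the label of one cell of the reality-vs-experiment table."""
    if experiment_says_harmful and not really_harmful:
        return "false positive"
    if not experiment_says_harmful and really_harmful:
        return "false negative"
    return "true result"

# Hypothetical evolutionary weighting: missing a real danger is near-lethal.
COST = {"true result": 0, "false positive": 1, "false negative": 100}

cells = [classify(e, r) for e in (False, True) for r in (False, True)]
print(cells)                                # the four cells of the table
print(sum(COST[label] for label in cells))  # total is dominated by the false negative
```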

The Gap

In the case of technology, however, the possibility of false negatives will always be with us. Owing to the intrinsic nature of statistical errors, as well as the philosophical impossibility of knowing for certain the link between cause and effect, there can be no absolute knowledge of experimental consequences. Science is a systematic, logical and experimental method of probing into an “ideal” realm that we hypothesize exists and that we call “reality”. We will never know if “reality” really exists. Therefore, the sum total of all our experiments does not exclude the possibility of a false negative.

In the case of science a false negative is a blessing, as it may falsify a given theory. This is another reason for the usual lack of understanding between science and society. Scientists love false negatives, but society does not. Scientists are usually more ready to take risks with new technologies than citizens.

If we do not wish to return to pre-industrial times, there can be no other way forward than through technology. One may argue that this is a recipe for humankind’s eventual doom. I would, however, like to remind the reader of Malthus and his predictions of an unsustainable world that would supposedly be unable to feed its billions of people. Only technology can beat demographics, as the “green revolution” of intensive farming proved. In a projected world of 9 billion people, everyone should be given an equal chance of economic and social development. We cannot hope to achieve this by banning or over-regulating technological progress. The only thing we can do is develop better channels of communication between science and society, to bridge the gap mentioned above, and prepare society for the changes ahead.

When I see a tree what do I, really, see?

The question of what comprises an external reality is ancient. The frustrated physicist cries “shut up and measure”; and he is right, for all that we can know is only that which we can measure. The rest, we cannot know. The rest is the unknowable. We can only describe the knowable, and so we do through science, and sometimes through art. Optimists amongst us contend that we could also describe the relationship between the knowable and the unknowable. I think that we can do so only as a conjecture. Let’s call it “the objective world conjecture”. Our favorite tool here is logical abstraction; that is, thinking about non-objects. In the abstract, therefore, we can assume that there exists U, the unknown. And K – the known – is, somehow (via the mysterious chance mechanisms of evolution perhaps), configured within U. This is a conjecture that, alas, we can never prove. The objective world will be forever unknowable. The reason for this should be obvious from the definition we give to U: K is always a subset of U; K+K1, where K1 is a new discovery, is also a subset of U; and so on, for any Ki, ad infinitum.
From this U-neverland arise the ghosts of our measurements, the contents of our consciousness, and the K-objects of our senses, emotions and feelings. When I look at a tree I see the only thing that can be seen. I call it a “tree” and, if I utter the word in any language, or draw a “tree”, the overwhelming majority of my species will intuitively imagine a “tree”, different in its details but similar in its essence. Our “essential tree” is an object of K made up of things we decide to call “cells” and “atoms” and so on.
Interestingly, as we expand through scientific observation the limits of K into the (conjecturally) vast expanse of U, we arrive at logical paradoxes. The objective world conjecture is paradoxical in itself, since we assume an increasing infinite progression of Knowledge (ΣKi) which forever remains a subset of U.
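Put in symbols, the conjecture reads as follows (this is only shorthand for what was just said, nothing more):

```latex
% Knowledge K grows by discoveries K_1, K_2, ... yet, by the definition of U,
% the whole progression remains strictly inside the unknowable U.
\[
K \subset U, \qquad K \cup K_1 \subset U, \qquad
\bigcup_{i=0}^{\infty} K_i \subsetneq U .
\]
```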
The more abstract our reasoning the more paradoxical it becomes. And we thus arrive at the ultimate walls of K. When classical “objects” fade into quantum ghosts, our “K-trees” become U-trees, things of the unknowable; and we are fenced inside the realm of subjectivity, the only reality knowable.

Social Echo and Rational Economics

In the construction of economic theories, from Plato and Aristotle to Adam Smith, Karl Marx, John Maynard Keynes and beyond, assumptions about human nature have taken their toll on the effective predictability of markets and systems. The need to predict, plan and reason about systemic behaviour, too strong to admit postponement, overran the obvious lack of knowledge about the human animal. Economics thus framed new ideologies, such as Marxism or capitalism, which in turn guided scientific exploration. An example of ideological framing causing a sociological paradox is eugenics. Since Galton and Darwin, eugenics has been the logical projection of evolutionary theory onto humans. But although the idea was accepted in the beginning by liberals and leftists, it became an abomination following its corruption by the Nazis. Since the end of WWII, whenever it resurfaces it is slammed down by an almost hysterical reaction from academia and the press (James Watson being a recent victim of this). And yet the idea persists, albeit dressed in other guises: in socialist-inspired models of egalitarianism, in laws that regulate abortions to the detriment of the middle classes, in genetics research (cloning in particular), and in educational systems. Eugenics makes one huge assumption: that by selecting for higher intelligence humanity will become, in a few generations, less violent, more altruistic and wiser. And yet the correlation between higher intelligence and the desired attributes of peace, social cohesion and wisdom is a weak one. It smacks of wishful thinking and cultural bias. Human beings are highly intelligent animals, not angels, not creatures that stand apart from the rest. We are apes, and when push comes to shove we act like apes too – regardless of our intelligence or good manners. War is the most obvious testament to our innate cruelty. If anything, our ingenuity seems to have been invested more in engineering war machines than in anything else.
Recent discoveries in neuroscience promise to shed much-needed light on what constitutes a human being. Deciphering the unconscious circuitry would be a task of monumental proportions, dwarfing the Human Genome Project by several orders of magnitude. Alas, the result – if successful – will be of limited use. As in the case of genes, interactions in the whole seem to play a more important role than the interacting parts alone. The infusion of neuroscience into sociology and economics is a welcome development. Nevertheless, owing to the infancy of neuroscience, it should be expected that social framing will once again take the upper hand, leading to new “realistic” economic theories that reason about the animal inside us. Once more, experts will reason about social systems, forgetting that human activity is diffused and dominated by unconscious, autonomic, neuropsychological systems that enable people to function effectively without always calling upon the brain’s scarcest resource – attentional and reasoning circuitry. To reason about non-reason is a paradox that we cannot escape. We can only understand an echo of what human society is truly all about. The dilemma will always be how much we want to believe in it.

Whence narrative?

Recently, I happened to be coordinating a public discussion on literature at the National Research Foundation. Being the organizer of the discussion, my objective was to explore, with the aid of an English Literature professor and a writer/critic, fictional narratives as inroads to humanness. Indeed, what could have been more profound than that! Still, my “ulterior” motive was to compare literary narratives to scientific ones.
Psychologists use personal narrative routinely as a way to explore their patients’ personalities, but I was more interested in seeing whether there was some connection to non-personal narratives, such as the Big Bang theory or evolutionary theory. Then a question came from a member of the audience: why does the brain produce narratives? In a way, the question relates – at a higher level – to the structure of memory. And yet memory has a biological foundation. What could that foundation be?
I am no expert, but the only way I could answer the question (“whence narrative?”) would be to point to our “sense of time”. Why do we have a perception of time? Obviously there have been evolutionary reasons for it. The interchange of light and darkness and the resulting circadian rhythm endow us with the sense of “before”, as distinct from “now” and “after”. And yet: of all the things that modern physics tells us about the Universe, of all the quantum paradoxes of electrons being in many places at the same time, the most unimaginable of all is that time does not flow but is still. That time is an “illusion”. This seems not only counter-intuitive (so much of modern physics is) but unimaginable. A universe where time is like length, or width, a dimension along which points stretch and exist regardless of us being there – a universe of still time, in other words – is a universe without narrative. Therefore, a universe without before, now and after. Our minds evolved to produce narratives, and therefore the only way we can comprehend anything succinctly and effectively is by incorporating it somehow into a narrative. The more explicit the narrative, the more comprehensible the object, and vice versa. Abstraction, the loss of narrative, equates with artistic amnesia: it freezes time to a standstill, reduces space to a point and communication to a silent pause.

Doomsday Narratives

When discussing the formation of ideas for the Society of the Future, a most important element of the synthesis is prophecy. Prophecy has been an important contributor to the development of western thought. Although elements of ritualistic foretelling can be found across many cultures, it is in the West that foretelling was historically institutionalized, whether one refers to the Oracle of Delphi or the Prophets of the Bible. The underlying theme of prophecy is the juxtaposition of teleology with moral conditioning. In other words: do what you must do, for if you do otherwise you will be lost in the whirl of upcoming events. Therefore, prophecy frames the ethical debate of today by stressing the daunting onslaught of a frightful tomorrow. Redemption is offered only if one falls back.
The idea of prophecy has transformed into the idea of predictability as the western world moved away from a mostly metaphysical explanatory model towards a materialistic one. Science and engineering succeeded in the social arena because they offered consistently predictable results. Indeed, it is at the core of scientific ideology that experimental results must be verified by independent repetition. This means that a prediction made by a theory – a mini-prophecy in disguise, if you will – must be verified in various labs to hold any water.
Prophecy in the pre-scientific Christian world was dominated by the Book of Revelation, which in turn expressed pre-existing ideas of “telos”, a word meaning both “end” in the temporal sense and “end” in the sense of purpose. As science takes over, the prophetic narrative transforms and gradually finds its way into science fiction. In turn, science fiction not only nourishes the imagination and aspirations of scientists-to-be (most scientists were sci-fi fans when they were children), but also fuels the media debate whenever ethical issues in science and technology are raised. The latter happens because contemporary media is primarily a narrative-transformation machine that recycles stories and threads of stories by adding sensationalism, in order to attract attention.
I think that there are three distinctive “doomsday stories” that haunt us today. The first I label “post-apocalyptic primitivism”. It implies an ecological catastrophe. This could happen as a result of a nuclear war, a change in the climate, a runaway virus, a hit by a meteorite, etc. The result is prophesied as a collapse of civilization and the regression of the human race (provided anyone survives) to a primitive state. Post-apocalyptic primitivism is the logical extension of the Book of Revelation (or the Nordic myth of the twilight of the gods, if you prefer another context).
The second doomsday narrative for the future I will label the “AI Singularity”. This is the assumed point in the future when machine intelligence surpasses the human one. At this point the narrative is broken suddenly. Nothing can be further predicted. An impenetrable discontinuity appears. The “event horizon” of the AI Singularity implies the end of the power of prediction, the nullification of prophecy and, to my mind, suggests the absolute negation of science.
The third doomsday narrative may be referred to as the “post-human scenario”. This implies a more controlled process for history, where technology fuses with humans and transforms the world and society. Humans become cyborgs, either as independent units incorporating a variety of mechano-electronic and biochemical paraphernalia, or as interdependent units hooked up to a grid, a kind of super-organism that fuels progress. This third narrative is the most optimistic of the three, mainly because it is inspired by utopian (or dystopian, depending upon your emotional inclination) ideas.
But I must return to these narratives later and analyze each in turn.

Climate, Apocalypse

Our planet has been warming up since the Industrial Revolution, mainly due to the accumulation in the atmosphere of carbon dioxide and methane, gases that result from the burning of the fossil fuels that spur and sustain our economic development. This is a scientific fact that no one doubts. Paleoclimatologists place the current trend in a wider context by comparing it with past periods of Earth’s history when warming occurred due to natural processes. We also know of the Milankovitch 100,000-year cycles that determine the phasing of ice ages with interglacials. According to those cycles we ought to be entering an ice age. Temperatures ought to be dropping and ice on the polar caps ought to be thickening. Ironically, our pumping of greenhouse gases into the atmosphere seems to compensate for all that.
Science can describe with relative accuracy the past and the processes by which we got where we are today. Logic, and some rather crude models that run on computers, predict that if we continue with business-as-usual many nasty things will happen to our environment. Indeed, many of those things are happening already. Earth is sick; there can be no question about it. And, very probably – why, almost certainly – it has been made sick by human industrial activity.
Two questions follow:
Question 1: Can we do something about it?
Question 2: Provided we can, what should we do?

The Kyoto Protocol, as well as the recent international discussion at Bali for the successor agreement, emphatically answer “Yes” to the first question and “Curb carbon emissions” to the second. There seems to be worldwide consensus on those answers and, with the exception of the US government and a handful of die-hard skeptics, the rest of the world seems willing to bite the bullet and proceed with a more responsible, equitable and collective stewardship of the planet, at a nominal cost. Sounds all right, doesn’t it?
Well it does, and perhaps it is. However, I will examine the nature of the two questions posed in order to argue that the climate change debate is not really about climate science, or economics. Inexactness is in the method and nature of both, and one could argue ad nauseam about the merits of different approaches, analyses and the like. Evidently, the issue is not an academic one, although scientists are involved in an unprecedented way and are being awarded not the Nobel Prize for Physics or Economics but the Nobel Peace Prize, a distinction usually reserved for politicians or activists.
Additionally, I would argue that the debate is not even a political one. Of course, many heads of state envy Al Gore and would like a piece of his action and fame. Climate change seems to galvanize electorates across the globe, even when no one really bothers to explain the details to them. But let us leave the cunning politicos aside, for their perfidiousness is well-documented. Even honest, well-wishing politicians fall back on inexact science in order to validate their decisions and, by doing so, fall into the abyss of folly. Their decisions usually evoke the insurance argument: curb emissions at a premium now in order to avoid dire consequences in the future. But the insurance argument is a weak one. Its weakness lies firstly in selecting and prioritizing future risks, and secondly in defining the premium. Climate change is not the only thing threatening future generations. If the world wants to buy insurance on its future safety, then funds need to be established to deal with rogue asteroids, supervolcanic eruptions, supernova explosions in the vicinity of our solar system, pandemics, plume explosions, and – why not – an invasion of Earth by an alien civilization. The list of possible threats can go on almost forever. And how about that premium? How much does it cost? How can one be certain that forgoing A% of the world’s output is optimal and not (A+B)%, when the science of prediction is so inexact? If one is talking about the future of the planet, shouldn’t one be generous with premiums? If the alternative is life on a scorched planet without wildlife, a barren rock in space, shouldn’t we consider eliminating greenhouse gases altogether as soon as possible? And if this sounds illogical, where is the logic in defining an optimal premium?

All in all, I will argue that the debate on climate change is principally and foremost an emotional one.

Both questions I asked above are scientifically unanswerable because they demand certainties. The fact that many scientists have become evangelical in their predictions is unassailable evidence of emotionalism blurring their better judgment. Healthy skepticism has been replaced by much-applauded scientific fundamentalism, which is rewarded with the Nobel. There have been reports of “tears” during the Bali meeting. And the manner in which the meeting proceeded reminds one of an operetta. Why so much passion? The emotional charge that permeates all debates on climate needs to be analyzed by sociologists and psychologists. By being “alarmist” and “apocalyptic”, cunning politicians join the chorus of scientists-turned-Bible-prophets in a replay of a very old story, namely the herding of the human flock under an ideological banner, this time the banner being “eco-friendly”. The threat is nature’s revenge on sinful humankind. God has been replaced by mystical natural forces, by a cybernetic Gaia. Often, the high moral ground is hijacked by atheists who re-discover faith dressed up as computer simulations of looming Apocalypse. The end is at hand, ladies and gentlemen! Repent! Shut the factories down! Shun your riches! Be poor in body and mind! Love thy neighbor! And redemption will surely come!

There is obviously a positive side to all this that needs to be mentioned. Al Gore has been explicit about it too. I will rephrase it as the dawning of a new era in international politics where leaders adopt a common, environment-centered, agenda that, ultimately, can lead only to cooperation and peace. In a world that is about to fall apart there is no point fighting. Or isn’t there?
Well, you see, when the debate is so emotionally charged, when logic and the inexactness of scientific argument have been replaced by certainties, feelings can swing either way unpredictably. You could have Al Gore’s fantastic vision of world cooperation and mutual support but, alas, you could also have war and mutual annihilation. And this is precisely the danger that the world faces as we are herded into taking decisions about the future, deluding ourselves that we are powerful enough to engineer the climate by adopting an equitable sharing of greenhouse gas quotas.

Simulation and non-local realism

Realism is the viewpoint according to which an external reality exists independent of observation. According to Bell’s theorem, any theory based on the joint assumption of realism and locality (meaning that local events cannot be affected by actions in space-like separated regions – something that Einstein would not swallow) clashes with many quantum predictions. In such cases, “spooky action at a distance” is necessarily assumed in order to explain phenomena such as quantum entanglement. This is called non-local realism. A recent paper by Simon Gröblacher et al. (“An experimental test of non-local realism”, Nature, Vol. 446, 19 April 2007, pp. 871-5) showed that giving up the concept of locality is not sufficient to be consistent with quantum experiments, unless certain intuitive features of realism are also abandoned. Let us see how these results may correspond to assumptions made in our simulation-based New Narrative.
I would like to take an example from cosmology, indeed the very simulation of the standard cosmological model. The logic of the simulation goes like this. Assumption 1: the universe maps onto an external reality which, somehow (i.e. via known, or as-yet-unknown, natural laws), also maps onto our coupled media of detection instruments plus consciousness. This may be called “the perceived universe”. It usually depicts an image of the cosmos, galaxies and gas clusters spreading in all directions and in all magnificence. A mathematical model is then developed, based on the prevailing cosmological theory, that aims to explain the “perceived universe”. This model runs on a computer, which is the external-reality substrate of the simulation. In other words, there is, or so we assume, a “reality” of hardware that runs our simulation (assumption 2). The result of the simulation is also an image of the cosmos. Comparing the two images, we refine the model further until the two images appear identical. When we achieve an identical pair of images, we conclude that our mathematical model has been successful, i.e. a valid description of external reality.
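The logic of that comparison loop can be written down schematically. What follows is only a sketch of the procedure as I have described it; observe, simulate and mismatch are hypothetical stand-ins for the telescope data, the model run and the image comparison, not any real cosmological pipeline.

```python
# A deliberately crude sketch of the refine-until-match loop described above.

def fit_model(initial_params, observe, simulate, mismatch,
              tolerance=1e-3, max_iterations=1000, step=0.1):
    """Refine model parameters until the simulated image matches the observed one."""
    observed_image = observe()            # assumption 1: the "perceived universe"
    params = dict(initial_params)
    for _ in range(max_iterations):       # assumption 2: a hardware substrate runs this
        simulated_image = simulate(params)
        error = mismatch(simulated_image, observed_image)
        if error < tolerance:
            return params                 # declared a "valid description" of reality
        # naive refinement: shrink every parameter in proportion to the residual error
        params = {name: value * (1 - step * error)
                  for name, value in params.items()}
    return params                         # best effort after the iteration budget
```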
It is obvious that our conclusion may be potentially flawed, on the basis of our two main assumptions. Furthermore, our assumptions call upon the quantum nature of the cosmos which, as the aforementioned paper has demonstrated, seems to reject non-local realism. Thus, we are left with a revision of assumptions about realism.
These revised assumptions happen to be inherent in the New Narrative, the most poignant revision of reality being the decoherence of Selfhood.
By deconstructing the Self, by rejecting the narcissism of psychoanalysis, by re-introducing a mystical layer of dualism into the nature of consciousness, we arrive at a counterfactual definiteness and at a world that is not completely deterministic. The result may be a chorus of out-of-tune artwork, but it is also a result closer to what our best validated experiments show. If “external reality” is a wonderland of curious objects and events, then our revised “inner reality” of the New Narrative is an equally exotic place. But should we take such a phantasmagoric correspondence as a sign of progress? Should we convince ourselves that we have, at last, by entering our self-simulated world, arrived serendipitously at the Holy Grail of “reality”? Ironically perhaps, the very dynamics of decoherence prevent us from answering such questions. When determinism is thrown out of the window, all one can do is lean on the ledge and peer outside, in wonder and astonishment, at the changing view of a perplexing world beyond our wildest imagination. And that is exactly what the New Narrative tries, and forever fails, to describe.

The Idea Delusion

Recently, a postgraduate student asked me, as part of his thesis (http://vasilis-thesis.blogspot.com/), to comment on the following claims by Keynes and Galbraith.
Keynes claimed that: "The ideas of economists and political philosophers (etc), both when they are right and when they are wrong, are more powerful than is commonly understood. Indeed the world is ruled by little else.” While Galbraith argued that: "Ideas would be powerful only in a static world because they are inherently conservative."
The student’s question was: these statements were made 45 or more years ago; in your opinion, which of the aforementioned statements seems valid in the modern world?
My answer was: “I would definitely side with Galbraith. Of course ideas appear to be powerful and they seem to exercise enormous influence on the way we perceive and interpret the phenomena of the world. I believe that Pol Pot is a case in point, as are the hapless millions who were killed because of the political ideas he carried inside his head. But were those millions killed because of political ideology, or because this ideology "happened" to take root in Pol Pot's head, who "happened", through an amazing series of coincidences, to live long enough and become powerful enough to enact those ideas against the poor people of Cambodia? Alas, our world is not shaped by decisions but by happenstance, by the dynamic confluence of sometimes interconnected and sometimes disparate factors that lead one way or the other, by virtue of systemic forces beyond our control – or comprehension. Only with hindsight do economists and political philosophers apply ideas to the facts and develop their "theories". The feeling of "power" ascribed to their ideas is therefore an illusion, and it has been so not only today but always and forever.”

I would like to expand somewhat on my answer. Firstly, let me point out that I am referring to economic and political philosophy, restricting therefore my treatment of the term “Idea” – at least for now – to these fields.

I believe the ultimate acid test that exposes the delusion I am claiming is very simple. Economic and political theorists develop ideas which explain (or try to explain) the past. Even those theorists who claim that they speak about “today” or the “present” are in fact speaking only about the past; the recent past, but the past nevertheless. The “present” is physically and intellectually unattainable, as any Zen story will easily convince you. All economic and political theory, every “big idea” – Marxism, anarchism, liberalism, whatever – fails to various degrees when it comes to predicting the future. Why is this so? Why can’t anyone tell us how the world or the economy will be in five or ten years’ time or, in fact, tomorrow morning?

Two reasons. Firstly, economic and political theories do not have – not yet, anyway – a solid scientific base in the natural sciences. Secondly, economic and political theorists do not really believe that they need such a base in order to develop and debate their ideas. They inherently accept that the economy and society are governed by non-natural forces and laws and that they are “human-made” – whatever that may mean. Therefore, they strongly contend that their theories and ideas are, or potentially could be, the driving forces of economic and social phenomena. That, for example, “believing” in market economics, “accepting” the theory and developing instruments that support it (e.g. the IMF, the World Bank, etc.) will surely transform the world into a free market economy. That invading Iraq and installing a western-type democracy will transform Iraq into a western-type democracy. These notions are delusions, of course. It should be very obvious to any rationally thinking person that human beings and societies are not controlled, “closed” environments of deterministic interactions. We are a class of social animals partaking in the evolution of the cosmos whether we want it, think it, or not.
I suggest that the roots of this “Idea Delusion” are Platonic and rest with Plato’s Republic. It is, however, about time that such a delusion is revised and that economists and sociologists begin to approach the workings of society in the same way that natural philosophers and natural scientists do. We need a new paradigm of social and economic theory, based on biology and systems theory, with predictive powers. Ideologies belong to the past.

Literary Constructivism

Immanuel Kant held that there is a world of “things in themselves” but that, owing to its radical independence from human thought, it is a world we can know nothing about; thus stating the most famous version of constructivism. Thomas Kuhn took the point further by stating that the world described by science is a world partly constituted by cognition, which changes as science changes.
Constructivism as an idea becomes very obscure when one tries to determine the connection between “things in themselves”, e.g. the “reality” of, say, electrons, and the “scientific concept” of an electron. My take on the problem is that science is a human endeavor and therefore inherently and implicitly bound by our brain’s capabilities; however, we do not “imagine” the world. The world exists, it is just not catalogued.
Things become a lot less murky when we shift from science to literature. Here, the author does not pretend to understand “things in themselves”. The literary agenda differs from the scientific one because it is assumed to be human-made. Books and book-worlds are of the imagination and for the imagination. Whenever I read about a flower I know that it is not the flower “out there”, in the “real” garden, but it is an imaginary flower, a rose inside the imagination of the writer given to me to sample and behold in my own imagination. Thus, free of scientific intentions and false pretensions about realities, literature is constructivist by its own nature. The writer constructs her own world. There are no electrons in that world, only thoughts, of electrons, sometimes, perhaps – but words nevertheless.
But what is the real difference between a literary and a scientific narrative? There is much difference, one would argue. The Big Bang could be described by words, and therefore constitute a narrative in the literary sense; however, that narrative is underpinned by observation, experimentation and – most importantly perhaps – mathematization. Yes, math seems to make the big difference. The mathematical correspondence between scientific theories and the ticking of natural events is the definitive factor that appears to differentiate a scientific narrative from a literary one.
But does it, really? Can’t one “translate” maths into narratives too? I would suggest that it is possible. Even the most obscure of mathematical entities can be described in words, and indeed it must be described in words if it is to be understood. If one draws a parallel between math and music, then yes, music is beyond words, but so is everything else of the Kantian “things in themselves”, and only when music is descriptively articulated (as an “experience”) does it become part of the communal pool of ideas. Therefore, one arrives at a scientific narrative that is multi-layered and has many subplots, but starts to look more and more like a literary narrative. Could we then take the point one step further and suggest that, as literary criticism is the evolutionary force of literature, guiding it towards new directions, scientific peer review does similar wonders in determining directions of scientific research, shifting paradigms, and revolutionizing theories (i.e. narratives)?

Utopias and Dystopias

Thomas More published Utopia in 1516, basing it on Plato’s Republic. The Greek origins of the term, as well as the inspiration, are rather telling. The word means the place that does not exist. Plato, in his original work, which aims to provide a context for what he defines as “virtue” (αρετή), positions his thesis on the unattainable. True virtue can only be realized in the world of ideas “out there”, not in our coarse world of shadows. The Republic is by definition unattainable. For the minds of Plato’s contemporaries, for his listeners if you will, “utopianism” did not hold the same meaning as it did in the 16th century, or indeed in our present time. In other words, the unattainable did not imply the will to attain; the interpretation must have been more literal, the way mathematics is, or better still geometry. There is no such thing as a perfect sphere. A perfect (platonic) sphere is unattainable, but that does not mean that having less-than-perfect spheres in the real world is a letdown. The Republic is a “perfect” society (at least according to its writer) in the same sense. That is exactly the difference in the narrative that I am trying to underline: the utopian of ancient times was a different animal from the utopian of modern times. The modern version is someone who has not given up on the idea of attainment. In fact, rather the contrary: the attainment of a utopia is a goal in itself. This is profoundly evident in the totalitarian dreams and nightmares of fascism, Nazism and Marxism: the Perfect society and the Perfect man (and woman). This is a major shift in the understanding of utopias between the ancient and the modern, and I could not stress enough the importance and repercussions of this shift. When Aldous Huxley published “Brave New World” in 1932, the foundations of the utopian/dystopian narrative had already been laid. Indeed they had replaced liberal realism – if ever such a thing existed. Utopias and dystopias are attainable, either by action or inaction. Climate change is a case in point, as is the idealist-utopian perception of a westernized Middle East that led President Bush to invade Iraq. The New Narrative at work is aiming to fulfil its unattainable prophecies.

Downloading Consciousness

Frank Tipler, in his new book “The Physics of Christianity”, makes a number of interesting – some would even say amusing – claims with regard to the “end of days”, as he sees it. I would like to focus on two of those claims, predictions in fact, which Tipler estimates will occur by the year 2050. The claims are:

1. Intelligent machines [will exist] more intelligent than humans
2. Human downloads [will exist], effectively invulnerable, and far more capable than human beings

I should go very quickly through the first claim, suggesting that – depending on how one measures “intelligence” – computers far outsmart humans even today. The few remaining computational problems, which deal mostly with handling uncertainty and comprehending speech, I expect to be effectively sorted out sooner than the date Tipler suggests. I see no trouble with that. Artificial Intelligence (AI) is an engineering discipline solving an engineering problem, i.e. how to furnish machines with adequate intelligence in order to perform executive tasks in situations and/or environments that humans had better avoid.
The second claim, however, is truly fascinating. To suggest a human download is equivalent to suggesting the codification of a person’s self into a digital (or other, but digital should suffice) format. Once a “digital file” of a person exists, then downloading, copying and transmitting the file are trivial problems. But should we really expect to download human consciousness by the year 2050 – or ever? There are four possible answers to this question: Yes (Tipler, or the techno-optimist), No (the absolute negativist), Don’t know (the agnostic crypto-techno-optimist), and Cannot know even if it happened (the platonic negativist).
Let us now take these four responses in turn, in the context of the loosely defined term “consciousness” as the sum total of those facets that, when acting together, produce the feeling of being “somebody”, the I in each and every one of us.

1. The Techno-optimist (Yes). This view assumes life as an engineering problem and thus falls back on the AI premise of solving it. The big trouble with this view is that, if consciousness is indeed an engineering problem (i.e. a tractable, solvable problem), then it is also very likely a hard problem indeed. “Hard” in engineering can be defined in relation to the resources one needs in order to solve a problem. Say, for example, that I would like to build a bridge from here to the moon. Perhaps I could design such a bridge, but when I get down to developing the implementation plan I will probably find out that the resources I need are simply not available. Similarly with consciousness: one may discover that in order to codify consciousness in any meaningful way one might need computing resources unavailable in this universe. This may not be as far-fetched as it sounds. For example, if we discover that the brain evokes the feeling of I by means of complex interactions between individual neurons and groups of neurons (which seems a reasonable scenario to expect), then the computational problem becomes exponentially explosive with each interaction. To dynamically “store” such a web of interactions one would need a storage capacity far exceeding the available matter in our universe. But let us not reject the techno-optimist simply on these grounds. What we know today may be overrun tomorrow. So let us for the time being keep our options open and say that the “Yes” party appears to have a chance of being proven right.
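To make the “exponentially explosive” remark concrete, here is some back-of-the-envelope arithmetic (my own illustration, using only rough standard estimates for the counts involved): if storing the web of interactions meant tracking every joint on/off configuration of n elements, the count 2^n overtakes the number of atoms in the observable universe at a few hundred elements.

```python
# Illustrative arithmetic only; both constants are rough standard estimates.

ATOMS_IN_OBSERVABLE_UNIVERSE = 10 ** 80
NEURONS_IN_HUMAN_BRAIN = 86 * 10 ** 9

def joint_states(n_binary_elements: int) -> int:
    """Number of possible joint on/off configurations of n elements."""
    return 2 ** n_binary_elements

for n in (100, 270, 1000):
    print(n, "elements ->", f"{joint_states(n):.2e}", "joint states")

print(joint_states(270) > ATOMS_IN_OBSERVABLE_UNIVERSE)  # True: already past the atom count
# With ~8.6e10 neurons, enumerating joint states is hopeless by any measure.
```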

2. The absolute negativist (No). Negativists tend to see the glass half-empty. In the case of Tipler’s claim, the “No” party would suggest that the engineering problem is insurmountable. Further, they would probably take issue with the definition of consciousness, claiming that you cannot even start solving a problem which you cannot clearly define. I would say that both these arguments fall short. Engineering problems are very often ill-defined and yet solutions are found. And whether the “impossibility” of finding adequate memory to encode someone’s mind truly turns out that way is something we will have to wait and see. The negativists, in this case, may also include die-hard dualists.

3. The agnostic (crypto-techno-optimist) responds “skeptically” and is a subdivision of the techno-optimist. She is a materialist at heart, but not so gung-ho as the true variety of techno-optimist.

4. The platonic negativist (cannot know even if it happened). Now here is a very interesting position, the true opposite of the techno-optimist. The platonic negativist refuses to buy Tipler’s claim on fundamental grounds. She claims that it is not possible to tell whether such a thing has occurred. In other words, the engineer of 2050 may be able to demonstrate the downloading of someone’s consciousness, but she, the platonic negativist, will stand up and question the truth of the demonstration. How will she do such a thing? I will have to expand on this premise – which is, in fact, the neo-dualist attack on scientific positivism. Suffice it to say that she will base her antithesis on the following: any test to confirm that someone’s consciousness has been downloaded will always be inadequate, by Gödel’s theorem.

Of course, the very essence of the aforementioned debate, i.e. whether or not consciousness can be downloaded, lies at the core of the New Narrative with respect to the revisionist definition of humanness. But this is a matter that needs to be further discussed.

Science as Religion

Science and Religion can easily be compared in the context of narrative. Both are narratives (see also “The Book of the Universe”). Religious narratives always interrelate an explanation of the world with a set of moral instructions. Within the set of moral instructions we must include ritual, which is by definition a moral instruction too. Science, since Descartes, has excluded itself from advising on rituals or good behavior and has concentrated solely on the business of explanation.
Although much doubt on the explanatory power of Science still exists amongst many of our fellow human beings, I will argue that any well-meaning person with the capability - and courage - of rational thinking ought to ultimately accept Science as the best of all possible explanations of the world. One may even argue that one does not need to “believe” in Science in order to accept its validity; in contrast to a belief system such as Religion. I, on the contrary, will argue that acceptance of Scientific Truth and belief in Religious Truth are not as dissimilar as they appear.
Religion assumes a Higher Intelligence in trying to make sense of the cosmos; this intelligence (a “God”) may be self-conscious (“theism”) or not (“deism”); but without It there can be no complete explanation. Science, based on its methodology and the amazing and self-evident observation that the Universe is so finely tuned for life, has arrived at a very puzzling conclusion: that the Universe is either a product of blind chance (“one of many universes”) or that there is a “fifth element” at work which always “obliges” a Universe to arrive at a life-supporting version. This “fifth element” could be an as yet unknown law of nature that shapes a causelessly-created Universe into an anthropic one. The first hypothesis is supported by the “democrats” and the second one by the “aristocrats” (see “Spontaneous Dichotomy in scientific debate”).
Arguably, both scientific hypotheses require a great amount of belief, at least for now, since neither can be proved or disproved. String theory as well as loop quantum gravity are trying to suggest feasible experiments in order to examine which of the two hypotheses is the true one; but so far neither has succeeded in doing so. One hopes that they soon will. But if Science and Religion both share a certain degree of belief, what of the moral instructions? If I am to compare the two, then I should take issue with the second axis of the religious narrative too, which tells me how – and why – I should behave in a certain way: eat fish and not pork, for example, or face Mecca and bow five times a day, or sacrifice a cock at full moon, or never tell lies. Lately (see also “The Book of the Universe”) it has been argued by many prominent scientists and philosophers that Science can also do exactly that: teach Humankind a moral code of self-regulation and mutual respect. Science is becoming more and more like Religion.

The New Narrative as a Simulation

The end of WWII may be regarded as a watershed in the political and cultural history of the West. The devastated European hinterland, the hecatombs of the Holocaust, the radioactive emptiness of Hiroshima and Nagasaki, demonstrated adequately enough that there were no limits to what human beings can do when it came to war and systematic killing. History, the narrative of human affairs marked by the recurrent outburst of butchery, came to a sudden halt. What caused the halt were the horrors of WWII.
Since then, war – and therefore history (the description of wars) – became sublimated, in a pseudo-Nietzschean sense. This pseudo-sublimation is in fact a simulation. Thus, instead of continuing after WWII with a nuclear war, the US and the Soviet Union went into a “Cold War”, i.e. a non-war, a sublimation of war, a simulated war, a televised war (in Korea, Vietnam, etc.). History became a simulation of history.
The biggest casualty of this pseudo-sublimation was reality itself: its collective rejection is evident in culture and politics. Culture became a simulation of culture and politics a simulation of politics.
By the word simulation I mean the representation of reality in an iconic form, in order to control it. Examples are the effigies of gods or the Orthodox icons: with time the object becomes an object of worship, it “attains” a holiness of its own, it becomes a “superobject”.
Naturally, as the simulation becomes better and better, it begins to control its creators. The one who worships the icon becomes furious at someone who does not – in this case the iconoclast receives the wrath of the icon-worshiper. Another example is simulated war games. The “adversaries” act as if they are truly fighting inside a computer-simulated environment of a war theatre. Very soon, however, the war becomes “real” in the sense that the adversaries “forget” that this is just a simulation and become emotionally involved in the process. They actually “feel” pain. Of course, they do not die when shot at, but they “die” in every other sense; and in a simulation this is just as good as dying for real.
History is not simulated through some supercomputer, of course. We are not plugged into some kind of “Matrix”. I will argue, on the definition of the New Narrative that I have already given, that this simulation is the New Narrative. What do I mean by that? The New Narrative began before WWII, but it became the dominant expression of culture soon after the fall of the Berlin Wall in November 1989. Derrida declared “Il n’y a pas de hors-texte” (“there is nothing outside the text”). I understand the “texte” here as the New Narrative, the main characteristics of which are the ubiquity of television and the impact of advertising on the collective consciousness. Of course, literature, the visual arts and cinema are also part of the New Narrative. But I believe that it is television and advertising that support the pseudo-sublimation of the animal instincts in us which in any other case would have used nuclear weapons without a second thought. Of course, in order for the simulation to work, violence must be there too, which explains why there is so much violence on TV, in video games, etc.
It is very interesting here to draw a parallel with the Edo period in feudal Japan, which followed centuries of bloody internal strife. During the Edo period, war was all but outlawed. The shogun kept the heads of the lord families in Edo, thereby securing their obedience. The samurai, the warrior class, suddenly out of business (or a way of life), developed the martial arts as a “way of the mind”, incorporating elements of Zen. At the same time the arts flourished, but as a detached approach to life. Zen is the ultimate simulation. It is nothingness. In the beginning of the 21st century, western culture (and with it the rest of the world too) adheres daily to Zen-fascist slogans of the type “Just do it!” (of Nike shoes). It lives for the moment. The future is constructed through computer models that simulate climate change, orbits of threatening meteorites, spreads of epidemics – in other words, threats to survival that do not exist in the NOW. Why is that? Why is the world so afraid, when the world environment has never been more secure? Again, the reason is that history is simulated so that we retain the existence of fear, so important in order to feel anything at all, whilst at the same time we inherently trust the system to protect us from “real” mutual obliteration.
I will explore the results of this phenomenon further. Suffice to say, for the time being, that post-humanism is the logical consequence of simulated history. Neuro-prostheses, moral relativism and the refusal of external realities (i.e. the “sinking” of minds into minds) constitute the new social reality, ever more distanced from the real. The problem, of course, is that the simulation has created such an environment that it is virtually impossible to distinguish what is real and what is not. The image has fused with the object.

Spontaneous dichotomy in scientific debate

Scientific debate is the rigorous process by which scientific theories, ideas and explanations are tested by the scientific community. The debate incorporates a variety of instruments, such as peer review, conference discussions, duplication of experiments and experimental results and, ultimately, discussion of the interpretation of experimental results. An important result of scientific debate is the distillation of a scientific paradigm, which finds its way into school and university textbooks as the “most up-to-date knowledge” in a particular scientific field. However, this result is always partial and always refers to the part of the debate upon which closure has been reached. Paradigms are therefore temporary truces in the ongoing scientific debate. As truces, they are only compromises. In order to understand the nature of this compromise better, I will argue that within every scientific debate a spontaneous dichotomy arises which can be identified by the labels “aristocratic” and “democratic” - labels alluding to the two political factions of ancient Athenian democracy. The “aristocratic” party is elitist, skeptical and Platonic. The “democratic” party is populist, rational and Aristotelian. This dichotomy arises from a cardinal requirement on any interpretation of experimental results which aspires to uphold a theory: the elements of an explanation must be both necessary and sufficient in explaining a natural phenomenon. This dual explanatory requirement is at the root of the spontaneous dichotomy. This is because, although almost everyone can agree on the “necessary” part of the explanans, it is the “sufficient” part that always leaves a door open. Through this door the aristocratic party of the scientific debate will always try to introduce doubt about the finality of the explanans. At the same time the democratic party will interpret the aristocratic skepticism as a betrayal of scientific integrity and accuse the aristocrats of undermining the edifice of scientific debate. The democratic argument always rests on the well-meaning conviction that one cannot have a rational debate without facts. Since the aristocrats will always insist on the possibility of a quintessential “missing” element undermining the sufficiency of the explanans, there can be no closure in the debate. The history of science has, so far, always justified the side of the aristocrats. From the Ptolemaic to the Copernican to the Newtonian to the Einsteinian, paradigms fall because the aristocrats - who are the “revolutionaries” in this case - always doubt the status quo paradigm and always question its validity.

Universe's Book

Modern science is a hypertext narrative describing the birth and evolution of the Universe. Its chapters interconnect in multifarious ways with the many branches of scientific enquiry - including the humanities - and many of the chapters are being written even today. Many important details are still missing, but arguably most of the work has already been done. Some, the “Platonists” (see “Spontaneous dichotomy in scientific debate”), would argue that the Scientific Corpus may be totally revised in the future, and that indeed we may be very near that tipping point in history. I will argue that this could conceivably happen, but that it would only affect a small part of the Book of the Universe. It will revise the understanding we have of its beginning; it might even revise the understanding we have of the origins of life; but it will not re-write the Book of the Universe. The narrative has been written and delivered; what is left to do is editorial work.
It is not therefore surprising that many contemporary philosophers and scientists, notably Richard Dawkins, Daniel Dennett and others, argue that science has already given the world an excellent explanation of just about everything. They argue, in the name of universal peace and brotherhood, that the Book of the Universe should be taught across the globe, to everyone, to children of all nations, so that scientific understanding replaces religious belief. Their argument, which I happen to endorse, is that religion, and irrational belief systems in general, are ideas too dangerous to permeate the nations of a technologically advanced, nuclear-armed world like ours. Science, on the contrary, as narrated in the Book of the Universe, unites all races of the planet in a rational and wondrous way, in a common appreciation of and respect for nature, so instrumental in establishing some kind of peaceful co-existence and, ultimately, survival.
What interests me about their argument is that I find in it a fine example of the interplay between literature, science and the society of the future. But I will return to this point and expand on it later.

The New Narrative

I need to define what I mean by the word “Narrative”. Since the Lascaux cave paintings - or perhaps even earlier - humanity has felt a compelling need to record itself. Human social evolution has thus been intricately related to descriptions of events, personalities and, most importantly, ideas. These descriptions are narratives by definition, i.e. they follow a specific structure which reflects the way human minds understand the world. Although narratives can be both explicit and implicit, their structure is always relational: ideas, events and personalities are always related to one another and to their time. Even when they refer to things past or prophesy things to come, narratives are always interpreted in the present; this is a very important point, which explains why different eras interpret the same narratives differently. Narratives are stored in Libraries. By this term I am generalizing the concept of narrative storage, which takes place in a variety of media. Media are the storehouses of narratives, and every society uses technology to improve its media. Thus a Library can be made up of a collection of media, such as stone or clay tablets, papyri, scrolls, books, museums, architecture, etc.
Narratives are not just for show. Their role is not decorative. Narratives, once born, define society. They are the steam engine of societal progress, or regress. Because they are the cumulative repositories of ideas, narratives are the drivers of change. Whenever great civilizations fell, it was because they somehow lost their Libraries. I mean this in the literal as well as the metaphorical sense. Forgetfulness is the loss of narrative, and this is true not only of various well-documented amnesias but also of societies and cultures at large. Examples abound throughout history, but I will only mention here the case of the Mayan civilization, which collapsed as soon as the greater part of its narrative was lost.
Having established the definition of Narrative, Library and Media, let us now turn our attention to the definition of the “New Narrative”.
I will claim that the New Narrative was born out of the Internet revolution (complemented by cable TV and satellite communications) in the 1990s, and that it differs from all previous narratives because its Libraries are interconnected. The New Narrative, by virtue of its genesis, created a New Hyper-Library in which every other narrative that has survived the test of time is stored somewhere, as a node within a vast network of interconnectedness enabled by contemporary technology. Our society, like the societies of the past, is also defined by its narrative; in our case, the New Narrative of networked media. The most prominent example of how the New Narrative affects societal evolution is advertising. Advertising is a spontaneous synthesis of ideas derived from media archives, the synthesis providing an extension of the New Narrative. A television ad reflects what we believe about ourselves, or what we think we believe. It compels us to consume, because the New Narrative can only survive through the constant maintenance and expansion of its Library; and this can only safely occur in a liberal, free-market world. But I will need to return to the relationship between the interconnectedness of the New Narrative and liberal politics.

Literature, Science and the Society of the Future

Thomas S. Kuhn, in his landmark opus “The Structure of Scientific Revolutions”, surmises from the study of contemporary historiography that the big revolutions in science have been changes of World View. I will argue in this short introduction that World Views are expressed through Literary Narratives and are defined by them. In so doing, Literary Narratives compose a vision for the Future. This composition is undertaken by society and its exponents, namely intellectuals - in the case of our contemporary society, intellectuals linked through the network of media at large. Acting as attractors in a large chaotic system, they synthesize the aspirations and the fears of the present; interpret the memories of the past, which are themselves narratives; and ultimately generate a New Narrative, which is always a blueprint for the Future. This blueprint does not have to be coherent. An intrinsic quality of any narrative is that it may be self-contradictory. It may indeed encompass circular arguments of the Gödelian type without any danger of self-destruction. Narratives hold, like minds do, because they do not have to be logically consistent. By containing the seeds of internal dissent they continuously evolve. One may thus consider society as a work in progress, the work being historiography by hindsight and debate by aspiration. Science, as the main driver of societal evolution since the European Renaissance, is the key arbiter in this narrative process. Technology, the offspring of science, not only empowers societal change but, increasingly since the Industrial Revolution and throughout the 20th century, acts as the principal mirror upon which society reflects during the process of self-evaluation and redefinition. The process of the New Narrative is an interactive composition that takes place in the now. The most explicit case of this process, where narrative connects science, technology and the Vision of the Future, is science fiction literature, or any kind of literature that fuses and integrates scientific ideas - which in our times is virtually all literature. I say “virtually all literature” because I believe that even those who make an effort not to include science and technology in their stories are implicitly partaking in the process of the New Narrative, by assuming a position of self-ascribed “innocence”. One can view them as heroines of the Marquis de Sade, partaking in the orgy by default whilst touting their unassailable virginity. Under “Literature” one should also include - apart from novels, plays and poetry - new forms of literary narrative such as films, video and installations and, in the wider sense of narrative art, the rest of the fine arts too.

Is Earth 6,000 years old?


Darwin was once a deeply religious man. He studied theology at Cambridge and as a young man looked forward to a lifelong career as a cleric. His professional outlook changed dramatically after he embarked, in 1831, on a five-year voyage around the world aboard HMS Beagle. His observations led him to develop the theory of natural selection, the process by which organisms that are best adapted to their environment produce more offspring, while those less suited eventually die out. In a stroke of unparalleled genius he arrived at the scintillating insight that all living beings are connected through a long lineage that goes back billions of years, to a common ancestor. That, indeed, all life is one.

For the next twenty years he meticulously crafted his theory and weighed its implications. On the one hand, evolution tied in well with his egalitarian and anti-slavery ideas. It demonstrated in a scientific and rational way that all humans are born equal, regardless of race or color of skin. But Darwin was deeply concerned, because his insight clashed directly with the biblical description of life. So he decided to keep his work virtually secret, confiding only in a few of his closest friends. Fortunately, Darwin was a man of his time. In the heart of the Victorian era there was much evidence pointing to the uneasy fact that Earth was very much older than the 6,000 years religious scholars estimated it to be. Lyell’s geological notions and Malthus’s theory of demographics spurred Darwin on, while torrents of collected fossils poured into England from all corners of the British Empire, illustrating that Earth had been through many previous epochs in which other, now extinct, creatures lived. Why didn’t the Bible say anything about that? The untimely death of Darwin’s daughter Anne in 1851, at the age of ten, dealt a devastating blow to his religious beliefs. Eight years later he finally decided to publish his seminal book, “On the Origin of Species”.

Just as Darwin predicted, the reaction to his ideas from religious believers was a combination of denial and ridicule - which happened mostly on the other side of the Atlantic. By the beginning of the 20th century many states in the US had passed laws banning the teaching of evolution in schools, a situation which lasted well into the 1960s. It was the Soviets putting Sputnik in orbit that prompted President Eisenhower to re-examine the quality of scientific education for young Americans, and to insist that science, and indeed evolution, be taught at secondary level. America had to regain its scientific edge in order to win the Cold War, and Congress agreed that the constitutional separation between Church and State ought to be enforced in the schools. Federal courts began to hear cases in which teachers had been prosecuted by state school boards for teaching evolution. In a series of famous rulings the courts defended science and declared “Creationism” - the notion that Earth and humans were created by God just as the Book of Genesis says - as well as its pseudoscientific offshoot “Intelligent Design”, to be religion, not science.
And yet, despite all that, recent polls indicate that only 14% of Americans believe that humans evolved over millions of years from less advanced forms of life. By contrast, nearly 45% completely trust the biblical version of Adam and Eve. It seems that no matter what the federal courts decide and what scientists publicly decry, minds are next to impossible to change.
But why is this so important? Because confusion between evidence-based science and faith-based religion can only work to the detriment of the former. Take, for example, alchemy and chemistry. Both deal with matter and the reactions between chemical substances. Their respective laboratories may look strikingly similar to the untrained eye. And yet they differ in something very fundamental. Alchemy was obsessed with the transmutation of base metals into gold; after many centuries of trial and error it produced nothing. Chemistry, on the other hand, once it separated from faith-based alchemy and adopted the scientific method, produced the wonders of our technological civilization, including medicinal drugs that have saved - and still save - millions of lives. Abandoning scientific enquiry, rejecting evidence whenever it clashes with scripture, curtailing academic liberties on the basis of dogma, and obsessing with “proving” religious notions can only result in our world returning to the Middle Ages. That is what is at stake here.

And it is not only an American problem. Greece has the lowest public acceptance of evolution in the EU. Thirty per cent of the population accepts the Genesis narrative, and another twenty per cent strongly doubt that humans evolved from apes. A few years ago the then General Secretary of the Ministry of Education and Religious Affairs (sic), when asked about the Ministry’s view on evolution, responded that “evolution is just a theory”, meaning that it has not been proven yet! Evolutionary theory, although included in school textbooks, is almost never taught, because it is not part of the material on which Greek students are examined. Elsewhere in Europe the situation is only slightly better, and even in the United Kingdom evolution is seriously challenged by Christian fundamentalists and Islamic hardliners.

So is there a way to reconcile evolution with the Bible? In 1996 Pope John Paul II all but accepted the fossil record and the scientifically calculated age of planet Earth as true. And yet the Catholic Church remains ambivalent about the mechanisms that guide the evolution of species. Where most scientists see the mindless hand of chance genetic mutation, Catholics see the guiding spirit of divine providence. The dialogue resembles a conversation between two deaf people separated by a thick wall. Recently, and rather amusingly perhaps, it took the form of advertising banners on double-decker buses across London, with Christian fundamentalists defending the doctrine of God-the-Creator and atheists dismissing biblical belief as nonsense.
At the heart of the debate lies the irreconcilable contrast between two fundamental narratives of Western civilization: one written by science, the other purportedly by the Word of God. It is the Book of Genesis versus the Origin of Species, the former remaining unchanged through the ages, the latter continuously updated as new data - from genetics and fossils - come to light in support of the workings of natural selection. You have to believe Genesis to be true, but you do not have to do the same with evolution. You can doubt evolution all you like, for science is the art of skepticism and doubt. The caveat is that science also requires you to be brave enough to accept the evidence, even when it upsets your most deeply held beliefs.
There are, of course, people of faith who regard the Bible as an allegory and therefore claim to have no qualms accepting the scientific narrative too. But as Richard Dawkins has pointedly commented, they do not seem to truly appreciate the fact that was so profound for Darwin: that if one accepts evolution one must automatically exclude God from life’s equation. Because evolution explains everything; there are no missing factors or mysteries left. The fundamentalists in Texas understand this well, hence their relentless struggle to eradicate Darwinism from their schools’ curricula. So if you happen to be someone who does not believe that Earth is 6,000 years old, and yet you still believe in a biblical God, think again. Unfortunately, you cannot have the cake of science and eat it too.
Published in the Athens News on 27th March 2009

Temnothorax de Condorcet


How the collective intelligence of social animals can provide a new paradigm for ecological decision-making (notes for a forthcoming lecture at Panteion University)

The Marquis de Condorcet (1743-1794) was a pioneer in applying mathematics to the social sciences. His jury theorem states that if each member of a voting group is more likely than not to make a correct decision, then the probability that the majority vote of the group is correct increases as the number of members of the group increases. The amount of information available to each juror also bears on the probability of a correct decision: the more individual jurors know (or understand), the more likely it is that a correct decision will be reached. Democracies are thus theoretically, or potentially, better than dictatorships, because they can solve problems better by means of collective decisions. However, as a recent article in the Economist correctly pointed out, human societies are prone to groupthink, which neutralizes the benefits of the jury theorem.
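For readers who like to see the arithmetic behind the claim, here is a minimal sketch of the jury theorem (my own illustration; the individual competence of 0.55 and the jury sizes are assumed purely for demonstration):

```python
from math import comb

def majority_correct(n: int, p: float) -> float:
    """Probability that a simple majority of n independent jurors,
    each individually correct with probability p, reaches the right
    decision (n is taken to be odd so that ties cannot occur)."""
    majority = n // 2 + 1
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(majority, n + 1))

# With individual competence only slightly above chance (p = 0.55 is an
# assumed figure), group accuracy climbs towards certainty as the jury grows.
for n in (1, 11, 101, 1001):
    print(n, round(majority_correct(n, 0.55), 3))
```

The point of the sketch is simply that, so long as each juror is better than a coin toss and votes independently, adding jurors pushes the majority towards the correct answer; groupthink destroys precisely that independence.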

Groupthink is typically found in modern parliaments, where members vote along party lines regardless of the information available. In society at large, groupthink often manifests itself as a result of brainwashing by the media. For example, once the financial crisis became headline news, the terrorists “disappeared”. Terrorists and terrorism had been the mainstay of media output since 9/11, as if the world were on the brink of being blown up by a mad suicide atom-bomber. Groupthink in the western world meant that citizens conditioned their political thinking under the spectre of the terrorist threat, mainly coming from so-called Islamo-fascists dreaming of resurrecting the caliphate of the Middle Ages. All that has now disappeared, replaced by a new enemy of the people: the amoral bankers. The new fear is of losing everything - job, savings, home - in a black financial hole.

Along with the terrorists, global warming has also disappeared from the foreground. Who cares about the melting of Arctic ice when there is a meltdown of financial institutions? And so on.

So the question arises: how can we, the human race, deal with global problems (poverty, ecology, financial institutions, epidemics, etc.) when we are constantly swayed by the forever trembling cyclopean eye of the media? How can we, the jurors, arrive at the best possible decisions when information is restricted, or conditioned by groupthink?

A possible answer may come from evolutionary biology. Darwinian sociobiologists studying the behaviour of social animals such as bees and ants are beginning to understand the way those creatures choose between various options. The ant species Temnothorax albipennis, studied by Nigel Franks of the University of Bristol, establishes a new nest through information-sharing about the best routes amongst its scouts. When a suitable place is identified, the scouts begin to lead other scouts to the new site. To speed things up, the ants have developed a strategy whereby efficiency is increased by leading scouts back to the nest via the quickest route during the migration phase. Thus, by going to and fro, more scouts become familiar with the route and the speed of migration increases. This dynamic - termed “reverse tandem runs” by the researchers - resembles the loading of connections in a network.
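As a rough illustration only - a toy model of my own, not Franks’s actual analysis - the effect of this kind of route reinforcement on migration speed can be sketched as follows: each scout that already knows the quickest route recruits nest-mates along it, so the pool of knowledgeable scouts snowballs.

```python
import random

def simulate_recruitment(n_scouts=100, rounds=30, lead_success=0.6, seed=1):
    """Toy model of route reinforcement: every scout that already knows
    the quickest route tries to lead one naive nest-mate along it each
    round, so the informed population grows roughly geometrically until
    the whole scouting force is familiar with the route."""
    random.seed(seed)
    informed = 1              # one scout initially knows the quickest route
    history = [informed]
    for _ in range(rounds):
        new_recruits = sum(random.random() < lead_success
                           for _ in range(informed))
        informed = min(n_scouts, informed + new_recruits)
        history.append(informed)
    return history

print(simulate_recruitment())  # number of informed scouts after each round
```

The parameters (colony size, recruitment success rate) are arbitrary assumptions; the qualitative shape - slow start, rapid acceleration, saturation - is the point.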

Perhaps, then, the ants show us the way too. Human networks may provide the answer to overcoming groupthink and to utilizing human collective intelligence and collective decision-making. Human networks have two main characteristics: firstly, most individuals have few connections (only some friends and family members), while a few are highly connected (the “connectors”); secondly, networks are clustered, i.e. they tend to build around specific social groups (e.g. people sharing the same profession, or hobby, etc.). If we manage to find strategies in human networks whereby the connectors increase the load of their connections by means of increased information, then we may be able to get over the groupthink problem. Connectors have an obvious evolutionary motive to perform this task, namely to safeguard - or increase - their prominence and status in the network. To validate information passed to a network by the connectors, one may use the power of clustering: in clusters, peers are able to perform instant validation of information. This is apparent in readers’ commentary on news websites, as well as in wikis.
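To make these two characteristics concrete, here is a small sketch (entirely my own construction; the node counts, cluster sizes and number of connectors are arbitrary assumptions) of a clustered network with a few connectors, showing how information seeded at the connectors reaches every cluster within a handful of steps:

```python
import random

def build_clustered_network(n_clusters=5, cluster_size=20, n_connectors=3, seed=2):
    """Toy clustered network: ordinary nodes are linked only within their
    own cluster, while a few 'connectors' hold one link into every cluster."""
    random.seed(seed)
    clusters = [[f"c{i}_{j}" for j in range(cluster_size)] for i in range(n_clusters)]
    neighbours = {node: set(c) - {node} for c in clusters for node in c}
    connectors = [f"hub{k}" for k in range(n_connectors)]
    for hub in connectors:
        neighbours[hub] = {random.choice(c) for c in clusters}  # one contact per cluster
        for contact in neighbours[hub]:
            neighbours[contact].add(hub)
    return neighbours, connectors

def spread(neighbours, seeds, rounds=6):
    """Simple contagion: every informed node informs all of its neighbours
    in each round; returns the set of nodes reached."""
    informed = set(seeds)
    for _ in range(rounds):
        informed |= {n for node in informed for n in neighbours[node]}
    return informed

neighbours, connectors = build_clustered_network()
reached = spread(neighbours, connectors)
print(len(reached), "of", len(neighbours), "nodes reached")
```

The design choice worth noting is that the connectors do the long-range carrying while each cluster does the local checking; the sketch models only the carrying, with peer validation inside clusters left as a described, not simulated, step.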

Collective decisions are complex, but so are the problems that currently face humanity. Current global institutions, such as the UN, the IMF or the G7, are built on 20th-century ideas and technologies. We must now develop new technologies and ideas in order to tap into the collective intelligence of human networks. The democracy of the 21st century will have to be less representative and more direct, in a totally new way.