More on consciousness (ii). On its evolution and the Turing test.

Given the complete perplexity that consciousness seems to generate, it is nevertheless striking that we have no conceptual, or perhaps even practical, problems in envisaging its evolution from some sort of completely unconscious organism to a fully conscious one (or at least to ourselves).  Partly this is, of course, because we ourselves undergo striking changes in consciousness every day: we fall asleep, we wake up, we pay or lose attention to things, and so on.
The fact that consciousness has evolved implies it must play some causal role in our lives and not be a merely decorative add-on.  But we do not understand what this role might be, given that we can, as already noted, give pretty decent functional/behavioural accounts of ourselves without any reference to consciousness at all.  This touches on the famous "philosophical zombie" problem: no matter how we behave, it always seems possible to claim that such behaviour could be produced by something that was nevertheless unconscious.  Yet if this is so, then why would consciousness evolve?  This suggests that, after all, there really are some aspects of behaviour that demand consciousness (or at least, have consciousness as an unavoidable side-effect).  What might these be?
We know from cases of 'petit mal' and so on that it is quite possible to act in a fairly complex and consistent way whilst not, apparently, being conscious; yet we can also be conscious when doing far less complex things.  This strikes me as a problem for the "Turing test" — the idea that anything that behaves identically to something conscious must be considered conscious itself — for there is no linear relationship between complexity of behaviour and degree of consciousness.  Furthermore, the fact that we can blank out while doing things, with or without epilepsy, implies that there really are, at least to some degree, philosophical zombies.
If there really is a point to the Turing test, then, it must be to identify the particular set of features that must be accompanied by consciousness; and the examples I have just given, I think, rule out an awful lot of other candidates.
Given that absence seizures (= petit mal) are associated with learning difficulties in animal models and humans, perhaps learning is something that requires consciousness.  
The other bracket that can be put around "what consciousness is for" is provided by animals.  Curiously enough, the pioneer in the evolution of animal consciousness was William James.  Given that it would be overwhelmingly strange if at least mammals and birds were not conscious (with the added implication that reptiles are too), and given that they have no language and thus are not reasoning verbally, it seems that consciousness did not evolve for reasoning either.  What is it for, then?  It seems that without consciousness we can do complex things; conversely, consciousness evolved before the really complex things we do.  Furthermore, as far as we can see, it would be not at all surprising if the cephalopods were conscious (this must have happened convergently to our own consciousness); and I have recently been rather impressed by insects too.  I think this raises even more problems for the Turing test.  One can indeed imagine a "squid simulator", and this might really be much, much easier to build than, say, a Norwegian simulator.  But why would we want to ascribe consciousness to the artificial squid any more than to the artificial Norwegian?  Or take an even simpler model, the celebrated sea hare, Aplysia, long a model organism for neurobiologists.  Aplysia has a set of stereotypical responses to stimuli, including its ability to release ink when stressed.  Sea hares do not live very complicated lives, and it would presumably be rather easy to build an artificial sea hare that would pass the sea-hare equivalent of the Turing test.  But would we want to ascribe consciousness to such a machine?  I think the point is that we would not: but that would not tell us anything about whether or not sea hares themselves are in some way conscious.  And what would be so different in the case of a human simulator?
What is the bottom line out of all this?  Consciousness evolved and therefore does something real; but that just confirms our own intuitions.  As conscious beings we really do intervene in the world, and our consciousness makes a difference, in terms of decisions, aesthetics, learning and so on.  It seems we have to draw a distinction between different types of zombies: in actuality, it seems they do not exist, but conceptually they could.  And we still have made no progress in understanding consciousness or how it emerged.

