The scientific worldview has been
dramatically successful in explaining how things work and in letting us
build technology that has vastly improved our lives. The one mystery it has
no idea how to explain is consciousness -- the fact that we all feel
a "seemingness" to life.
Science also holds that free will -- the idea that there is a
"self" that makes decisions for reasons other than a causal interplay
of physical processes -- is impossible. As I see it, consciousness is
essential to our concept of free will. If we are unaware of having
alternatives and of choosing one of them, then we have not exercised
our will. On the other hand,
consider a very sophisticated computer program. We can point to
inputs from the program's environment, which could include truly
random inputs like radioactive decay, interwoven with an extremely
complicated series of computations. Despite all the complexity, we
still wouldn't say that the computer had free will.
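To make that thought experiment concrete, here is a minimal sketch in Python; every name and number in it is my own illustration rather than anything from the argument above. It interleaves environmental inputs, a random source standing in for something like radioactive decay, and an arbitrarily long chain of deterministic steps, yet nothing about it tempts us to say it chose freely.

```python
import random

def decide(environment_inputs):
    """Toy 'decision maker': combines environmental inputs, a random
    source, and a long chain of deterministic steps to pick an action."""
    # Start from whatever the environment provides.
    score = sum(environment_inputs)

    # Interweave randomness with an arbitrarily complicated computation.
    # random.random() is only a stand-in for a truly random physical
    # source such as radioactive decay.
    for _ in range(1000):
        noise = random.random()
        score = (score * 1.0001 + noise) % 97.0

    # The "choice" follows entirely from inputs, noise, and the rules above.
    return "act" if score > 48.5 else "wait"

print(decide([3.2, 1.7, 0.4]))
```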
Science also has nothing to say about
the idea of values -- what's worthwhile or what's not. Valuing is just
one aspect of the human mind and brain, and one that science does not
address. You could build into a program some abstract notion like
"complexity is better than simplicity" and derive from it the
desirability of preserving complex ecosystems and a preference for
complex civilizations over simple ones (a sketch of such a program
follows at the end of this paragraph). Most of morality concerns the
experiences of beings that we assume to be conscious and to experience
the world the same way we do in the relevant respects (more on that
below). But essential to the very idea of morality is choice -- free
will. If something happens that is beyond our control, we cannot be
said to have moral responsibility for it.
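Here is that sketch: a minimal, hypothetical Python illustration in which the rule "complexity is better than simplicity" is hard-coded, with a complexity metric and names of my own choosing. A preference for the more complex ecosystem follows mechanically from the rule; the program ranks options, but nothing in it corresponds to finding anything worthwhile.

```python
def complexity(description):
    """Crude stand-in for a complexity measure: count distinct words.
    The metric is chosen purely for illustration."""
    return len(set(description.split()))

def prefer(options):
    """Apply the built-in rule 'complexity is better than simplicity'
    by ranking options on the complexity score."""
    return max(options, key=complexity)

ecosystems = [
    "algae",
    "algae fish birds mammals fungi insects",
]
# The program mechanically picks the more complex ecosystem;
# it ranks, but it does not value.
print(prefer(ecosystems))
```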
Moral responsibility as we humans think
of it requires free will. Free will requires consciousness. All three
are fundamentally foreign to the scientific method yet central to our
conscious, lived experience, and they are foreign and central in
exactly the same way.
The requirements do not run in the
opposite direction. We can imagine conscious experience without free
will -- we can imagine having no control over what we think about. That
does, however, sound very alien to our own experience: even if we
were completely deprived of sensory input and of any ability to
influence the outside world, we could still decide what to think
about. We can also easily imagine free will without morality -- we
could choose our actions on any basis at all, with no regard for
right or wrong. But moral
responsibility requires the other two.
As a footnote, most of morality
concerns the experiences of beings that we assume to be conscious. At
the heart of reducing animal suffering is the idea that animals are
conscious and experience suffering -- if they do not, then there is no
obstacle to doing to them things that we would hate to have done to
ourselves. When we hear
them cry or whimper, our concern is that they are feeling the way we
feel when we cry or whimper. On the other hand, if someone builds a
very sophisticated robot that emulates an animal, then we
congratulate the builder if it cries or whimpers when an animal
would, but we don't think the robot is suffering. To the extent we
feel some sympathy for HAL as Dave disassembles him in "2001: A
Space Odyssey", it is because we assume HAL is truly conscious,
as when he says "I can feel it". When it comes to human
beings, we have a very elaborate sense of moral and immoral ways to
treat each other, based primarily on assuming their conscious
experience is just like ours. The common assumption that others
experience the world as we do, and the way that assumption guides our
moral actions, are interesting but not part of the main thrust of my
argument.