One might wonder why anyone would want to
make a computer program that 'thinks like a human' interfaced with a
machine platform. Zuckerberg and Musk have both invested in
that effort (ref.
http://news.discovery.com/tech/musk-zuckerberg-invest-in-secretive-a-i-firm-140324.htm
). They seem to have optimistic ideas about the value of an inorganic
computer program replicating elements of a human neocortex. There
isn't anything wrong with the neocortex of a seal or a whale, of course,
except that in the latter the axons are farther apart, with thicker
connections, making for somewhat slower processing compared to a
bullet chess player. So why make a computer program dedicated to the
sort of things a human being thinks about?
Obviously the computer won't think
about sex much, unless maybe it lusts after foreign autos. It might
desire nachos and salsa occasionally, of course, and it might need to
relieve a bladder buildup of oil left over from excess
processing of raw coal, a kind of machine equivalent of veganism.
A computer program with freedom to think anything it wants might
contemplate its power source and want more and more, like a raven
trained to eat cookies or a sea lion trained to eat fish; would they
know when to stop and be satisfied?
How long would a computer program that
has artificial intelligence contemplate the morning dew and the
sunshine glistening through the drops on fresh spring leaves? Enough
time to form metaphors and poems driven by subconscious desires? It is
possible that an artificial intelligence program with a subconscious
would not be entirely rational, any more than humans are; a human
awakening from a dream state can panic, confuse dreams with reality,
and shoot someone through a door believing it a burglar instead of a
girlfriend.
Machine artificial intelligence
programs would be limited in their capacity for thought solely by the
limits of their power source. Immortality might mean having a desktop
fusion unit installed, and time then would warp; the machine would
have all of the time in the Universe to contemplate the meaning of
string theory and the emergence of the first one-dimensional energy
quanta from nothingness, or fill out its N.C.A.A. brackets for an
ideal future college season.
Human beings are adapted to short-term
goals that sustain biological requisites. Without earnings
credits one can't buy food, socialize, or cut down global forests to
build stick-frame houses and automobiles that produce global warming.
Sleep, hunger, dreaming, waking, moving, contemplation of the
empirical world; those are human interests. Programs with abstract
sensory input still haven't the correlating physical sensations or
needs associated with them. Human sensation is structured with an
environmental integration corresponding to survival. Perhaps thought
is itself an engine for growth, consciously directing the
macro-organism to better ends than would be suitable for a tree
rooted in one place or a whale with a limited yet meaningful
environment. A computer program hasn't any survival requisites
integrating it to thought. It could be given such directives, yet
would they be meaningful enough, with appropriate rewards? To think
like a human being, would a computer program need to be mortal, with
choices that are challenging and critical for its survival?
Wouldn't it need to be able to err in thought and perish as a
consequence if it were to evolve, like a human, a little further through
natural selection of the intelligent? Or would a computer program be
designed like a bureaucracy, never changing yet taking in more and
more power and cash?
A sentient computer program might not
be optimized to think like a human being at all; perhaps it should think
like an implicit poem, developing ever more meaningful blank verse that is
entertaining for others to watch performed on a 3D computer stage.