Monday, November 22, 2010

I Emote, Move, and Choose, Therefore I Am?

Neil Postman relates the fable of a faraway land that decided to give up its guns and return to the sword. He writes,
"And so the politicians, the soldiers, the businessmen, and the plain folk decided it was best to give up their guns. This did not happen all at once, for people never agree to a thing one hundred percent. Some gun makers, for example, were not pleased until they realized that it was more fun and almost as profitable to return to making swords. And, of course, there were some soldiers who had never learned the art of swordsmanship and who worried about their future. But, eventually, people began to throw away their guns or sell them to the government, which was happy to destroy them. The government even paid the gun makers not to make guns, the way Americans pay their farmers not to grow food. In a short time, all the guns were gone. There were still wars, of course, for even in a fable the demons that make men war on each other cannot be wished away. But for two hundred years, the sweet song of the nightingale was never drowned by the retort of the rifle or the roar of the cannon. And the children slept peacefully, as they had done many years before."
The irony is that this actually happened in Japan in the 17th century. For Postman and others, this flies in the face of those who believe there is no context in which we can turn back the technological clock, and that technology is beyond the control of those who make machines or use them! This is a wonderful article, and one you must read in full to truly reflect on its themes and ideas! At one point Postman contrasts two starting questions for the decisions we make: we should ask 'why' rather than 'how' we should proceed in our technological endeavors.

And now that I have set the scene...

Beyond Human: Erasing the Line Between Man and Machine

This is part 9 of 9 clips from the series Beyond Human, collected at SpaceRip. This clip explores what the future may hold as we shape a tomorrow in which "robots will walk and work among us."

Together with chapters 6 and 7 of Donald A. Norman's book, Emotional Design, this series elicits different emotions, feelings, and responses. On the one hand, I can recognize the huge benefits such machines already offer, and will increasingly offer, in dealing with dangerous and life-threatening situations, search and rescue missions, and other such contexts in which it is preferable not to put humans at risk. At a stretch, I can even see and appreciate how a more "seemingly human" machine might alleviate the chores and stresses of, say, an elderly person or a physically challenged individual in need of daily living assistance. On the other hand, even in these contexts, I wonder at what point we give control and compassion over to such mechanical devices. In the second example I can imagine, if you will, the "societal convenience" of handing over responsibility for care and mundane tasks to an automaton, but at what point must we question our own sense of humanity and our willingness to provide the unique relationships implicit in human interaction and physical and emotional contact? As my blog title suggests, I am uncomfortable with the idea that the more we give robots what appear to be realistic and appropriate emotional reactions, physical movements, and decision-making capabilities, the more we also appear to be heading in a less than human direction towards each other.


I guess this is the age-old question we need to keep asking: Just because we can, should we? Do we really want and need to hand over the care of our children, and, as suggested by Norman, our children's education, to robots? Are we so enamoured of what appears to be 'progress' that we are willing to forgo the few human interactions and joys that remain?

The term 'robot' derives from the Czech robota, meaning 'drudgery' and 'servitude.' Is this how we wish to view the needs of our elderly and our young? Are we so willing to define and reduce our relationships across generations to the idea of slavery? Maybe we should be asking what makes us human?


In clip #7 from the same Beyond Human series, the question revolves around how to build and define human-robot relationships. I don't think we have figured out the human-human one as yet! Add to that the global human-human issues we have struggled throughout history to resolve, without much success!



Part 6 of the series asks, "What are the equations of emotion, the factorials of feelings, of consciousness itself?" Hmm! Indeed, I am worried that this mathematical metaphor is being applied whilst diminishing the attributes associated with, say, the "Mind" or the "Conscience" or "Compassion" or "Empathy." Insofar as they are considered at all, we are effectively reducing them to mechanical responses devoid of real meaning within the landscape of the New Robotics.

Here's a suggestion: Go to the YouTube pages where these Beyond Human videos are uploaded and read the level of discourse around them! Hmm!

On a slightly less 'existential' level, I also wonder about the trend to robotize children's toys. As Sherry Turkle, a techno-sociologist at MIT, states (as quoted in Future Courses):

"technology is fast propelling us into an entirely new paradigm of child development. 'Today's children are growing up with "psychological machines...they have become accustomed to the idea that objects that are not alive might nonetheless have a psychology and even consciousness."' (p. 33)

Then the critical question becomes, "Do toys that think--or pretend to think--also spur our children to think?" (p. 33)


As you can gather, I am caught in a dilemma between what these "Toys" and "Robots" can actually help us to achieve, enjoy, or solve, and what they take away from us at the same time! Consider this article on Qantas's close call, from George Jonas at the National Post.


We read that,

"They were the human minds in competition with the "beast," as someone nicknamed the computerized aircraft, that had a mind of its own. Like a person, it could be right, wrong, or merely unfathomable." 

because, as the article continues,

"The computer kept warning the pilots that the air-craft's aft centre-of-gravity limit had been reached, a condition that needed to be remedied by a forward transfer of fuel, but this was followed immediately by a warning that the forward transfer pumps were unserviceable. It must have been an eerie form of Catch-22 to orbit an Indonesian island with a hole in the port wing, while the computer kept advising a function it immediately warned couldn't be performed." 


and the article suggests that,

"It wasn't so much man fighting machine as pilots trying to save a plane from its designers....It suggested that computers are quite capable of developing suicidal tendencies."

Ah! There's the rub! Robots and technological machines are still designed and developed by humans! Can we truly design and develop flawless Robots that can operate by Asimov's Four Laws of Robotics?

(Also on page 197 of Norman's Emotional Design.)
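For what it's worth, here is a minimal thought-experiment sketch, entirely my own, of what a literal, priority-ordered encoding of the Four Laws might look like. Every predicate in it (harms_humanity, harms_human, risk_to_self) is a hypothetical placeholder, and that is precisely the rub: the control flow is trivial, while the judgments the predicates stand in for are the part no one knows how to build flawlessly.

```python
# A thought-experiment sketch, not a real robotics API: Asimov's Four Laws
# as a priority-ordered filter over candidate actions. The stub predicates
# are placeholders for judgments ("harm," "risk") we cannot compute reliably.

def harms_humanity(action: str) -> bool:
    return action in {"launch_missiles"}                      # placeholder judgment

def harms_human(action: str) -> bool:
    return action in {"shove_bystander", "launch_missiles"}   # placeholder judgment

def risk_to_self(action: str) -> float:
    return {"enter_fire": 0.9}.get(action, 0.1)               # placeholder estimate

def permitted(action: str) -> bool:
    """Zeroth and First Laws as hard vetoes, checked in priority order."""
    if harms_humanity(action):   # Zeroth Law veto
        return False
    if harms_human(action):      # First Law veto (Zeroth-Law override omitted)
        return False
    return True

def choose_action(candidates: list[str], orders: list[str]) -> str | None:
    """Second Law: prefer obeying orders; Third Law: then minimize risk to self."""
    lawful = [a for a in candidates if permitted(a)]
    obeying = [a for a in lawful if a in orders]
    pool = obeying or lawful          # orders outrank self-preservation
    return min(pool, key=risk_to_self) if pool else None

if __name__ == "__main__":
    # The order to shove a bystander is vetoed by the First Law; the robot
    # obeys the remaining order and enters the fire despite the risk to itself.
    print(choose_action(["shove_bystander", "enter_fire", "stand_by"],
                        orders=["shove_bystander", "enter_fire"]))  # -> "enter_fire"
```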


One other quick thought: an overall philosophy regarding how we approach the reality of Robots and their "existence"; the need for a definition of what is considered human; the issues around ethics, morals, and life; the relationship between any mechanical or digital device and humans; and, finally, the laws and legal ramifications of life versus not-life are surely issues we should hear more discourse about today! Consider, for example, the issue that came to the fore ahead of the 2008 Beijing Olympics, when Oscar Pistorius was initially barred from competing because he ran on bilateral prosthetic legs designed specifically for running.




So, as you can see, I wonder if Robots will say, "I Emote, Move, and Choose, Therefore I Am."
