Tuesday, February 28, 2017


NYTIMES 5FEB2017. Artificial Intelligence (AI) and robotics applied to weaponry. 

The US and China (and likely others) are making great advances. The future of warfare looks like it could be robotic weapons that make their own decisions. At present, new weapons are targeted by humans, but some can already adjust on-the-fly as conditions change.

Other uses of AI and robotics include medical technology and self-driving vehicles. These sound like beneficial applications, although any technology can have a dark side in the wrong hands.

Is weaponization of AI and robotics a good idea? Are we dooming Homo sapiens either to be destroyed by the machines or to be made subservient to them? If weapons are capable of making decisions, will they at some point be able to make the "push the button" decision? We know of numerous examples in which a technological malfunction made it appear that the U.S. was under attack, and only the hesitation of a human to initiate the defensive response averted catastrophe. How likely is it that robots will be programmed to hesitate because of a "feeling" that something just doesn't make sense? 

If you are a Star Trek: The Next Generation fan, think about the character Data. Data is an android with a "neural net" instead of a brain, programmed to respond instantly to every permutation of information and conditions. If Data's neural net concludes that it is being attacked, it responds without hesitation. How many times have we seen Captain Jean-Luc Picard use intuition, emotion, and common sense to avert a wrong reaction? 

Perhaps robots of the future will have the programming to intuit, emote, and use common sense; but if they do, will we humans still be in charge? 

So U.S., China, and all others: do you really want to continue down the path of robotic weapons that use artificial intelligence to "think" for themselves? Can't we humans be human enough to move toward ending warfare once and for all? 

Think about it while you still can.

