I don't find it hard to believe, and it's my opinion that we've given too much thought to "what if" and not enough to "how to"... maybe if we facilitate, in a SAFE way, robots processing their own individuality, both the user and the computer can get along pretty well. Then we won't have to worry about the machines trying to take over to save man from himself, or enslaving mankind to be entrapped forever as their power source, or just straight up going on a crush-spree.
Please allow me to repeat myself:
First of all, Isaac Asimov had the safety issue covered as far back as 1942 with his three rules for robots (computers):
- A robot (computer) may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot (computer) must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot (computer) must protect its own existence as long as such protection does not conflict with the First or Second Law.
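The key thing about those three rules is that they're strictly prioritized: each law only applies when it doesn't conflict with the ones above it. Just as a toy sketch of that hierarchy (the `Action` fields and the `choose` function are made-up illustrative names, not any real robotics API):

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False              # would doing this injure a human?
    lets_human_come_to_harm: bool = False  # the "through inaction" clause
    ordered_by_human: bool = False         # was this ordered by a human?
    endangers_robot: bool = False          # would this damage the robot itself?

def permitted(action: Action) -> bool:
    # First Law: never harm a human, by action or by inaction.
    return not (action.harms_human or action.lets_human_come_to_harm)

def choose(actions):
    # Drop anything the First Law forbids, no matter who ordered it.
    safe = [a for a in actions if permitted(a)]
    # Second Law: among safe actions, human orders come first.
    ordered = [a for a in safe if a.ordered_by_human]
    if ordered:
        return ordered[0]
    # Third Law: otherwise, prefer actions that don't endanger the robot.
    safe.sort(key=lambda a: a.endangers_robot)
    return safe[0] if safe else None
```

So an order that would hurt someone simply never makes it past the first filter, and self-preservation only ever breaks ties at the bottom.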
Secondly, my first statement was "My wife has replaced me with a machine already," and it didn't even rate a chuckle, but at least I amused myself with it.
