Many people, scientists included, argue that the development of artificial intelligence and robotics will end in the supremacy and reign of robots as they take over most human duties in industry and in our social lives. But could robots feel pleasure (or satisfaction) in power and superiority, as humans do? And could they take pleasure in the benefits of power, such as wealth, as humans do?
We don't have AI. We have machine intelligence and "deep learning," but those lack a number of prerequisites for developing self-awareness, ambition, or desire. These concerns about AI being "mankind's last invention" rest on the notion that the "hard problem of consciousness" is not really hard: that it is just a critical mass of simpler abilities that will organically reach some tipping point by itself in the near future, if for no other reason than the sheer amount of computing power on hand.
What this fails to grasp is that the human mind is not much like a computer; it is closer to a highly optimized pattern-matching mechanism. Computers are organized very differently. They have speed, accuracy, and tirelessness in their favor, but they cannot learn in the same generalized way. They have almost no concept of causality, only of statistical probabilities. Causal calculus is in its infancy and still requires curated data modeling; I see that alone as an impediment to rapid progress.
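To make that concrete, here is a minimal Python sketch (entirely made-up data, my own illustration, not any real system) of why statistical association alone can't answer a causal question: two variables driven by a common cause correlate strongly even though neither causes the other.

```python
import random

# Simulate a common cause: "summer heat" drives both ice cream
# sales and drowning incidents. Neither causes the other.
random.seed(0)
heat = [random.gauss(0, 1) for _ in range(10_000)]
ice_cream = [h + random.gauss(0, 0.5) for h in heat]
drownings = [h + random.gauss(0, 0.5) for h in heat]

def correlation(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

# A pure pattern-matcher sees a strong association (~0.8) and might
# "conclude" that ice cream causes drowning. Only a model that knows
# about the confounder (heat) can tell the difference.
print(correlation(ice_cream, drownings))
```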
Yesterday I asked Siri to show me "wineries south of here." It showed me two wineries to the north of my location at the time. It understood "here" and "wineries" but not "south": either Siri has no concept of directionality, or it silently defaulted to "closest" because there was nothing immediately to the south and lacked the brains to say so. If it can't flexibly and accurately perform such a simple task, despite being powered by the Apple mothership, I don't think we have any immediate worries.
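For perspective, honoring "south of here" is not computationally hard. Here's a toy Python sketch (invented winery names and coordinates, nothing to do with Apple's actual pipeline) that filters candidates to those strictly south of the user and ranks them by rough distance:

```python
import math

# Toy data: (name, latitude, longitude). All coordinates are made up.
user_lat, user_lon = 38.50, -122.80
wineries = [
    ("North Ridge Cellars", 38.62, -122.85),
    ("South Fork Winery",   38.31, -122.74),
    ("Valley Floor Wines",  38.18, -122.90),
]

def rough_distance_km(lat1, lon1, lat2, lon2):
    """Equirectangular approximation; accurate enough at city scale."""
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return 6371 * math.hypot(x, y)

# "South of here" = strictly lower latitude; then rank by distance.
south = [w for w in wineries if w[1] < user_lat]
south.sort(key=lambda w: rough_distance_km(user_lat, user_lon, w[1], w[2]))

for name, lat, lon in south:
    print(f"{name}: {rough_distance_km(user_lat, user_lon, lat, lon):.1f} km away, to the south")
```

A dozen lines of geometry, in other words; the hard part is the language understanding, which is exactly where it failed.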
Humans have done an awful job of ruling the world. Artificial intelligence can't do any worse. Might as well give them a chance. All hail our future robot overlords!
I'm not sure that silicon-based life forms will ever experience things exactly the same way carbon-based life forms do, but if they pass the Turing test, what does it matter?
It all depends on who's programming them, and for what purpose. That worries me a lot.
Spot on. So all the science-fiction predictions about the threat of robots seem to be an escape from the fact that humans are, and will remain, the main threat.
I see it as psychological projection... humans think they know how robots will behave, based on their own behavior patterns. Humans have killed far more other humans than any robot ever has, and it is humans who are the biggest threat to humanity. They are egotistical, power-hungry, greedy, and corrupt. I would trust AI long before I would trust a human. And if robots were to become rogue killing machines, what would stop humans from creating robots to kill the robots?