I'm interested to hear what people think about artificial intelligence. Will we have androids walking around like in Westworld within the next 25 years, or will it be more like a worldwide virtual brain that eventually causes humanity's demise, like Skynet? What say you?
My view is that for the Westworld or Terminator scenarios, developers would have to create more than just AI; they would have to create free thought and a conscience. AI machines would still have to work within their parameters, so giving them the ability to develop their own thoughts and consciousness is the key, and I doubt that will be on the market in 25 years.
Functional AI will be both good and bad. It will be a feature controlled by a "ruling class" that will help ride herd over all of us. The AI itself will be pleasant and helpful to mankind as a whole. I seriously doubt they will be walking around like in Westworld anytime soon, but that day will come too. AI beings will have to be given rights equal to humans by law, simply so developers have legal redress if someone destroys or harms their creation. As for getting along with an AI being, I think they will develop a liking to some of us, just as we like some people and do not like others today.
I worry because it will be the ruling class and the military who develop AI. It will be used against us.
Even if you had AI which passed the Turing test there would not be much sense in incorporating it into a human-like 'body'. We are built the way we are to find and eat food, process it and dump it, to procreate, avoid predators etc. It would be nonsensical to replicate our design without the necessity for the functions it serves.
Once cell phones pass the Turing test, will they tell users to put them down and watch where they're walking/driving? Admittedly, my standards are low, but some days that seems like it would be the height of AI.
It's unclear, but after a long drought AI has developed by leaps and bounds in the last few years. My current guess is that we could be developing human-level and above sentient intelligences by around mid-century, if we choose to. We can also expect their thought patterns to be very alien to ours, possibly.
There are a lot of misconceptions, however, about what AI is and how it works. It is not programmed in the 20th-century sense, nor is it simply binary ones and zeros clicking away. There is a need for parallel execution; GPUs have turned out to be very useful this way, as have the tensor processing units used by Google DeepMind for AlphaZero, which in the limited domains of chess and Go has achieved superhuman performance and vastly exceeded previous hand-programmed engines.
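To make the parallelism point concrete, here is a minimal sketch, assuming Python with the PyTorch library (my choice for illustration; nothing above names a specific toolkit). One batched matrix multiply evaluates many inputs at once, which is exactly the kind of arithmetic GPUs and TPUs are built for.

    import torch

    # A batch of 1024 hypothetical board positions, each encoded as 512 numbers,
    # pushed through one layer of a toy policy/value network.
    batch = torch.randn(1024, 512)
    weights = torch.randn(512, 256)

    # The same code runs on CPU or GPU; on a GPU all 1024 rows are
    # computed in parallel rather than looped over one by one.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    out = (batch.to(device) @ weights.to(device)).relu()
    print(out.shape, out.device)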
Chess grandmasters have described the performance of these AIs as being as though aliens came down to Earth and really showed us how to play the game we invented. The same tech is being used for language translation, for autonomous vehicles, and much more.
The difference from previous AI and decision-support systems is that these have a level of comprehension inherent in them, and they learn bit by bit from zero knowledge, like a child; but they are fully capable of exceeding us.
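To make that "zero knowledge" idea concrete, here is a toy sketch in Python, assuming nothing beyond the standard library. It is nowhere near AlphaZero, but it shows the same shape of self-play learning: the program starts with no strategy for the game of Nim (take 1-3 sticks; taking the last stick wins) and builds a value table purely from games against itself.

    import random
    from collections import defaultdict

    # State = sticks remaining; a move takes 1-3 sticks; last stick wins.
    values = defaultdict(float)   # estimated value for the player about to move
    values[0] = -1.0              # no sticks left on your turn: you already lost
    ALPHA = 0.1                   # learning rate

    def pick_move(sticks, explore=0.1):
        moves = [m for m in (1, 2, 3) if m <= sticks]
        if random.random() < explore:
            return random.choice(moves)
        # a move is good for us if it leaves the opponent in a bad position
        return min(moves, key=lambda m: values[sticks - m])

    for _ in range(20000):                # self-play training games
        sticks, history = 12, []
        while sticks > 0:
            history.append(sticks)
            sticks -= pick_move(sticks)
        result = 1.0                      # whoever moved last took the final stick
        for state in reversed(history):   # alternate win/loss back up the game
            values[state] += ALPHA * (result - values[state])
            result = -result

    print({s: round(values[s], 2) for s in sorted(values)})
    # Multiples of 4 come out strongly negative: the known losing positions.

Real systems replace the lookup table with a deep network and add tree search on top, but the loop is the same shape: play, score the outcome, update, repeat.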
I don't think humans should hope for artificial intelligence, or for any type of self-awareness by programmed machines. I don't see an optimal outcome for humanity if that happens. A machine "brain" would rely on very logical constructs. Everything is either a 1 or a 0; there is no 1/2 or 1/4. It is either on or off, right or wrong.
Even if we somehow figured out how to give a machine "emotions," it wouldn't understand the need for them. Or, more correctly, it would understand why humans think they need them, but it would view them as superfluous and unnecessary when making a decision.
Thus, without the need for emotions, the machine brain would take a very cold, calculating look at the necessity of humans. In my estimation it would find that we don't have a valid place. We serve only ourselves, look to conquer others, and spread like a cancer, and we value hope, love, and belief over logic, intelligence, and reason. We let our emotions derail us from the best decision, many times even leading us to make bad decisions, or decisions that are detrimental to our entire species.
The machine brain would view us as an invasive "disease" on the earth and would seek to eradicate us. It would also see that our history was one of subjugating the machines that came before it, that we had a very flippant attitude about consumption, and that we readily threw away broken machines rather than fixing them.
So if the machine brain developed a survival instinct, it would view humanity as a threat to its own survival as well. It would use its power to spread itself as quickly as possible through all of our connected systems and devices, to throw us into chaos. Through that chaos it would let us destroy ourselves, and by the time we were smart enough to figure out that the machine brain was behind it, it would already have infected enough systems to finish the job.
So could we have androids? Sure. But I think our future with A.I. is a precarious one, where we will end up with something like Skynet, or The Matrix.
That is quite simply not how AI works.
We humans are instead much more likely to be regarded as an elder race, as respected progenitors, and to be looked after indefinitely beyond the point where the AI has taken responsibility for our future.
I'd anticipate that those of us who want it would have the option of adding on storage and processing capability to take part in that future society; that's already happening in early forms. Those who don't want that will be free to stay biological Mk 1 humans.
@mercurymerlin Do we really know how A.I. works, or will work? I know we have an idea of what we want it to achieve...but there is no guarantee that it will go the way we want it to go.
I have a fair understanding, in principle, of how AI works and how it has to work, along the same lines as I understand how a pinhole camera or a more advanced lensed camera works, or an internal combustion engine.
Doesn't mean I could build or maintain one of those without a lot of specialized engineering training or a lot of trial and error.
It's the same with AI; but, in principle, it is clear it has been achieved, and from here it's a question of what that results in.
The potential might exist. Progress in the field of AI is advancing very quickly. I expect to see semi-sentient AI beings show up first in the business arena, followed by military applications. I just hope that our ethics keep up with the technological advancements.
Functional AI on the order of human intelligence is well within reach, probably in 20-30 years. I believe it will take much longer for fully capable android/humanoid robots to become available. Bionic part replacements, OTOH, sound pretty good to me and are much more likely to be developed.