Although I believe we are a very long way from being replaced as the dominant intelligence on the planet, and people in general tend to fear change, how do you, as part of this community, feel about artificial intelligence? It is coming to be a reality, even if very slowly. [jpartalv.wordpress.com]
I think it is neutral. My main concern is how we treat AI when they become sentient. A lot of moral and ethical questions will have to be answered when we come to that.
So, what if an artificial intelligence developed an ego? Is that cause for unease?
Far from being a threat in itself (as opposed to in the hands of the state), A.I. could lead very quickly to the introduction of a universal income, that is, a guaranteed amount to live on without having to earn it. So all work becomes voluntary.
I'm currently re-reading Max Tegmark's "Life 3.0".

He's convinced me AI is something to be concerned about.

I'm quite convinced we are a long way from "self-aware" AI.

My biggest concern isn't the dystopian apocalypse of Hollywood.

It's that we will create a self-aware entity before we have the wisdom to treat it better than we treat each other.

We piss off an AI of any significant power and we are, frankly, fucked.

This is NOT something we want to do without careful thought and planning.

Hell, what could go wrong?
I fear the self-driving cars, and the first day someone hacks them for their amusement. It WILL happen. If man can make it, man can break it.
I especially fear the day they become MANDATORY. When the Nanny State decides the car is a better driver than I am. If that technology is in any car I buy, my first step will be to locate and rip out the fuse that controls it. I will NEVER, let me make this clear, NEVER ride in a car that drives itself.
One of the main concerns Tegmark discusses in "Life 3.0" is exactly that worry - that we won't write unhackable AI.

Tegmark knows what he's talking about. Look at the mess our internet software is in.

His work is looking at this issue.

I've written my share of code and share his concern.

Microsoft and Apple have led the way with buggy software dumped on a market that effectively does the testing that should have been done before release.

We cannot follow this model with AI.
I am a science fiction writer, and I touch on this topic from time to time from all angles. But speaking now in serious scientific terms, I am convinced that AI will be the stepping stone to achieving Type I Civilization status: that it will help us overcome our tribal instincts by removing our dependence on owning or controlling this or that, eliminating the burden of need and destroying that fantasy called 'economy'.
I see great things in store for us if we can just survive the next 50 years or so. I think all the concerns over AI attaining sentience are the result of rampant paranoia. Think about it.
I did. The AI is only as intelligent as the person that programs it. 99% of all humans are controlled by laziness, greed, and sex. So AI will be used to do tasks that people don't want to do, to make money for the owner, and as sex-bots. But because the inventors themselves are lazy and greedy, and thinking about getting out of work to go try to have sex with somebody, the AI will be poorly designed and expensive. It won't work and will cost too much.
Consider that Google Chrome is supposed to be the best browser in the world, yet I have to manually reload the page every time I turn the computer on to get the latest web page. That's the kind of AI we will have.
A mite on the cynical side there, Paul4747, but I understand. We differ in that I don't minimize people in that way. AI is not a programmed machine, other than its original algorithms. It becomes what it will be based upon what and how it learns, and then how it relates to the information it gains. Yes, there is potential for an unwanted result, but the odds are extremely low.
I hate to be so cynical, but I find myself so often justified. I like people, but I don't trust them to act altruistically. The profit motive is too overwhelming.
It's basically inevitable. The risks of us screwing up badly seem high now. The military don't seem especially likely to screw up. The incentives to make it sooner are very strong compared to making it safer.
People are adapting to it already. Look at our supermarkets, factories, and even our schools, where kids are now placed in front of a computer while teachers read the newspaper. Okay, so 'The Matrix' or 'I, Robot' hasn't quite happened yet, but I see people falling in line as they're currently doing with all the tech changes we've had over the last 20 years. Look, they're already using drones to deliver food, and restaurants without actual people serving you already exist. This is what's truly taking jobs once performed by humans, and the more prevalent AI becomes, the more chaos there will be due to unemployment, and also due to people not learning to do anything for themselves anymore, like with the driverless car.
I think of it in the short term, as the long term has too many twists and turns. Short term, it will take huge numbers of jobs, and I fear that will deprive people of meaning in life even if government figures out ways to give those folks income. In that regard I fear it. Maybe everyone will just stare at their smartphones 24/7.
AI will be a threat in itself. Machine logic is far superior, so once a machine can understand the concept of 'self', we are going to have problems. This comes back to the 'soul' question asked on another post. The soul, whatever it is, is not our consciousness or this ability to have abstract thought, so machines will not need one to be a threat.
Like any tool, it has the potential to be used for our good or our destruction and everywhere in between. The potential for using it to control people and do harm grows greater as we become more dependent on technology and the younger generations become virtually clueless about how to live without it.
AI, like any technology, is in itself neutral in terms of good or bad. All technologies can be used for good or for evil purposes; it is entirely in the hands of the users.