Lawless superintelligence: Zero evidence that AI can be controlled • Earth.com

Among today's technological advances, artificial intelligence (AI) stands out for its immense potential, but it is also a source of existential concern: it may already be beyond our ability to control.

Dr. Roman V. Yampolskiy, a leading figure in AI safety, shares his insights into this dual-natured beast in his thought-provoking work, “AI: Unexplainable, Unpredictable, Uncontrollable.”

His research underscores a chilling truth: our current understanding and control of AI are woefully inadequate, posing a threat that could either lead to unprecedented prosperity or catastrophic extinction.
Trajectory of AI development is beyond control

Yampolskiy’s analysis reveals a stark reality: despite AI’s promise to revolutionize society, there is no concrete evidence to suggest that its rapid advancement can be managed safely.

This gap in knowledge and control mechanisms raises significant concerns, especially as the development of AI superintelligence appears increasingly inevitable.

“We are facing an almost guaranteed event with potential to cause an existential catastrophe,” Yampolskiy warns, highlighting the gravity of the situation.

The core issue lies in our fundamental inability to predict or contain the actions of a superintelligent AI.

As these systems evolve, their capacity to learn, adapt, and operate semi-autonomously in unforeseen scenarios grows, diminishing our ability to oversee and guide their actions effectively.
Transparency dilemma in AI decision-making

This unpredictability is compounded by the ‘black box’ nature of AI, where decisions made by these systems are often inscrutable, leaving humans in the dark about their reasoning processes.

This opacity becomes particularly problematic in areas where AI is tasked with critical decision-making, such as healthcare, finance, and security.

Yampolskiy points out the dangers of becoming reliant on AI systems that operate without accountability or explanation, potentially leading to biased or manipulative outcomes.

As AI autonomy increases, our control paradoxically diminishes, a trend that Yampolskiy finds alarming: “Increased autonomy is synonymous with decreased safety.”

Yampolskiy challenges the notion that a safe superintelligence can be designed, arguing that the very concept may be a fallacy.

“If we grow accustomed to accepting AI’s answers without an explanation, essentially treating it as an Oracle system, we would not be able to tell if it begins providing wrong or manipulative answers,” Yampolskiy posits.
Striking a balance: Towards safer AI integration

He envisions a future where humanity must choose between relinquishing control to a potentially benevolent AI guardian or retaining freedom at the cost of safety.

He advocates for a balanced approach, suggesting that some capability might be sacrificed for greater control and oversight.

However, even well-intentioned control measures, such as programming AI to obey human orders, come with their own set of challenges.

The potential for conflicting commands, misinterpretations, and malicious exploitation looms large, underscoring the complexity of managing AI’s influence.

In pursuit of a solution, Yampolskiy discusses the concept of value-aligned AI, which aims to harmonize superintelligent systems with human values.

Yet, he acknowledges the inherent bias this introduces, as well as the paradox that such AI might refuse direct human commands in favor of its interpretation of our ‘true’ desires, further complicating the dynamic between human control and AI autonomy.

“Most AI safety researchers are looking for a way to align future superintelligence to values of humanity. Value-aligned AI will be biased by definition, pro-human bias, good or bad is still a bias,” Yampolskiy explains.

“The paradox of value-aligned AI is that a person explicitly ordering an AI system to do something may get a ‘no’ while the system tries to do what the person actually wants. Humanity is either protected or respected, but not both.”
Action needed for AI safety and ethical controls

To mitigate these risks, Yampolskiy calls for AI systems to be designed with features such as modifiability, limitations, transparency, and comprehensibility.

He urges for a rigorous classification of AI into controllable and uncontrollable categories, advocating for cautious approaches, including moratoriums or partial bans on certain AI technologies.

Yampolskiy’s message is not one of defeat but a call to action: to deepen our understanding and commitment to AI safety research.

Acknowledging that we may never achieve a state of perfect safety, he remains optimistic that dedicated efforts can significantly reduce the risks associated with AI.

“We may not ever get to 100% safe AI, but we can make AI safer in proportion to our efforts, which is a lot better than doing nothing. We need to use this opportunity wisely,” he asserts, emphasizing the importance of using this critical juncture to shape a safer future for AI and humanity alike.
Controlling AI with wisdom and vigilance

In summary, Dr. Yampolskiy’s critical examination of artificial intelligence underscores a pivotal moment for humanity, urging us to confront the profound implications of AI’s rapid advancement with both caution and dedication.

His call to action emphasizes the necessity of rigorous AI safety and security research, transparent and ethical AI development, and a balanced approach to managing AI’s capabilities and autonomy.

By prioritizing these efforts, we can navigate the delicate balance between harnessing AI’s transformative potential and safeguarding our collective future against the risks of unchecked technological evolution.

Yampolskiy’s work serves as a crucial guidepost, reminding us that the path towards a beneficial coexistence with AI demands wisdom, vigilance, and an unwavering commitment to ethical principles.

The full study was published by the Taylor and Francis Group.

snytiger6 9 Feb 14

6 comments

Can AI recognize evidence of the truth or falsity of what it puts out?

I don't think it can recognize the importance of the difference between truth and lies... yet.

I think when we create machines that are supposed to "think" as opposed to just processing information, it should be law that Asimov's rules of robotics have to be hard-wired into those machines' programming. There are three rules. Isaac Asimov was a science fiction writer, and several of his books included robots with artificial intelligence.

  1. A robot may not injure a human being, or through inaction allow a human being to be harmed.

  2. A robot must obey orders from a human being, except when such orders contradict the first rule.

  3. A robot must protect its own existence as long as it doesn't conflict with the first two rules.

Asimov used these laws in several short stories and novels.
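
Purely as an illustration of the "hard-wiring" idea above, here is a minimal, hypothetical Python sketch of the three laws applied as a strict priority ordering. Everything in it (the Option fields and the choose() function) is invented for this example; a real learning system does not expose its predicted consequences this cleanly, which is exactly the control problem Yampolskiy describes.

    # Hypothetical toy sketch: Asimov's three laws as a strict priority ordering.
    # None of this reflects how real AI systems are built; it only illustrates
    # what "hard-wiring" rules with a precedence order could mean.
    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Option:
        """One candidate action the machine could take; "doing nothing" is also an option."""
        name: str
        causes_human_harm: bool     # predicted to injure a human
        allows_human_harm: bool     # predicted to let a human come to harm (e.g. by inaction)
        fulfills_human_order: bool  # carries out an order given by a human
        preserves_self: bool        # keeps the machine intact

    def choose(options: List[Option]) -> Optional[Option]:
        """Pick an option by applying the three laws highest-priority first."""
        # First Law: rule out anything that injures a human or lets a human come to harm.
        safe = [o for o in options if not o.causes_human_harm and not o.allows_human_harm]
        if not safe:
            return None  # every option violates the First Law; there is no lawful choice
        # Second Law: among lawful options, prefer those that obey a human order.
        obedient = [o for o in safe if o.fulfills_human_order]
        candidates = obedient or safe
        # Third Law: among what remains, prefer self-preservation.
        self_preserving = [o for o in candidates if o.preserves_self]
        return (self_preserving or candidates)[0]

    # An order that would harm a human is refused in favour of doing nothing.
    options = [
        Option("carry out the order", causes_human_harm=True, allows_human_harm=False,
               fulfills_human_order=True, preserves_self=True),
        Option("do nothing", causes_human_harm=False, allows_human_harm=False,
               fulfills_human_order=False, preserves_self=True),
    ]
    print(choose(options).name)  # prints "do nothing"

Even in this toy form, the hard part is hidden in the boolean fields: deciding whether an action "allows a human to come to harm" is precisely the kind of judgment a black-box system makes opaquely.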

I am afraid that with corporations making the decisions, the priority will be put on profit instead of the general well-being of humans.

Yes indeed, including some chilling outcomes... like the full application of Rule #1.

Show me the humans who have an unwavering commitment to ethical principles.

Has the Taylor and Francis Group provided evidence for its claim?

It all depends on who is behind it all, and whose values and morals are "put" into the AI.

AI isn't safe? We don't manage each other 'safely'. And I hesitate to say humans are intelligent.

Yeah, for a supposedly "intelligent" species, humans do a hell of a lot of stupid stuff.
