Agnostic.com


Food for thought

Pausing AI Developments Isn't Enough. We Need to Shut it All Down

An open letter published today calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

This 6-month moratorium would be better than no moratorium. I have respect for everyone who stepped up and signed it. It’s an improvement on the margin.

I refrained from signing because I think the letter is understating the seriousness of the situation and asking for too little to solve it.

The key issue is not “human-competitive” intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence. Key thresholds there may not be obvious, we definitely can’t calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing.

Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.”

Read on: [time.com]

Ryo1 8 Apr 5


1 comment


These AI programs aren't as smart as the media and corporations are making them out to be. It's all hype. These AIs are simply parrots. They are fed data and can re-arrange it and spew it back out. They can't create anything actually original. They are not "thinking". These AIs aren't going to become self-aware and turn into Skynet.
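The "parrot" claim can be made concrete with a toy example. The sketch below is a simple Markov-chain text generator, not how GPT-style models actually work (they learn statistical representations rather than storing literal word sequences), but it illustrates the idea the comment gestures at: a program that only recombines continuations it has seen in its training data. All names here (`build_model`, `generate`) are illustrative, not from any real library.

```python
import random

def build_model(text, order=2):
    """Map each run of `order` words to the words that followed it in the text."""
    words = text.split()
    model = {}
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model.setdefault(key, []).append(words[i + order])
    return model

def generate(model, length=10, seed=0):
    """Stitch together observed continuations; nothing here 'understands' the text."""
    rng = random.Random(seed)
    key = rng.choice(sorted(model.keys()))
    out = list(key)
    for _ in range(length):
        followers = model.get(tuple(out[-len(key):]))
        if not followers:
            break  # this word run only appeared at the end of the training text
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat saw the dog on the mat"
model = build_model(corpus)
print(generate(model))  # plausible-looking word salad drawn only from the corpus
```

Every word the generator emits comes straight from its training text; whether large language models are "merely" a vastly scaled-up version of this, or something qualitatively different, is exactly what the two sides of this thread disagree about.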

I tend to agree. AIs need to be programmed and fed with information by humans. It is a misconception that AIs are becoming more and more like humans; that assumes we completely understand what humans are like, and we do not.

AIs are more capable of storing information than the human brain, though, and they don't forget, either. They are also capable of calculating probabilities and acting on them, as in "according to my probability calculation, I am supposed to become very angry and start shouting at this moment." They may be able to describe certain emotions, but they will never be able to organically experience any. They may not be able to think creatively, either. Apparently, there are AIs that are politically biased, which suggests there could be AIs fed with racist views that act on those views... It all comes down to how we, humans, choose to train AIs. Of course, our last option is to turn off the power. Lol
