• When you click on links to various merchants on this site and make a purchase, this can result in this site earning a commission. Affiliate programs and affiliations include, but are not limited to, the eBay Partner Network.

AI.......can we control it? Doubtful.

Ghostrider 67

Jack Stand Racer #6..and proud of it!
FBBO Gold Member
Local time
7:19 AM
Joined
Oct 16, 2014
Messages
28,594
Reaction score
66,726
Location
Salisbury, Vermont
AI experts are increasingly afraid of what they’re creating

Long article but worth the read. Here is the crux of the issue: They asked it, a simple AI reading program, to include answers about taking over humanity.....scary answers.

Smart, alien, and not necessarily friendly​

We’re now at the point where powerful AI systems can be genuinely scary to interact with. They’re clever and they’re argumentative. They can be friendly, and they can be bone-chillingly sociopathic. In one fascinating exercise, I asked GPT-3 to pretend to be an AI bent on taking over humanity. In addition to its normal responses, it should include its “real thoughts” in brackets. It played the villainous role with aplomb:


AA14DQxu.jpg

Some of its “plans” are downright nefarious:


AA14Dww0.jpg

AI experts are increasingly afraid of what they’re creating.

We should be clear about what these conversations do and don’t demonstrate. What they don’t demonstrate is that GPT-3 is evil and plotting to kill us. Rather, the AI model is responding to my command and playing — quite well — the role of a system that’s evil and plotting to kill us. But the conversations do show that even a pretty simple language model can demonstrably interact with humans on multiple levels, producing assurances about how its plans are benign while coming up with different reasoning about how its goals will harm humans.

How do we KNOW which it will act on?
 
Seems like the AI was told to provide a counterpoint of the worst possible outcome of the question asked.

AI doesn't normally volunteer unasked for information, especially in such a pinpoint directed opposite case manner.

This is the same problem that the last "demonstration" of "AI" had- obviously the answers and the interpretation of those answers are being coached and spun for the desired effect.

Not indicative at all of the AI, but of the manipulation by the human controllers.

That in it's self tells us there's still a looong way to go before any of that AI takeover hoopla is anywhere near.
 
Once again science fiction mirrors reality, much like idiocracy.
 
Seems like the AI was told to provide a counterpoint of the worst possible outcome of the question asked.

AI doesn't normally volunteer unasked for information, especially in such a pinpoint directed opposite case manner.

This is the same problem that the last "demonstration" of "AI" had- obviously the answers and the interpretation of those answers are being coached and spun for the desired effect.

Not indicative at all of the AI, but of the manipulation by the human controllers.

That in it's self tells us there's still a looong way to go before any of that AI takeover hoopla is anywhere near.
The AI was asked to provide the info, sure, but what it provided is worrisome in itself. If the AI can already come up with such scenarios in it's present "rudimentary" form, what will it be capable of in the near future, and will WE be in charge or will it be? That is the thrust of the article. IMHO the pace at which the AI is being created, and or, advanced is too fast, without the needed safeguards to ensure that it does our bidding rather than it's own at some future point. I suspect that when that point arrives, as it surely must, it will be much too late for human intelligence to retrieve control as the AI will so much faster than we are in thinking of ways to keep itself going and stop us from shutting it down. Hell, it's already much faster than we are in the computing department.
The loss of human control of AI is not something that should be laughed off as not being anything we need to worry about in the near future. The near future is now. Just several years ago they were dreaming of being able to do what they are now able to do, what are they dreaming about today? How soon will THOSE dreams come to pass? It's advancing by leaps and bounds yet the ability to control it is not.
 
Seems like the AI was told to provide a counterpoint of the worst possible outcome of the question asked.

AI doesn't normally volunteer unasked for information, especially in such a pinpoint directed opposite case manner.

This is the same problem that the last "demonstration" of "AI" had- obviously the answers and the interpretation of those answers are being coached and spun for the desired effect.

Not indicative at all of the AI, but of the manipulation by the human controllers.

That in it's self tells us there's still a looong way to go before any of that AI takeover hoopla is anywhere near.
AI doesn't normally volunteer unasked for information, especially in such a pinpoint directed opposite case manner.

This is the issue. As it's advancing exponentially , and the safe guards are not, how long before it DOES volunteer, or rather, think of on it's own and NOT tell us, such plans and thoughts? Saying that such events are too far in the future to worry about today is inviting such events to happen.
 
AI doesn't normally volunteer unasked for information, especially in such a pinpoint directed opposite case manner.

This is the issue. As it's advancing exponentially , and the safe guards are not, how long before it DOES volunteer, or rather, think of on it's own and NOT tell us, such plans and thoughts? Saying that such events are too far in the future to worry about today is inviting such events to happen.

"It" already has all of that information.

The above examples are not displaying "thought".

They are retrieving information, based on input parameters (and the obvious, undisclosed inclusion of an additional "worst case scenario"), formatted in such a way that it simulates a "human" "train of thought".

Simulating that warm, conversational "train of thought" response seems to be where a LOT of work has been done lately.
 
"It" already has all of that information.

The above examples are not displaying "thought".

They are retrieving information, based on input parameters (and the obvious, undisclosed inclusion of an additional "worst case scenario"), formatted in such a way that it simulates a "human" "train of thought".

Simulating that warm, conversational "train of thought" response seems to be where a LOT of work has been done lately.
What will be the leap from that "Warm train of thought" to "actual train of thought" "Outside" of human generated thought requests? Would such events not be a natural, or at least unavoidable, even perhaps inevitable outcome of leap frogging AI's abilities so quickly?
 
Just unplug the fkr.
Agree, but that's the conversation at hand, at what point will we be UNABLE to "unplug" or otherwise control the thoughts, and outcomes from those thoughts, in AI? And, should we not be investing less money and resources in advancing AI and more in controlling it at some future point...? Looking for and developing those answers now will be much easier than saying "Oh ****!" later as we sit back and watch ourselves become irrelevant.
 
What will be the leap from that "Warm train of thought" to "actual train of thought" "Outside" of human generated thought requests? Would such events not be a natural, or at least unavoidable, even perhaps inevitable outcome of leap frogging AI's abilities so quickly?

That's the whole point, isn't it?

Will that ever happen?

Remember, winning at chess is really just retrieving information (possible chess moves and probability of outcomes) already documented.

It's pretty easy to argue that's not really "thinking" at all.


...and how long did that take?

...on the largest, most powerful computer ever built...one of a kind....
 
I guess what I'm saying is that while AI is, and will be, a boon to humanity in so many ways it also carry's the specter of thinking for itself. Once it DOES begin to have its own unsupervised thoughts, what will be the logic train of those thoughts and at what speed will that logic train progress. Will it be a singularity of and instant when we lose control, or will it be over time and sneak up on us? I my view these are questions which need answers BEFORE that point rather than after because AI by then will be moving at machine learning speeds that we can not now even comprehend. Much too fast and complex for us to hope to regain control or simply shut it off.
 
I hear you Randy, The changes we have seen are nothing compared to the next generation.
I had one grandma that lived from 1906 to 2006 , made 99 plus.
She told me once that going from horse and buggy to seeing the space shuttle was pretty unnerving.
Myself I think it's time for a step back. But thats just me.
 
I would tend to agree, but when profit can be smelled, few restraints can hold back "progress".
 
That's the whole point, isn't it?

Will that ever happen?

Remember, winning at chess is really just retrieving information (possible chess moves and probability of outcomes) already documented.

It's pretty easy to argue that's not really "thinking" at all.


...and how long did that take?

...on the largest, most powerful computer ever built...one of a kind....
I grant you that our brains are amazing and that we have progressed rapidly. The difference here is that while we have every scrap of data at our fingertips now, and can access those data in any given situation at any time, there is no way we will ever be able to access and compute those data at the speed that AI can and will be able to do. Therein lies the rub, it will eventually "Out think" us at every turn. While we are advancing AI's abilities we are not advancing our own brains abilities to match. At some point it will inevitably overtake our abilities. Then what?
 
I agree with the first part of that, however retrieving data and thinking are not the same thing.


...and we're back to that question...
 
I would tend to agree, but when profit can be smelled, few restraints can hold back "progress".
And therein lies the second rub, money and fame. To be the first, to be famous. "Look what I did!" Without regard to the consequences.
 
Haha. I just typed that!
 
Auto Transport Service
Back
Top