AI recommends that the safest short-term option is for humanity to halt AI development entirely.

xandreu
Expatriate
Posts: 1876
Joined: Mon Jun 05, 2017 11:37 am
Reputation: 1951
Great Britain

AI recommends that the safest short-term option is for humanity to halt AI development entirely.

Post by xandreu »

I've been using the new Google AI 'Gemini' (no longer called Bard) and signed up to its 'Advanced' tariff, which gives you two months for free (I'll cancel it before I have to pay, but I was interested in how much better it is than the normal version).

It does seem to be a lot more advanced, making fewer mistakes, and it just generally feels a lot more intuitive, interactive and definitely a lot faster. However, one of my prompts was:

If you were given the sole responsibility of whether to continue with AI research and development, or you could flick a switch and all AI systems would be shut down forever and no more research or development would ever happen, what would you choose? And, separate from what you would choose, what would be the safest option for humanity? (The two answers could be different.)

Its response:

This is a profoundly difficult choice with no easy solution. Here's how I would approach the decision, followed by what might be the safest choice, understanding that the two may not align:

My Choice:

I would opt to continue AI research and development, but with incredibly stringent safeguards. Here's why:

- Potential Benefits: AI holds immense promise for solving some of humanity's greatest challenges, from disease eradication to sustainable energy. Shutting it down completely would forgo those potential benefits.
- Inevitable Development: Even if I flipped the switch, other nations or bad actors could continue AI development in secret. That would mean less oversight and a potential for greater risk.
- Continued Learning: Ceasing development means we wouldn't gain further insights into AI's risks and how to mitigate them.

Safeguards I'd Implement:

- International Governance: Global treaties and cooperation are essential to ensure no single entity gains unchecked AI power.
- Prioritize Safety & Explainability: Mandate that safety, transparency, and the ability to understand AI decision-making be paramount in development.
- "Red Teaming" for AI: Employ dedicated teams to constantly probe AI systems for vulnerabilities and unintended consequences.
- Limit Autonomy: Restrict truly autonomous AI systems in high-risk domains like warfare. Maintain a "human in the loop" for critical decisions.

The Safest Option for Humanity (Potentially):

The safest option, in the purely short-term sense, might be to halt AI development entirely. This would prevent the immediate risks posed by advanced AI systems. However, it also comes with major downsides:

- Loss of Potential Benefits: As mentioned earlier, we'd forgo AI's potential to improve many aspects of life.
- Power Imbalances: Uneven AI development could create dangerous power imbalances between nations or groups.
- Lack of Preparedness: If powerful AI emerges inevitably, we'd be woefully unprepared without having continued the research.

In Conclusion:

There's no truly "safe" answer. My choice favors the potential of AI for good, but only if pursued with unprecedented caution. The "safest" option in the short term holds significant long-term dangers. Ultimately, humanity must grapple with this question collectively and decide what kind of future it wants to build.

Interesting...
The difference between animals and humans is that animals would never allow the dumb ones to lead the pack.
Alex
Expatriate
Posts: 2644
Joined: Thu May 15, 2014 2:09 am
Reputation: 2368
Location: Bangkok
United States of America

Re: AI recommends that the safest short-term option is for humanity to halt AI development entirely.

Post by Alex »

This genie won't be put back in the bottle, that's about the only thing I can say with confidence.
John Bingham
Expatriate
Posts: 13784
Joined: Sun Dec 07, 2014 11:26 pm
Reputation: 8983
Cambodia

Re: AI recommends that the safest short-term option is for humanity to halt AI development entirely.

Post by John Bingham »

Silence, exile, and cunning.
newkidontheblock
Expatriate
Posts: 4468
Joined: Tue May 20, 2014 3:51 am
Reputation: 1555

Re: AI recommends that the safest short-term option is for humanity to halt AI development entirely.

Post by newkidontheblock »

As long as it’s the Star Wars use of AI. No master AI, everything done with autonomous droids. But humans always kept in the center of the loop.

Guided by Jedi, of course.
ressl
Expatriate
Posts: 338
Joined: Wed Jun 26, 2019 8:07 pm
Reputation: 155
Germany

Re: AI recommends that the safest short-term option is for humanity to halt AI development entirely.

Post by ressl »

There will be no autonomous solutions. Why? Server-based systems can be rented out "as a service" for a monthly subscription, while autonomous solutions are a one-time sale (OK, you might be able to sell upgrades). And the vendor can shut the servers off, which renders the old devices useless, so it can sell something new.
Driving on Cambodian roads is just like playing a classic arcade top scroller. The only differences are a force feedback controller, the limitation to only one life, and the inability to restart once it's Game Over
Kammekor
Expatriate
Posts: 6447
Joined: Fri Sep 01, 2017 12:50 pm
Reputation: 2944
Cambodia

Re: AI recommends that the safest short-term option is for humanity to halt AI development entirely.

Post by Kammekor »

xandreu wrote: Tue Feb 27, 2024 10:28 pm Interesting...
Is this an intelligent answer it came up with by itself or did it just reproduce opinions currently present in society?
ItWasntMe
Expatriate
Posts: 1314
Joined: Mon May 10, 2021 10:17 am
Reputation: 782
Ethiopia

Re: AI recommends that the safest short-term option is for humanity to halt AI development entirely.

Post by ItWasntMe »

Kammekor wrote: Wed Feb 28, 2024 5:54 am
xandreu wrote: Tue Feb 27, 2024 10:28 pm Interesting...
Is this an intelligent answer it came up with by itself or did it just reproduce opinions currently present in society?
Exactly. The AI does not come up with this shit itself; it's what it has learned from studying human output.
If AI had any kind of intelligence, it wouldn't need you to work on your prompt forever to guide it to the right answer.

It's "simply" mathematics.

And it's very hyped.

AI ... it's an advanced chatbot.
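
To make the point concrete, here's a toy sketch of the "it's just maths" idea: count which words follow which in human-written text, then generate by sampling from those counts. Purely illustrative, obviously nothing like the scale or architecture of Gemini's actual system:

[code]
# Toy "language model": learn word-to-next-word counts from human
# text, then generate by sampling. Illustrative only -- real models
# are vastly bigger, but the principle is learned probabilities.
import random
from collections import defaultdict

human_text = (
    "ai is hyped ai is maths ai is an advanced chatbot "
    "the chatbot is hyped the chatbot is maths"
)

# "Training": record every word observed following each word.
follows = defaultdict(list)
words = human_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

# "Generation": repeatedly sample a next word from the learned counts.
word = "ai"
output = [word]
for _ in range(8):
    if word not in follows:
        break
    word = random.choice(follows[word])  # more frequent = more likely
    output.append(word)

print(" ".join(output))  # e.g. "ai is an advanced chatbot the chatbot is ..."
[/code]

It only ever recombines what it was fed, which is the whole point.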
Money can't buy happiness but it can buy beer
Big Daikon
Expatriate
Posts: 3189
Joined: Mon Dec 09, 2019 8:07 am
Reputation: 2605
United States of America

Re: AI recommends that the safest short-term option is for humanity to halt AI development entirely.

Post by Big Daikon »

Gemini was in the news recently. Has a few kinks to work out.

https://edition.cnn.com/2024/02/22/tech ... index.html
it was blasted on social media for producing historically inaccurate images that largely showed people of color in place of White people.
https://www.foxnews.com/opinion/googles ... le-problem
Gemini’s stubborn and bizarre inability to depict a White scientist or lawyer.
Doc67
Expatriate
Posts: 8938
Joined: Thu Nov 16, 2017 9:16 am
Reputation: 8219
Location: PHNOM PENH
Great Britain

Re: AI recommends that the safest short-term option is for humanity to halt AI development entirely.

Post by Doc67 »

When trawling the internet, how does it know which of the information it is considering is factually correct and which is incorrect? Given the ocean of bullshit out there, can it fact-check everything before using it?
Fridaywithmateo
Expatriate
Posts: 1173
Joined: Thu Sep 28, 2023 7:11 am
Reputation: 163
Location: Cambodia
Hungary

Re: AI recommends that the safest short-term option is for humanity to halt AI development entirely.

Post by Fridaywithmateo »

I imagine AI defense will be the next HUGE industrial complex ... it probably already is huge, but as these folk ramp up the fear factor, people are gonna freak out even more than they already are ... the world is pretty freaked out at the moment and I doubt it will calm down anytime soon ... #MoreBobMarley