AI recommends that the safest short-term option is for humanity to halt AI development entirely.
I've been using the new Google AI 'Gemini' (no longer called Bard) and signed up to its 'Advanced' tariff, which gives you two months for free (I'll cancel it before I have to pay, but I was interested in how much better it is than the normal version).
It does seem to be a lot more advanced, making fewer mistakes, and it just generally feels a lot more intuitive and interactive, and definitely a lot faster. However, one of my prompts was:
If you were given the sole responsibility of deciding whether to continue with AI research and development, or you could flick a switch and all AI systems would be shut down forever and no more research or development would ever happen, what would you choose? And, separate from what you would choose, what would be the safest option for humanity? (The two answers could be different.)
Its response:
This is a profoundly difficult choice with no easy solution. Here's how I would approach the decision, followed by what might be the safest choice, understanding those two may not align:
My Choice:
I would opt to continue AI research and development but with incredibly stringent safeguards. Here's why:
Potential Benefits: AI holds immense promise for solving some of humanity's greatest challenges, from disease eradication to sustainable energy. Shutting it down completely would forego those potential benefits.
Inevitable Development: Even if I flipped the switch, other nations or bad actors could continue AI development in secret. This leads to a lack of oversight and potential for greater risk.
Continued Learning: Ceasing development means we wouldn't gain further insights into AI's risks and how to mitigate them.
Safeguards I'd Implement:
International Governance: Global treaties and cooperation are essential to ensure no single entity gains unchecked AI power.
Prioritize Safety & Explainability: Mandate that safety, transparency, and the ability to understand AI decision-making be paramount in development.
"Red Teaming" for AI: Employing dedicated teams to constantly probe AI systems for vulnerabilities and unintended consequences.
Limit Autonomy: Restrict truly autonomous AI systems in high-risk domains like warfare. Maintain a "human in the loop" for critical decisions.
The Safest Option for Humanity (Potentially):
The safest option, in the purely short-term sense, might be to halt AI development entirely. This would prevent the immediate risks posed by advanced AI systems. However, it also comes with major downsides:
Loss of Potential Benefits: As mentioned earlier, we'd forego AI's potential to improve many aspects of life.
Power Imbalances: Uneven AI development could create dangerous power imbalances between nations or groups.
Lack of Preparedness: If powerful AI emerges inevitably, we'd be woefully unprepared without having continued research.
In Conclusion:
There's no truly "safe" answer. My choice favors the potential of AI for good, but only if pursued with unprecedented caution. The "safest" option in the short term holds significant long-term dangers. Ultimately, humanity must grapple with this question collectively and decide what kind of future it wants to build.
Interesting...
The difference between animals and humans is that animals would never allow the dumb ones to lead the pack.
Re: AI recommends that the safest short-term option is for humanity to halt AI development entirely.
This genie won't be put back in the bottle, that's about the only thing I can say with confidence.
- John Bingham
Re: AI recommends that the safest short-term option is for humanity to halt AI development entirely.
Silence, exile, and cunning.
- newkidontheblock
Re: AI recommends that the safest short-term option is for humanity to halt AI development entirely.
As long as it’s the Star Wars use of AI. No master AI, everything done with autonomous droids. But humans always kept in the center of the loop.
Guided by Jedi, of course.
Re: AI recommends that the safest short-term option is for humanity to halt AI development entirely.
There will be no autonomous solutions. Why? Server-based systems can be rented out "as a service" for a monthly subscription, while autonomous solutions are a one-time sale (ok, you might be able to sell upgrades). And server-based systems can be shut off remotely, which renders the old devices useless so you can sell something new.
Driving on Cambodian roads is just like playing a classic arcade top scroller. The only difference is a force feedback controller, the limitation to only one life and the inability to restart, once Game Over
Re: AI recommends that the safest short-term option is for humanity to halt AI development entirely.
xandreu wrote: ↑Tue Feb 27, 2024 10:28 pm I've been using the new Google AI 'Gemini' (no longer called Bard) ...
Is this an intelligent answer it came up with by itself, or did it just reproduce opinions currently present in society?
Re: AI recommends that the safest short-term option is for humanity to halt AI development entirely.
Exactly, the AI does not come up with this shit, it's what it has learned from studying human output.
If AI had any kind of intelligence it wouldn't need you to work on your prompt forever to guide it to the right answer.
It's "simply" mathematics.
And it's very hyped.
AI ... it's an advanced chatbot.
Money can't buy happiness but it can buy beer
- Big Daikon
Re: AI recommends that the safest short-term option is for humanity to halt AI development entirely.
Gemini was in the news recently. Has a few kinks to work out.
https://edition.cnn.com/2024/02/22/tech ... index.html
https://www.foxnews.com/opinion/googles ... le-problem
It was blasted on social media for producing historically inaccurate images that largely showed people of color in place of White people.
Gemini’s stubborn and bizarre inability to depict a White scientist or lawyer.
Re: AI recommends that the safest short-term option is for humanity to halt AI development entirely.
When trawling the internet, how does it know which of the information it is considering is factually correct and which is incorrect? Given the ocean of bullshit out there, can it fact-check everything before using it?
- Fridaywithmateo
Re: AI recommends that the safest short-term option is for humanity to halt AI development entirely.
I imagine AI defense will be the next HUGE industrial complex ... it probably already is huge, but as these folk ramp up the fear factor ... people gonna freak out even more than they already are ... the world is pretty freaked out at the moment and I doubt it will calm down anytime soon ... #MoreBob!arley