
Thursday, 15 February 2024

Expert: AI Cannot Be Controlled, Could Spark Major Disaster

Jakarta - Artificial Intelligence (AI) is currently the subject of wide discussion, and it has moved beyond talk to touch many aspects of everyday life. Researchers, however, warn that its continued development could bring an 'existential disaster' upon humanity.


Illustration: AI is difficult to control


The threat of an existential disaster refers to a situation in which humans can no longer control the development of AI. Roman Yampolskiy, a professor of Computer Engineering and Science at the University of Louisville's Speed School of Engineering, says that his survey of the literature turned up no evidence that AI can be controlled.


"We are facing an event that can almost certainly lead to an existential disaster. It's no wonder many consider this the most important problem humanity has ever faced. The outcome could be prosperity or extinction, and the fate of the universe depends on it," said Yampolskiy as quoted by detikINET from Newsweek.


He argued that AI should not be developed without a clearly established need, even though many people already use and build on it. Yampolskiy noted that the technology remains poorly understood, poorly defined, and under-researched.


In his upcoming book, 'AI: Unexplainable, Unpredictable, Uncontrollable', he explains that pushing AI forward without restraint has the potential to alter society dramatically.


"Why do so many researchers assume that the problem of AI control can be solved? As far as our knowledge goes, there is no evidence of that. Before embarking on the quest to build controlled AI, it is important to demonstrate that this problem can be solved first," Yampolskiy emphasized.


"Besides being better combined with statistics stating that AI development is almost certain to happen, we should first support significant AI security efforts," he added.


Yampolskiy said that if humans become accustomed to accepting AI answers without explanations, treating AI as an oracle, they will have no way of knowing which answers are wrong and which have been manipulated.


Furthermore, as AI systems grow more capable, their autonomy will increase while our control over them decreases, and that is what creates existential safety risks.


"Less intelligent humans cannot permanently control super-intelligent artificial intelligence. This is not because we cannot figure out how to make super-intelligent artificial intelligence safe, but because there is actually no event that allows it. Super-intelligent artificial intelligence does not revolt, but from the beginning, it is difficult to control," he concluded.


In his view, the most realistic way to reduce AI risk is to sacrifice some AI capability in exchange for better control. Yampolskiy suggests that AI systems could be built with transparent 'undo' options that are easy to understand in human language.


"We may not achieve 100 percent safe AI, but we can make AI safer to the extent of our efforts which are much better than doing nothing. We need to use this opportunity wisely," Yampolskiy concluded.
