Review Article
Artificial Intelligence and a Weapon of Mass Destruction
Issue:
Volume 8, Issue 1, June 2024
Pages:
1-14
Received:
8 December 2023
Accepted:
23 December 2023
Published:
5 February 2024
Abstract: In today's global reality, the boundaries between artificial intelligence and software are not clearly defined. Experts in the field of AI are trying not only to understand the nature of intelligence but also to create intelligent entities, which creates major difficulties for an unambiguous understanding of the technical regulation of artificial intelligence. Nick Bostrom (Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute (FHI), a multidisciplinary research center that enables exceptional mathematicians, philosophers, and scientists to think carefully about global priorities and big questions for humanity; the FHI is a platform for teams working on AI safety, biosecurity, macrostrategy, AI policy, the ethics of digital minds, and other technological and foundational questions) argues that serious attention must be paid to the risk that the actions of an explosively developing artificial superintelligence could lead to the arbitrariness of machines and, as a result, cause irreversible damage to human civilization or even an existential catastrophe: the death of humanity. If humanity wants to survive the advent of superintelligent machines, then the risks posed by such realities must be taken into account.