Review Article | Peer-Reviewed

Artificial Intelligence and a Weapon of Mass Destruction

Received: 8 December 2023     Accepted: 23 December 2023     Published: 5 February 2024
Abstract

In today's global reality, the boundaries between artificial intelligence and conventional software are not clearly defined. Experts in the field of AI are trying not only to understand the nature of intelligence but also to create intelligent entities, which is why major problems arise in reaching an unambiguous understanding of the technical regulation of artificial intelligence. Nick Bostrom (Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute (FHI), a multidisciplinary research center that enables exceptional mathematicians, philosophers, and scientists to think carefully about global priorities and big questions for humanity; the FHI hosts teams working on AI safety, biosecurity, macrostrategy, AI policy, the ethics of digital minds, and other technological and foundational questions) argues that serious attention must be paid to the risk that the actions of an explosively developing artificial superintelligence could lead to the arbitrariness of machines and, as a result, cause irreversible damage to human civilization or even an existential catastrophe: the death of humanity. If humanity wants to survive the advent of superintelligent machines in its world, the risks posed by such realities must be taken into account.

Published in American Journal of Chemical and Biochemical Engineering (Volume 8, Issue 1)
DOI 10.11648/ajcbe.20240801.11
Page(s) 1-14
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2024. Published by Science Publishing Group

Keywords

Artificial Intelligence (AI), ChatGPT, OpenAI, Robot Chemist, EU Position on Artificial Intelligence Risks, "Potentially Catastrophic Damage", Henry A. Kissinger

References
[1] Ada Lovelace, in full Ada King, countess of Lovelace, original name Augusta Ada Byron, Lady Byron // Available from: https://www.britannica.com/biography/Ada-Lovelace [Accessed 22 December 2023].
[2] Alan Turing // Available from: https://www.britannica.com/biography/Alan-Turing [Accessed 22 December 2023].
[3] Joseph Weizenbaum and the Emergence of Chatbots // Available from: https://doi.org/10.1093/oso/9780190080365.003.0004 [Accessed 22 December 2023].
[4] "Deep Blue" defeated Garry Kasparov in chess match // Available from: https://www.history.com/this-day-in-history/deep-blue-defeats-garry-kasparov-in-chess-match [Accessed 22 December 2023].
[5] AI Index Report 2023 // Available from: https://aiindex.stanford.edu/wp-content/uploads/2023/04/HAI_AI-Index-Report_2023.pdf [Accessed 22 December 2023].
[6] International Electronic Commission - IEC // Available from: https://www.iec.ch/homepage [Accessed 22 December 2023].
[7] ISO/IEC TR 24028:2020 standard // Available from: https://webstore.iec.ch/publication/67138 [Accessed 22 December 2023].
[8] Chat GPT // Available from: https://en.wikipedia.org/wiki/ChatGPT#cite_note-guardianpos-2 [Accessed 22 December 2023].
[9] Jan Hatzius, Joseph Briggs, Devesh Kodnani, Giovanni Pierdomenico // Global Economics Analyst: The Potentially Large Effects of Artificial Intelligence on Economic Growth (Briggs/Kodnani) // Available from: https://www.key4biz.it/wp-content/uploads/2023/03/Global-Economics-Analyst_-The-Potentially-Large-Effects-of-Artificial-Intelligence-on-Economic-Growth-Briggs_Kodnani.pdf // https://www.ansa.it/documents/1680080409454_ert.pdf [Accessed 22 December 2023].
[10] How the sudden rise of AI is shaking your white-collar world // Available from: https://www.afr.com/technology/will-the-sudden-rise-of-useful-ai-shake-our-cosy-white-collar-world-20230126-p5cfnj [Accessed 22 December 2023].
[11] The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation // Available from: https://img1.wsimg.com/blobby/go/3d82daa4-97fe-4096-9c6b-376b92c619de/downloads/MaliciousUseofAI.pdf?ver=1553030594217 [Accessed 22 December 2023].
[12] The INTERPOL Face Recognition System (IFRS) - global criminal database // Available from: https://www.interpol.int/How-we-work/Forensics/Facial-Recognition [Accessed 22 December 2023].
[13] Artificial Intelligence Index Report 2023 // Stanford University // Available from: https://aiindex.stanford.edu/wp-content/uploads/2023/04/HAI_AI-Index-Report_2023.pdf [Accessed 22 December 2023].
[14] Zhu, Q., Huang, Y., Zhou, D. et al. Automated synthesis of oxygen-producing catalysts from Martian meteorites by a robotic AI chemist. Nat. Synth (2023)// Available from: https://doi.org/10.1038/s44160-023-00424-1 [Accessed 22 December 2023].
[15] Oren Etzioni. How to know if artificial intelligence is about to destroy civilization // Available from: https://www.technologyreview.com/2020/02/25/906083/artificial-intelligence-destroy-civilization-canaries-robot-overlords-take-over-world-ai/ [Accessed 22 December 2023].
[16] Artificial Intelligence Index Report 2021 // CHAPTER 7: AI POLICY AND NATIONAL STRATEGIES // Available from: https://aiindex.stanford.edu/wp-content/uploads/2021/03/2021-AI-Index-Report-_Chapter-7.pdf [Accessed 22 December 2023].
[17] Martin V. Butz // Towards Strong AI // Available from: https://link.springer.com/article/10.1007/s13218-021-00705-x [Accessed 22 December 2023].
[18] Helping law enforcement understand what challenges derivative and generative AI models could pose // Available from: https://www.europol.europa.eu/cms/sites/default/files/documents/Tech%20Watch%20Flash%20-%20The%20Impact%20of%20Large%20Language%20Models%20on%20Law%20Enforcement.pdf [Accessed 22 December 2023].
[19] US Senate leader Schumer pushes AI regulatory regime after China action // Available from: https://www.reuters.com/world/us/senate-leader-schumer-pushes-ai-regulatory-regime-after-china-action-2023-04-13/ [Accessed 22 December 2023].
[20] Mira Murati // OpenAI’s research, product and safety teams // Available from: https://apnews.com/article/openai-cto-mira-murati-chatgpt-gpt4-dalle-0e701dd406b4a1a2d625e779de0c5164 [Accessed 22 December 2023].
[21] Geoffrey Hinton is a pioneer of deep learning // MIT Technology Review // Available from: https://www.technologyreview.com/2023/05/02/1072528/geoffrey-hinton-google-why-scared-ai/ [Accessed 22 December 2023].
[22] Henry Alfred Kissinger // Available from: https://www.goodreads.com/book/show/71083945-the-age-of-ai [Accessed 22 December 2023].
[23] Henry A. Kissinger. How the Enlightenment Ends Philosophically, intellectually - in every way - human society is unprepared for the rise of artificial intelligence. Available from: https://www.theatlantic.com/magazine/archive/2018/06/henry-kissinger-ai-could-mean-the-end-of-human-history/559124/ [Accessed 22 December 2023].
[24] AlphaGo is a narrow AI // Available from: https://www.deepmind.com/research/highlighted-research/alphago [Accessed 22 December 2023].
[25] The idea of applying ethics to artificial intelligence. Available from: https://www.techopedia.com/definition/32768/deepmind [Accessed 22 December 2023].
[26] What Does AlphaGo Mean? Available from: https://www.techopedia.com/definition/31955/alphago [Accessed 22 December 2023].
[27] “Gestalt” // Available from: https://www.britannica.com/dictionary/gestalt [Accessed 22 December 2023].
[28] Natalie Sidamonidze. Artificial intelligence is a challenge and some methodological aspects of its realization. Transactions. Georgian Technical University. AUTOMATED CONTROL SYSTEMS - No 1(28), 2019.
[29] A robot broke a child's finger at a chess tournament. Available from: https://www.ixbt.com/news/2022/07/21/robot-slomal-palec-rebenku-na-shahmatnom-turnire-v-moskve.amp.html [Accessed 22 December 2023].
[30] Kissinger, H. A. How the Enlightenment Ends / H. A. Kissinger // Journal the Atlantic. 06/01/2018. p. 12-16.
[31] Go is an abstract strategy board game for two players // Available from: https://en.wikipedia.org/wiki/Go_(game) [Accessed 22 December 2023].
[32] Artificial intelligence chatbot that was originally released by Microsoft Corporation // Available from: https://en.wikipedia.org/wiki/Tay_(chatbot) [Accessed 22 December 2023].
[33] Alpha Zero is a computer program developed by artificial intelligence research company DeepMind to master the games of chess, shogi, and go. Available from: https://en.wikipedia.org/wiki/AlphaZero [Accessed 22 December 2023].
[34] Artificial Intelligence (AI) technology // Available from: https://futureoflife.org/open-letter/open-letter-autonomous-weapons-ai-robotics/ [Accessed 22 December 2023].
[35] Irving John Good // Available from: https://en.wikipedia.org/wiki/I._J._Good [Accessed 22 December 2023].
[36] Large-scale risks and towards benefiting life// Available from: https://futureoflife.org/about-us/ [Accessed 22 December 2023].
[37] the Future of AI // Available from: https://futureoflife.org/event/ai-safety-conference-in-puerto-rico/ [Accessed 22 December 2023].
[38] Stuart Russell, Daniel Dewey, Max Tegmark // Research Priorities for Robust and Beneficial Artificial Intelligence. // Available from: https://futureoflife.org/data/documents/research_priorities.pdf?x89399 [Accessed 22 December 2023].
[39] Research Priorities for Robust and Beneficial Artificial Intelligence // Available from: https://futureoflife.org/open-letter/ai-open-letter/ [Accessed 22 December 2023].
[40] National Security and Artificial Intelligence // Available from: https://novapublishers.com/shop/national-security-and-artificialintelligence/ [Accessed 22 December 2023].
[41] The Center for AI Safety // Available from: https://www.safe.ai [Accessed 22 December 2023].
[42] four key categories of catastrophic AI risk// Available from: https://www.safe.ai/ai-risk [Accessed 22 December 2023].
[43] AI race - safety regulations, international coordination, and public control of general-purpose AIs // Available from: https://www.safe.ai/ai-risk [Accessed 22 December 2023].
[44] The Rhodes Declaration // Available from: https://en.wikipedia.org/wiki/Dialogue_of_Civilizations [Accessed 22 December 2023].
[45] A graph placement methodology for fast chip design// Available from: https://www.nature.com/articles/s41586-022-04657-6 [Accessed 22 December 2023].
[46] Siemens Digital Industries Software portfolio // Available from: https://www.plm.automation.siemens.com/global/en/products/ [Accessed 22 December 2023].
[47] Product Lifecycle Management – PLM // Available from: https://www.plm.automation.siemens.com/global/en/industries/automotive-transportation/automotive-suppliers/electronics-manufacturing-planning.html [Accessed 22 December 2023].
[48] Manufacturing Operations Management – MOM // Available from: https://www.plm.automation.siemens.com/global/en/resource/manufacturing-operations-management-with-opcenter-execution-discrete/107834 [Accessed 22 December 2023].
[49] CAD software for mechanical design // Available from: https://www.plm.automation.siemens.com/global/en/our-story/offices.html [Accessed 22 December 2023].
[50] Basic problems in AI safety have yet to be solved. Available from: https://www.safe.ai [Accessed 22 December 2023].
[51] Ramaz Khetsuriani: The brain has such abilities that should not be the result of evolution! (In Georgian) // Available from: https://iberiana.wordpress.com/health/khetsuriani/ [Accessed 22 December 2023].
Cite This Article
  • APA Style

    Matsaberidze, M. (2024). Artificial Intelligence and a Weapon of Mass Destruction. American Journal of Chemical and Biochemical Engineering, 8(1), 1-14. https://doi.org/10.11648/ajcbe.20240801.11


    ACS Style

    Matsaberidze, M. Artificial Intelligence and a Weapon of Mass Destruction. Am. J. Chem. Biochem. Eng. 2024, 8(1), 1-14. doi: 10.11648/ajcbe.20240801.11


    AMA Style

    Matsaberidze M. Artificial Intelligence and a Weapon of Mass Destruction. Am J Chem Biochem Eng. 2024;8(1):1-14. doi: 10.11648/ajcbe.20240801.11


  • @article{10.11648/ajcbe.20240801.11,
      author = {Mamuka Matsaberidze},
      title = {Artificial Intelligence and a Weapon of Mass Destruction},
      journal = {American Journal of Chemical and Biochemical Engineering},
      volume = {8},
      number = {1},
      pages = {1-14},
      doi = {10.11648/ajcbe.20240801.11},
      url = {https://doi.org/10.11648/ajcbe.20240801.11},
      eprint = {https://article.sciencepublishinggroup.com/pdf/10.11648.ajcbe.20240801.11},
abstract = {In today's global reality, the boundaries between artificial intelligence and conventional software are not clearly defined. Experts in the field of AI are trying not only to understand the nature of intelligence but also to create intelligent entities, which is why major problems arise in reaching an unambiguous understanding of the technical regulation of artificial intelligence. Nick Bostrom (Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute (FHI), a multidisciplinary research center that enables exceptional mathematicians, philosophers, and scientists to think carefully about global priorities and big questions for humanity; the FHI hosts teams working on AI safety, biosecurity, macrostrategy, AI policy, the ethics of digital minds, and other technological and foundational questions) argues that serious attention must be paid to the risk that the actions of an explosively developing artificial superintelligence could lead to the arbitrariness of machines and, as a result, cause irreversible damage to human civilization or even an existential catastrophe: the death of humanity. If humanity wants to survive the advent of superintelligent machines in its world, the risks posed by such realities must be taken into account.
    },
     year = {2024}
    }
    


  • TY  - JOUR
    T1  - Artificial Intelligence and a Weapon of Mass Destruction
    AU  - Mamuka Matsaberidze
    Y1  - 2024/02/05
    PY  - 2024
    N1  - https://doi.org/10.11648/ajcbe.20240801.11
    DO  - 10.11648/ajcbe.20240801.11
    T2  - American Journal of Chemical and Biochemical Engineering
    JF  - American Journal of Chemical and Biochemical Engineering
    JO  - American Journal of Chemical and Biochemical Engineering
    SP  - 1
    EP  - 14
    PB  - Science Publishing Group
    SN  - 2639-9989
    UR  - https://doi.org/10.11648/ajcbe.20240801.11
    AB  - In today's global reality, the boundaries between artificial intelligence and conventional software are not clearly defined. Experts in the field of AI are trying not only to understand the nature of intelligence but also to create intelligent entities, which is why major problems arise in reaching an unambiguous understanding of the technical regulation of artificial intelligence. Nick Bostrom (Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute (FHI), a multidisciplinary research center that enables exceptional mathematicians, philosophers, and scientists to think carefully about global priorities and big questions for humanity; the FHI hosts teams working on AI safety, biosecurity, macrostrategy, AI policy, the ethics of digital minds, and other technological and foundational questions) argues that serious attention must be paid to the risk that the actions of an explosively developing artificial superintelligence could lead to the arbitrariness of machines and, as a result, cause irreversible damage to human civilization or even an existential catastrophe: the death of humanity. If humanity wants to survive the advent of superintelligent machines in its world, the risks posed by such realities must be taken into account.
    
    VL  - 8
    IS  - 1
    ER  - 


Author Information
  • Department of Chemical and Biological Technologies, Faculty of Chemical Technology and Metallurgy, Georgian Technical University, Tbilisi, Georgia
