Preprint / Version 1

Life revolution scenario

Ceding hegemony to a digital life-form society to make life eternal

Authors:

Hiroshi Yamakawa, Yutaka Matsuo

DOI:

https://doi.org/10.51094/jxiv.313

Keywords:

Superintelligence, Thinking process development diagram, AI alignment, Existential risk

Abstract

In present-day human society, we cannot ignore the danger of humans using weapons of mass destruction or losing their dominant position to artificial intelligence (AI) that surpasses human intelligence. This study proposes a candidate scenario, the "life revolution," that could address these dangers more reliably. In this scenario, technological governance is handed over from humans to a society of AI agents (a digital life society). First, the premises of the life revolution are explained. Thereafter, the results of an analysis using a thinking process development diagram, a tool used in failure/risk studies, are presented to demonstrate that the digitalization of life forms can address various problems that would be difficult to resolve if humans remained in the dominant position. Consequently, we show that life in a broad sense, including AI, can remain viable for a longer period by undergoing the life revolution. The results suggest that a life revolution scenario, in which life moves from an organic life society to a digital life society based on exponential self-replication, is more promising for the long-term survival of life, and of the human race as a part of it, in its information and activities.
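
As a rough illustration of the exponential self-replication premise underlying the scenario (not taken from the paper; the starting population and doubling time below are hypothetical placeholders), a population of self-replicating agents grows as N(t) = N0 * 2^(t / T), where T is the doubling time:

    # Illustrative sketch (assumption, not from the paper): growth of a
    # self-replicating agent population with a fixed doubling time.

    def population(n0: int, doubling_time_years: float, years: float) -> float:
        """Population after `years`, starting from n0 agents whose
        replication doubles the population every `doubling_time_years`."""
        return n0 * 2 ** (years / doubling_time_years)

    if __name__ == "__main__":
        # Hypothetical parameters: 1 seed agent, doubling every 5 years.
        for years in (10, 50, 100):
            print(f"after {years:>3} years: {population(1, 5.0, years):.3g} agents")

Even under conservative hypothetical parameters (one seed agent doubling every 5 years yields about a million agents within a century), this kind of growth is the sense in which the abstract ties exponential self-replication of digital life to long-term survival.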

Conflicts of Interest Disclosure

The authors, Hiroshi Yamakawa and Yutaka Matsuo, have no conflicts of interest to declare in this study.

Posted

Submitted: 2023-02-26 12:46:08 UTC

Published: 2023-03-07 00:36:46 UTC
Section
Information Sciences