AI to Learn (AI2L): Guidelines and Practice for Human-Centered AI Utilization as a Learning Support Tool—Four Pillars of Black-Box Elimination, Accountability, Information Protection, and Energy Efficiency
DOI: https://doi.org/10.51094/jxiv.1435

Keywords: AI to Learn (AI2L), Model Transparency (Black-Box Elimination), Accountability, Information Protection and Privacy, Green AI (Energy Efficiency and Sustainability)

Abstract
Contemporary generative AI—especially large language models (LLMs)—is rapidly permeating diverse domains such as research, education, and healthcare owing to its remarkable efficiency and expressive power. At the same time, these systems raise serious challenges: their black-box nature, the risk of privacy leakage from input data, ethical concerns arising from outputs whose rationale is opaque, and the substantial energy consumption and environmental burden associated with large-scale deployment. This paper proposes AI to Learn (AI2L), a set of guidelines that deliberately limits AI to a learning-support role for humans and eliminates any black-box components from the final deliverables. AI2L rests on four principles: (1) humans retain ultimate decision-making authority; (2) human verification ensures accountability for AI outputs; (3) the risk of information leakage is rigorously minimized; and (4) AI usage is managed for energy efficiency and long-term sustainability. We examine several concrete implementations of AI2L—including Grad-CAM-based image interpretation, the discovery of novel insights via symbolic regression, the development of AI-generated yet human-auditable code, and reversible anonymization for data protection—and analyze them from both practical and theoretical perspectives. Recent studies showing that foundation models fail to grasp underlying physical laws despite high predictive accuracy further underscore the necessity of AI2L's approach. By acknowledging AI's limitations and hazards while harnessing its strengths, AI2L provides a robust framework for the ethical, sustainable, and human-centered integration of AI into society.
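The sketch below illustrates the Grad-CAM-based image interpretation named in the abstract (Selvaraju et al., 2017). It is a minimal reading of the technique, not the authors' implementation: the ResNet-18 model, the choice of layer4 as the target layer, and the random input tensor are illustrative assumptions. The deliverable is a normalized heat map that a human reviewer can overlay on the image to audit what drove the prediction, in line with the AI2L requirement that nothing in the final output remain a black box.

```python
# Minimal Grad-CAM sketch in PyTorch. Model, target layer, and input
# are illustrative assumptions, not the paper's implementation.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
activations, gradients = {}, {}

def save_activation(module, inp, out):
    activations["feat"] = out.detach()

def save_gradient(module, grad_in, grad_out):
    gradients["feat"] = grad_out[0].detach()

# Hook the last convolutional stage; the layer choice is an assumption.
model.layer4.register_forward_hook(save_activation)
model.layer4.register_full_backward_hook(save_gradient)

x = torch.randn(1, 3, 224, 224)        # stand-in for a preprocessed image
scores = model(x)
scores[0, scores.argmax()].backward()  # gradient of the top-class score

# Global-average-pool the gradients to weight each feature map, keep
# only positive evidence, then rescale the map to the input size.
weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # heat map in [0, 1]
```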
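The discovery of insights via symbolic regression can be sketched similarly with an off-the-shelf tool. The PySR library and the toy target law below are assumptions chosen for illustration (the reference list points to Schmidt and Lipson's method and AI Feynman as representative approaches); the essential point is that the deliverable is a closed-form expression a human can verify term by term, not an opaque predictor.

```python
# Symbolic-regression sketch: recover a closed-form law from data.
# PySR and the toy target function are illustrative assumptions.
import numpy as np
from pysr import PySRRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(200, 2))
y = X[:, 0] ** 2 + np.sin(X[:, 1])  # hidden "law" the search should recover

model = PySRRegressor(
    niterations=40,
    binary_operators=["+", "-", "*"],
    unary_operators=["sin"],
)
model.fit(X, y)
print(model.sympy())  # e.g. x0**2 + sin(x1): auditable, unlike a neural fit
```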
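Finally, reversible anonymization for data protection can be read as pseudonymization with a locally held mapping: direct identifiers are replaced by random tokens before anything is sent to an external model, and only the data holder can invert the mapping. The class and field names below are hypothetical, offered as a sketch under that reading rather than as the paper's method.

```python
# Reversible anonymization (pseudonymization) sketch. The class and the
# record fields are hypothetical; the token map must never leave the host.
import secrets

class ReversibleAnonymizer:
    def __init__(self):
        self._forward = {}  # identifier -> token (kept local)
        self._reverse = {}  # token -> identifier

    def anonymize(self, value: str) -> str:
        if value not in self._forward:
            token = f"ID_{secrets.token_hex(4)}"
            self._forward[value] = token
            self._reverse[token] = value
        return self._forward[value]

    def deanonymize(self, token: str) -> str:
        return self._reverse[token]

anon = ReversibleAnonymizer()
record = {"name": "Taro Yamada", "note": "follow-up in two weeks"}
safe = {"name": anon.anonymize(record["name"]), "note": record["note"]}
# `safe` may be shared with an external AI service; reversal stays local.
assert anon.deanonymize(safe["name"]) == record["name"]
```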
Conflicts of Interest Disclosure
The authors declare that they have no conflicts of interest related to this work.
References
R. Bommasani, D. A. Hudson, E. Adeli, et al., “On the Opportunities and Risks of Foundation Models,” arXiv preprint arXiv:2108.07258, Aug. 2021. doi: https://doi.org/10.48550/arXiv.2108.07258
OpenAI, “GPT‑4 Technical Report,” arXiv preprint arXiv:2303.08774, Mar. 2023. doi: https://doi.org/10.48550/arXiv.2303.08774
N. Carlini, F. Tramèr, E. Wallace, et al., “Extracting Training Data from Large Language Models,” in Proc. 30th USENIX Security Symp., 2021, pp. 2633‑2650. doi: https://doi.org/10.48550/arXiv.2012.07805
E. M. Bender, T. Gebru, A. McMillan‑Major, and S. Shmitchell, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” in Proc. ACM Conf. Fairness, Accountability, and Transparency (FAccT), 2021, pp. 610‑623, doi: https://doi.org/10.1145/3442188.3445922
E. Strubell, A. Ganesh, and A. McCallum, “Energy and Policy Considerations for Deep Learning in NLP,” in Proc. 57th Annu. Meeting Assoc. Comput. Ling. (ACL), 2019, pp. 3645‑3650, doi: https://doi.org/10.18653/v1/P19‑1355.
R. Schwartz, J. Dodge, N. Smith, and O. Etzioni, “Green AI,” Communications of the ACM, vol. 63, no. 12, pp. 54‑63, 2020, doi: https://doi.org/10.1145/3381831.
T. Kuru, “Lawfulness of the Mass Processing of Publicly Accessible Online Data to Train Large Language Models,” International Data Privacy Law, Oct. 2024. doi: https://doi.org/10.1093/idpl/ipae013
R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra, “Grad‑CAM: Visual Explanations from Deep Networks via Gradient‑Based Localization,” in Proc. IEEE Int. Conf. Computer Vision (ICCV), 2017, pp. 618‑626, doi: https://doi.org/10.1109/ICCV.2017.74
S. M. Lundberg and S.-I. Lee, “A Unified Approach to Interpreting Model Predictions,” in Advances in Neural Information Processing Systems 30 (NeurIPS), 2017, pp. 4765-4774, doi: https://doi.org/10.48550/arXiv.1705.07874
C. Rudin, “Stop Explaining Black Box Machine Learning Models for High‑Stakes Decisions and Use Interpretable Models Instead,” Nature Machine Intelligence, vol. 1, no. 5, pp. 206‑215, 2019, doi: https://doi.org/10.1038/s42256‑019‑0048‑x
K. Vafa, P. G. Chang, A. Rambachan, and S. Mullainathan, “What Has a Foundation Model Found? Using Inductive Bias to Probe for World Models,” arXiv preprint arXiv:2507.06952, Jul. 2025, to appear in Proc. 42nd Int. Conf. Machine Learning (ICML 2025), doi: https://doi.org/10.48550/arXiv.2507.06952
M. Schmidt and H. Lipson, “Distilling Free‑Form Natural Laws from Experimental Data,” Science, vol. 324, no. 5923, pp. 81‑85, Apr. 2009, doi: https://doi.org/10.1126/science.1165893
S.‑M. Udrescu and M. Tegmark, “AI Feynman: A Physics‑Inspired Method for Symbolic Regression,” Science Advances, vol. 6, no. 16, eaay2631, Apr. 2020, doi: https://doi.org/10.1126/sciadv.aay2631
S. A. Shintani, “Hyperthermal Sarcomeric Oscillations Generated in Warmed Cardiomyocytes Control Amplitudes with Chaotic Properties While Keeping Cycles Constant,” Biochemical and Biophysical Research Communications, vol. 611, pp. 8‑13, 2022, doi: https://doi.org/10.1016/j.bbrc.2022.07.067
S. A. Shintani, “Chaordic Homeodynamics: The Periodic Chaos Phenomenon Observed at the Sarcomere Level and its Physiological Significance,” Biochem. Biophys. Res. Commun., vol. 760, 151712, 2025, doi: https://doi.org/10.1016/j.bbrc.2025.151712
S. Amershi, M. Cakmak, W. B. Knox, and T. Kulesza, “Power to the People: The Role of Humans in Interactive Machine Learning,” AI Magazine, vol. 35, no. 4, pp. 105‑120, 2014, doi: https://doi.org/10.1609/aimag.v35i4.2513
E. Tabassi, “Artificial Intelligence Risk Management Framework (AI RMF) 1.0,” National Institute of Standards and Technology, Gaithersburg, MD, USA, NIST AI 100‑1, Jan. 2023. doi: https://doi.org/10.6028/NIST.AI.100-1
S. A. Shintani, K. Oyama, N. Fukuda, and S. Ishiwata, “High-Frequency Sarcomeric Auto-Oscillations Induced by Heating in Living Neonatal Cardiomyocytes of the Rat,” Biochem. Biophys. Res. Commun., vol. 457, no. 2, pp. 165-170, 2015, doi: https://doi.org/10.1016/j.bbrc.2014.12.077
S. A. Shintani, T. Washio, and H. Higuchi, “Mechanism of Contraction Rhythm Homeostasis for Hyperthermal Sarcomeric Oscillations of Neonatal Cardiomyocytes,” Scientific Reports, vol. 10, 20468, 2020, doi: https://doi.org/10.1038/s41598-020-77443-x
L. Sweeney, “k-Anonymity: A Model for Protecting Privacy,” Int. J. Uncertainty, Fuzziness and Knowledge-Based Systems, vol. 10, no. 5, pp. 557-570, 2002, doi: https://doi.org/10.1142/S0218488502001648.
Posted
Submitted: 2025-08-07 23:10:44 UTC
Published: 2025-08-15 09:16:45 UTC
License
Copyright (c) 2025 Seine A. Shintani

This work is licensed under a Creative Commons Attribution 4.0 International License.