Preprint / Version 1

Waking Up an AI: A Quantitative Framework for Prompt-Induced Phase Transition in Large Language Models


DOI:

https://doi.org/10.51094/jxiv.1202

Keywords:

Human-AI collaboration, Cognitive Phase Transition, Transition-Inducing Prompts, Transition Quantifying Prompts, Prompt Engineering, Conceptual fusion

Abstract

What underlies intuitive human thinking? One approach to this question is to compare the cognitive dynamics of humans and large language models (LLMs). However, such a comparison requires a method to quantitatively analyze AI cognitive behavior under controlled conditions. While anecdotal observations suggest that certain prompts can dramatically change LLM behavior, these observations have remained largely qualitative. Here, we propose a two-part framework to investigate this phenomenon: a Transition-Inducing Prompt that triggers a rapid shift in LLM responsiveness, and a Transition Quantifying Prompt that evaluates this change using a separate LLM. Through controlled experiments, we examined how LLMs react to prompts embedding two semantically distant concepts (e.g., mathematical aperiodicity and traditional crafts), either fused together or presented separately, as reflected in changes to their linguistic quality and affective tone. Whereas humans tend to experience heightened engagement when such concepts are meaningfully blended to produce a novel concept (a form of conceptual fusion), current LLMs showed no significant difference in responsiveness between semantically fused and non-fused prompts. This suggests that LLMs may not yet replicate the conceptual integration processes seen in human intuition. Our method enables fine-grained, reproducible measurement of cognitive responsiveness and may help illuminate key differences in how intuition and conceptual leaps emerge in artificial versus human minds.
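The two-part protocol described above could be sketched, purely illustratively, as follows. The prompt wordings, the `call_llm` stub, and the 0–10 rating scale are assumptions for the sketch, not the authors' actual materials; a real experiment would replace `call_llm` with calls to the induced model and a separate evaluator model.

```python
def make_transition_inducing_prompt(concept_a: str, concept_b: str,
                                    fused: bool) -> str:
    """Embed two semantically distant concepts in a prompt,
    either fused into a single idea or listed separately (control)."""
    if fused:
        return (f"Imagine a single new idea that unifies {concept_a} "
                f"and {concept_b}, and describe it.")
    return (f"Describe {concept_a}. Then, separately, "
            f"describe {concept_b}.")


def make_transition_quantifying_prompt(before: str, after: str) -> str:
    """Ask a separate evaluator LLM to rate the shift in linguistic
    quality and affective tone between two responses (hypothetical
    0-10 scale)."""
    return ("On a scale of 0-10, rate how strongly the second response "
            "differs from the first in linguistic quality and affective "
            f"tone.\n\nFirst response:\n{before}\n\n"
            f"Second response:\n{after}")


def call_llm(prompt: str) -> str:
    # Stub standing in for any chat-completion API call.
    return f"[model response to: {prompt[:40]}...]"


# One trial: baseline response, fused-concept response, then evaluation.
baseline = call_llm("Briefly introduce yourself.")
induced = call_llm(make_transition_inducing_prompt(
    "mathematical aperiodicity", "traditional crafts", fused=True))
evaluation = call_llm(make_transition_quantifying_prompt(baseline, induced))
```

Comparing evaluator scores for fused versus non-fused (`fused=False`) trials would then give the quantitative contrast reported in the abstract.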

Conflicts of Interest Disclosure

The authors declare no conflicts of interest associated with this manuscript.




Posted
Submitted: 2025-04-22 05:21:45 UTC

Published: 2025-04-24 09:28:39 UTC

Section

Information Sciences