Preprint / Version 1

Cognitive Exoskeletons: External Structure as the Determinant of LLM Output Identity

Authors

  • Fujiyoshi, Shun (Independent Researcher)

DOI:

https://doi.org/10.51094/jxiv.3302

Keywords:

cognitive exoskeleton, LLM output identity, structural mediation, prompt architecture, agent harness, creative emergence, attentional filtering, human-AI collaboration

Abstract

Large language models produce impressive but inconsistent outputs. The field has tried two fixes: change the model (fine-tuning, RLHF) or change the human (prompt engineering). Both miss what we believe is the actual lever. We propose attaching a cognitive exoskeleton—an external structural layer that wraps around the AI and shapes its output character through architecture, not parameter modification. We define the term precisely: a cognitive exoskeleton must be model-agnostic, identity-preserving across model switches, applicable across modalities, and self-correcting. Two implementations—SPARQ for output quality and GHOSTY COLLIDER for creative leaps—have been deployed across ~5,200 retained outputs in text, image, and video. Our central observation: wrap different LLMs in the same exoskeleton and the outputs converge; wrap the same LLM in different exoskeletons and the outputs diverge. This has not been validated through blinded evaluation. Drawing on Maisano et al. (2026), who showed that attentional filtering structure determines creative problem-solving style, we propose a functional analogy between exoskeleton interventions and cognitive filtering. We advance an operationally observed claim—that external structure significantly shapes output consistency—and a stronger hypothesis requiring validation: that output identity is an emergent property of the exoskeleton, not the model.
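The four defining properties of a cognitive exoskeleton can be sketched as a thin wrapper interface. This is a minimal illustration, not the paper's method: the SPARQ and GHOSTY COLLIDER implementations are not described here, so every class, function, and check below is a hypothetical stand-in for the architecture the abstract defines (model-agnostic input, structural prompt shaping, a self-correcting acceptance loop).

```python
from typing import Callable

# Model-agnostic contract: any LLM is just a prompt-in, text-out function.
ModelFn = Callable[[str], str]

class Exoskeleton:
    """Hypothetical external structural layer (names are illustrative)."""

    def __init__(self, shape_prompt: Callable[[str], str],
                 accept: Callable[[str], bool], max_retries: int = 3):
        self.shape_prompt = shape_prompt  # structural layer wrapped around the task
        self.accept = accept              # self-correcting acceptance check
        self.max_retries = max_retries

    def run(self, model: ModelFn, task: str) -> str:
        prompt = self.shape_prompt(task)
        out = model(prompt)
        # Self-correction loop: re-prompt until the structural check passes.
        for _ in range(self.max_retries):
            if self.accept(out):
                break
            out = model(prompt + "\nRevise: the previous draft failed the check.")
        return out

# Identity preservation across model switches: the SAME exoskeleton wraps
# two DIFFERENT "models", and the structural constraint (here, a required
# prefix) holds in both outputs regardless of which model produced them.
exo = Exoskeleton(
    shape_prompt=lambda t: f"ANSWER: {t}",
    accept=lambda s: s.startswith("ANSWER:"),
)
model_a: ModelFn = lambda p: p.upper()  # toy model 1: shouts back the prompt
model_b: ModelFn = lambda p: p          # toy model 2: echoes the prompt

print(exo.run(model_a, "x").startswith("ANSWER:"))  # True
print(exo.run(model_b, "x").startswith("ANSWER:"))  # True
```

The toy models make the convergence claim concrete in miniature: the outputs share structure because the exoskeleton imposes it, not because the models agree.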

Conflicts of Interest

The author declares no conflicts of interest.


References

Bai, Y., et al. (2022). Constitutional AI: Harmlessness from AI feedback. arXiv:2212.08073.

Bertran, M., Fogliato, R., and Wu, Z. S. (2026). Many AI Analysts, One Dataset: Navigating the Agentic Data Science Multiverse. arXiv:2602.18710.

Boden, M. A. (2004). The Creative Mind: Myths and Mechanisms. Routledge.

Bourdieu, P. (1984). Distinction: A Social Critique of the Judgement of Taste. Harvard University Press.

Clark, A. and Chalmers, D. (1998). The extended mind. Analysis, 58(1), 7-19.

de Bono, E. (1967). New Think: The Use of Lateral Thinking. Basic Books.

Doshi, A. R. and Hauser, O. P. (2024). Generative AI enhances individual creativity but reduces the collective novelty of creative work. Science Advances, 10(28), eadn5290.

Dreyfus, S. E. and Dreyfus, H. L. (1980). A five-stage model of the mental activities involved in directed skill acquisition. ORC 80-2, University of California, Berkeley.

Finke, R. A., Ward, T. B., and Smith, S. M. (1992). Creative Cognition: Theory, Research, and Applications. MIT Press.

Formentini, F. and Burelli, C. (2025). A Vision for a Vertically-Layered Cognitive Exoskeleton (CERES). SSRN Electronic Journal. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5004131.

Fujiyoshi, S. (2026). From Theory to Protocol: Executable Frameworks for Creative Emergence and Strategic Foresight. Zenodo. https://doi.org/10.5281/zenodo.18707858. Also submitted to ICCC 2026.

Khattab, O., et al. (2023). DSPy: Compiling declarative language model calls into self-improving pipelines. arXiv:2310.03714.

Kirkpatrick, M. (2023). The Cognitive Exoskeleton. The Futurists Podcast, December 2023.

Koestler, A. (1964). The Act of Creation. Hutchinson.

Ling, L. (2025). Brain Cache: Generative AI as a Cognitive Exoskeleton for Externalizing, Structuring, and Activating Knowledge. GenAICHI Workshop. https://generativeaiandhci.github.io/papers/2025/genaichi2025_51.pdf.

Maisano, H., Chesebrough, C., Zhang, F., Daly, B., Beeman, M., and Kounios, J. (2026). ADHD symptom magnitude predicts creative problem-solving performance and insight versus analysis solving modes. Personality and Individual Differences, 113660.

Ouyang, L., et al. (2022). Training language models to follow instructions with human feedback. NeurIPS 2022.

Polanyi, M. (1966). The Tacit Dimension. University of Chicago Press.

Rafailov, R., et al. (2023). Direct preference optimization: Your language model is secretly a reward model. NeurIPS 2023.

Tan, E. S. Z., Soubki, A., and Cranmer, M. (2026). SymTorch: A Framework for Symbolic Distillation of Deep Neural Networks. arXiv:2602.21307.

Vasilopoulos, A. (2026). Codified Context: Infrastructure for AI Agents in a Complex Codebase. arXiv:2602.20478.

AMA-Bench Authors. (2026). AMA-Bench: Evaluating Long-Horizon Memory for Agentic Applications. arXiv:2602.22769.

Auton Authors. (2026). The Auton Agentic AI Framework. arXiv preprint, February 27, 2026.

Wang, X., et al. (2022). Self-Consistency improves chain of thought reasoning in language models. arXiv:2203.11171.

Wei, J., et al. (2022). Chain-of-thought prompting elicits reasoning in large language models. NeurIPS 2022.

Wiles, E., Krayer, L., Abbadi, M., Awasthi, U., Kennedy, R., Mishkin, P., Sack, D., and Candelon, F. (2024). GenAI as an Exoskeleton: Experimental Evidence on Knowledge Workers Using GenAI on New Skills. Working Paper. https://emmawiles.github.io/storage/reskill.pdf.

Wu, Q., et al. (2023). AutoGen: Enabling next-gen LLM applications via multi-agent conversation. arXiv:2308.08155.

Xu, S. and Zhang, X. (2025). Cognitive Exoskeleton: Augmenting Human Cognition with AI-Mediated Intelligent Visual Feedback. arXiv:2508.00846.

Yao, S., et al. (2023). ReAct: Synergizing reasoning and acting in language models. ICLR 2023.

Yao, S., et al. (2023). Tree of thoughts: Deliberate problem solving with large language models. NeurIPS 2023.


Published


Submitted: 2026-03-04 03:07:21 UTC

Published: 2026-04-03 05:42:58 UTC

Research field: Information Science