Preprint / Version 1

What Is and Is Not Problematic About AI Use in Qualitative Research

―Through a Systematic Examination of Key Issues Grounded in Epistemology―


DOI: https://doi.org/10.51094/jxiv.4183

Keywords:

Generative AI, large language models, qualitative research, methodology, epistemology

Abstract

The use of generative AI in qualitative research has become increasingly contested in management studies. This paper systematically compares scholarly arguments published primarily between 2023 and 2026 in order to draw a clear line between what is and is not problematic about AI use. It organizes the debate around five key issues (the epistemic status of LLM output, human misattribution mechanisms and epistemological congruence, interpretive agency and deskilling, the scientific value of efficiency, and the flattening of participant categories and the homogenization of knowledge), and for each issue examines where the boundary between problematic and unproblematic use lies.

The analysis suggests that the appropriateness of AI use depends on the technical characteristics of LLMs, the epistemological tradition the researcher draws on, the tasks delegated to AI, the mode of engagement, and the fit of the status accorded to AI output. For example, uncritically adopting LLM output as analytical findings is problematic in every tradition, whereas treating it as material for the researcher's own judgment is evaluated differently across epistemological traditions. As a precondition for judging this fit, the paper proposes "epistemological self-awareness": not a sufficient condition for justifying AI use, but a prerequisite for any such judgment to be possible. Building on these findings, the paper proposes three literacies (methodological literacy, technical literacy, and practical literacy). Its contribution lies in reframing the question from "should AI be used?" into the more specific question of "under which epistemological assumptions, for which purposes, and to what extent should work be delegated to AI?"

Conflict of Interest Disclosure

The authors have no conflicts of interest to disclose.


References

Aguinis, H. (2026). Method-driven theory advancements and AI implementation. Journal of International Business Studies. https://doi.org/10.1057/s41267-026-00851-0

Anis, S., & French, J. A. (2023). Efficient, explicatory, and equitable: Why qualitative researchers should embrace AI, but cautiously. Business & Society, 62(6), 1139–1144.

Bail, C. A. (2024). Can generative AI improve social science? Proceedings of the National Academy of Sciences, 121(21), e2314021121.

Bansal, P., Smith, W. K., & Vaara, E. (2018). New ways of seeing through qualitative research. Academy of Management Journal, 61(4), 1189–1195.

Bechky, B. A., & Davis, G. F. (2025). Resisting the algorithmic management of science: Craft and community after generative AI. Administrative Science Quarterly, 70(1), 1–22.

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In FAccT '21 (pp. 610–623). ACM.

Blondeel, E., Everaert, P., & Opdecam, E. (2025). A practical guide to implementing ChatGPT as a secondary coder in qualitative research. International Journal of Accounting Information Systems, 56, 100754. https://doi.org/10.1016/j.accinf.2025.100754

Braun, V., & Clarke, V. (2022). Conceptual and design thinking for thematic analysis. Qualitative Psychology, 9(1), 3–26. https://doi.org/10.1037/qup0000196

Braun, V., & Clarke, V. (2023). Toward good practice in thematic analysis: Avoiding common problems and be(com)ing a knowing researcher. International Journal of Transgender Health, 24(1), 1–6. https://doi.org/10.1080/26895269.2022.2129597

Carlson, N. A., & Burbano, V. C. (2026). The use of LLMs to annotate data in management research: Foundational guidelines and warnings. Strategic Management Journal, 47, 699–725. https://doi.org/10.1002/smj.70023

Chatzichristos, G. (2025). Qualitative research in the era of AI: A return to positivism or a new paradigm? International Journal of Qualitative Methods, 24, 1–12. https://doi.org/10.1177/16094069251337583

Corley, K. G., & Lin, X. (2024–2025). LOOM: Locus of observed meanings [Series]. Thread Counts. Retrieved March 15, 2026, from https://www.threadcounts.org/p/loom-locus-of-observed-meanings

Corley, K. G., & Lin, X. (2025, October 3). LOOM XIV: The calculator fallacy. Thread Counts. Retrieved March 22, 2026, from https://www.threadcounts.org/p/loom-xiv-the-calculator-fallacy

Corley, K. G., & Lin, X. (2026, January 28). Rethinking human-AI collaboration in interpretive research [Webinar]. New Scholars Generative AI Series. Retrieved April 5, 2026, from https://linxule.com/talks/rethinking-human-ai-collaboration-interpretive-research/

Cornelissen, J., Höllerer, M. A., Boxenbaum, E., Faraj, S., & Gehman, J. (2024). Large language models and the future of organization theory. Organization Theory, 5(1), 1–15.

De Paoli, S. (2023). Performing an inductive thematic analysis of semi-structured interviews with a large language model: An exploration and provocation on the limits of the approach. Social Science Computer Review, 42(4), 997–1019. https://doi.org/10.1177/08944393231220483

De Paoli, S. (2024). Further explorations on the use of large language models for thematic analysis: Open-ended prompts, better terminologies and thematic maps. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research, 25(3), Article 5. https://doi.org/10.17169/fqs-25.3.4196

Delios, A., Tung, R. L., & van Witteloostuijn, A. (2025). How to intelligently embrace generative AI: The first guardrails for the use of GenAI in IB research. Journal of International Business Studies, 56(3), 451–460. https://doi.org/10.1057/s41267-024-00736-0

Eisenhardt, K. M. (1989). Building theories from case study research. Academy of Management Review, 14(4), 532–550.

Friese, S. (2025). Conversational analysis with AI: CA to the power of AI: Rethinking coding in qualitative analysis. SSRN Scholarly Paper No. 5232579. Social Science Research Network. https://doi.org/10.2139/ssrn.5232579

Garcia Quevedo, D., Glaser, A., & Verzat, C. (2025). Enhancing theorization using artificial intelligence: Leveraging large language models for qualitative analysis of online data. Organizational Research Methods, 1–21. https://doi.org/10.1177/10944281251339144

Gatrell, C., Muzio, D., Post, C., & Wickert, C. (2024). Here, there and everywhere: On the responsible use of artificial intelligence (AI) in management research and the peer-review process. Journal of Management Studies, 61(3), 739–751. https://doi.org/10.1111/joms.13045

Gehman, J., Glaser, V. L., Eisenhardt, K. M., Gioia, D. A., Langley, A., & Corley, K. G. (2018). Finding theory–method fit: A comparison of three qualitative approaches to theory building. Journal of Management Inquiry, 27(3), 284–300.

Gilardi, F., Alizadeh, M., & Kubli, M. (2023). ChatGPT outperforms crowd workers for text-annotation tasks. Proceedings of the National Academy of Sciences, 120(30), e2305016120.

Gioia, D. A. (2021). A systematic methodology for doing qualitative research. Journal of Applied Behavioral Science, 57(1), 20–29.

Gioia, D. A., Corley, K. G., & Hamilton, A. L. (2013). Seeking qualitative rigor in inductive research: Notes on the Gioia methodology. Organizational Research Methods, 16(1), 15–31.

Grimes, M., von Krogh, G., Feuerriegel, S., Rink, F., & Gruber, M. (2023). From scarcity to abundance: Scholars and scholarship in an age of generative artificial intelligence. Academy of Management Journal, 66(6), 1617–1624.

Grodal, S., Anteby, M., & Holm, A. L. (2021). Achieving rigor in qualitative analysis: The role of active categorization in theory building. Academy of Management Review, 46(3), 591–612.

Grodal, S., Ha, J., Hood, E., & Rajunov, M. (2024). Between humans and machines: The social construction of the generative AI category. Organization Theory, 5(3), Article 26317877241275125. https://doi.org/10.1177/26317877241275125

Grodal, S., & Schildt, H. (2025). Qualitative research with artificial intelligence: The threat of automated uses and the augmented alternative. Working paper.

Grossmann, I., Engeler, I., Akhtar, S., Johnson, S. G. B., & Hofmann, W. (2023). AI and the transformation of social science research. Science, 380(6650), 1108–1109.

Hamilton, L., Elliott, D., Quick, A., Smith, S., & Choplin, V. (2023). Exploring the use of AI in qualitative analysis: A comparative study of guaranteed income data. International Journal of Qualitative Methods, 22, 16094069231201504. https://doi.org/10.1177/16094069231201504

Hilbolling, S., Kratochvil, R., Corley, K. G., Glaser, V. L., Jung, J., Levina, N., Lin, X., & Smith, A. (2024). AI as a collaborator for qualitative research scholars? Reflections on embracing opportunities while preserving its essence. Working paper.

Ibrahim, E. I., & Voyer, A. (2026). Qualitative research with LLM chatbots: Technological reflexivity for interpretative technology. Qualitative Research, 26(1), 133–159. https://doi.org/10.1177/14687941251390794

Jowsey, T., Braun, V., Clarke, V., Lupton, D., & Fine, M. (2025). We reject the use of generative artificial intelligence for reflexive qualitative research. Qualitative Inquiry. Advance online publication. https://doi.org/10.1177/10778004251401851

Köhler, T., Smith, A., & Bhakoo, V. (2022). Templates in qualitative research methods: Origins, limitations, and new directions. Organizational Research Methods, 25(2), 183–210. https://doi.org/10.1177/10944281211060710

Krlev, G., Hannigan, T. R., & Spicer, A. (2025). What makes a good review article? Empirical evidence from management and organization research. Academy of Management Annals, 19(1), 376–403. https://doi.org/10.5465/annals.2021.0051

Langley, A., & Abdallah, C. (2011). Templates and turns in qualitative studies of strategy and management. In D. Bergh & D. Ketchen (Eds.), Building methodological bridges (Vol. 6, pp. 201–235). Emerald.

Lieder, F. R., & Schäffer, B. (2025). Reconstructive social research prompting: Distributed interpretation between AI and researchers in qualitative research. OSF Preprints. https://doi.org/10.31235/osf.io/d6e9m_v2

Lindebaum, D., & Fleming, P. (2024). ChatGPT undermines human reflexivity, scientific responsibility and responsible management research. British Journal of Management, 35(2), 566–575. https://doi.org/10.1111/1467-8551.12781

Lorenz, F., Lorenzen, S., Franco, M., Velz, J., & Clauß, T. (2024). Generative artificial intelligence in management research: A practical guide on mistakes to avoid. Management Review Quarterly, 1–21. https://doi.org/10.1007/s11301-024-00469-2

Ludwig, J., Mullainathan, S., & Rambachan, A. (2025). Large language models: An applied econometric framework. NBER Working Paper No. 33344.

Mees-Buss, J., Welch, C., & Piekkari, R. (2022). From templates to heuristics: How and why to move beyond the Gioia methodology. Organizational Research Methods, 25(2), 405–429.

Messeri, L., & Crockett, M. J. (2024). Artificial intelligence and illusions of understanding in scientific research. Nature, 627(8002), 49–58.

Messner, R., Smith, S., & Richards, C. (2025). Artificial intelligence and qualitative data analysis: Epistemological incongruences and the future of the human experience. International Journal of Qualitative Methods, 24, 1–13. https://doi.org/10.1177/16094069251371481

Mills, C. W. (1959). The sociological imagination. Oxford University Press.

Morgan, D. L. (2023). Exploring the use of artificial intelligence for qualitative data analysis: The case of ChatGPT. International Journal of Qualitative Methods, 22, 16094069231211248. https://doi.org/10.1177/16094069231211248

Morris, M. (2024). Magical thinking and the test of humanity. AI & Society, 39, 2581–2591.

Nguyen, D. C., & Welch, C. (2025a). Generative artificial intelligence in qualitative data analysis: Analyzing—or just chatting? Organizational Research Methods, 29(1), 3–39. https://doi.org/10.1177/10944281251377154

Nguyen, D. C., & Welch, C. (2025b). Engaged and responsible scholarship: Why qualitative researchers should not embrace GenAI. Business & Society. Advance online publication. https://doi.org/10.1177/00076503251386539

Nguyen-Trung, K. (2025). ChatGPT in thematic analysis: Can AI become a research assistant in qualitative research? Quality & Quantity, 59(6), 4945–4978. https://doi.org/10.1007/s11135-025-02165-z

Peterson, A. J. (2025). AI and the problem of knowledge collapse. AI & Society, 40, 3249–3269.

Prescott, M. R., Yeager, S., Ham, L., Rivera Saldana, C. D., Serrano, V., Narez, J., Paltin, D., Delgado, J., Moore, D. J., & Montoya, J. (2024). Comparing the efficacy and efficiency of human and generative AI: Qualitative thematic analyses. JMIR AI, 3, e54482. https://doi.org/10.2196/54482

Raisch, S., & Krakowski, S. (2021). Artificial intelligence and management: The automation–augmentation paradox. Academy of Management Review, 46(1), 192–210. https://doi.org/10.5465/amr.2018.0072

Sandberg, J. (2005). How do we justify knowledge produced within interpretive approaches? Organizational Research Methods, 8(1), 41–68.

Törnberg, P. (2024). Large language models outperform expert coders and supervised classifiers at annotating political social media messages. Social Science Computer Review, 43, 1181–1195. https://doi.org/10.1177/08944393241286471

Traberg, C. S., Roozenbeek, J., & van der Linden, S. (2026). AI is turning research into a scientific monoculture. Communications Psychology, 4. https://doi.org/10.1038/s44271-026-00428-5

Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. In Advances in Neural Information Processing Systems (Vol. 30). Curran Associates.

Nomura, Y. (2017). Shakaikagaku no kangaekata: Ninshikiron, risāchi dezain, shuhō [How to think about social science: Epistemology, research design, and methods]. University of Nagoya Press.


Published


Submitted: 2026-04-23 17:33:47 UTC

Published: 2026-04-28 09:44:09 UTC
Research field
Economics and Business Administration