Preprint / Version 1

Exploring the applicability of Large Language Models to citation context analysis

Authors

  • Kai Nishikawa, National Institute of Science and Technology Policy
  • Hitoshi Koshiba, National Institute of Science and Technology Policy

DOI:

https://doi.org/10.51094/jxiv.467

Keywords:

Scientometrics, Citation Context Analysis, Annotation, Large Language Model (LLM), ChatGPT

Abstract

In contrast to conventional quantitative citation analysis, a method called citation context analysis has been proposed that takes into account the contextual information of individual citations. Although citation context analysis is expected to yield findings complementary to those of citation analysis, it requires the creation of a large dataset through annotation work, which is costly. Meanwhile, attempts have been made to delegate such annotation work to Large Language Models (LLMs), which have rapidly gained popularity in recent years. However, most of these previous studies were conducted on general texts, and it is not clear how well LLMs perform when applied to texts with specialized vocabulary and formatting, such as research papers. This study explores the applicability of LLMs to citation context analysis by referring to a publicly available citation context analysis dataset and the manual used for the annotation work that created it. More specifically, we examine the following questions: (1) Can LLMs replace human annotators in citation context analysis? (2) How can LLMs be effectively utilized in citation context analysis? The results show that LLM annotation is comparable to or better than human annotation in terms of consistency, but falls short of it in terms of accuracy. It is therefore not appropriate at this time to have LLMs simply replace human annotators in citation context analysis. However, when it is difficult to recruit a sufficient number of human annotators, an LLM can be used as one of the annotators. This study provides these basic findings, which are important for the future development of citation context analysis.
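As an illustration of the workflow examined in the abstract, the following is a minimal sketch of prompting an LLM to annotate citation contexts and then scoring it against a human annotator. The label scheme, prompt wording, model name, and example passages are hypothetical assumptions for illustration, not the setup used in this study; it assumes the openai (>=1.0) and scikit-learn packages.

```python
# Minimal sketch (not the authors' actual pipeline): have an LLM assign one
# label from a predefined scheme to each citation context, then compare its
# labels against human annotations. LABELS, the prompt, and the model name
# are illustrative assumptions, not taken from the paper.
from openai import OpenAI                      # openai>=1.0 client
from sklearn.metrics import cohen_kappa_score  # chance-corrected agreement

LABELS = ["background", "method", "comparison"]  # hypothetical scheme

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def annotate(context: str) -> str:
    """Ask the model to assign exactly one label to a citation context."""
    prompt = (
        "You are annotating citation contexts in research papers.\n"
        f"Choose exactly one label from {LABELS} for the passage below.\n"
        f"Passage: {context}\n"
        "Answer with the label only."
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0,  # deterministic-ish output aids consistency checks
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip().lower()


# Toy evaluation mirroring the accuracy/consistency distinction above:
# accuracy = agreement with a gold standard; Cohen's kappa = chance-corrected
# agreement with one human annotator.
contexts = [
    "Prior work has mapped urban resilience narratives [7].",
    "We follow the annotation procedure of [12].",
]
human = ["background", "method"]
llm = [annotate(c) for c in contexts]

accuracy = sum(h == m for h, m in zip(human, llm)) / len(human)
kappa = cohen_kappa_score(human, llm)
print(f"accuracy={accuracy:.2f}  kappa={kappa:.2f}")
```

In practice, consistency across annotators (human or LLM) would be assessed over a much larger sample, and a multi-rater coefficient such as Krippendorff's alpha could replace Cohen's kappa when more than two annotators are compared.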

Conflicts of Interest Disclosure

The authors have no competing interests to declare that are relevant to the content of this article.



Posted

Submitted: 2023-07-31 05:20:55 UTC

Published: 2023-08-03 06:30:28 UTC
Section
Interdisciplinary Sciences