Certain awkward phrases characteristic of ChatGPT appear in peer-reviewed publications in scientific journals, according to the analysis conducted by Artur Strzelecki, PhD, a professor at the University of Economics in Katowice. According to the researcher, this may undermine trust in the process of preparing scientific content.
Answers generated by ChatGPT often contain characteristic phrases rarely used by human writers, such as 'As of my last knowledge update...', 'As an AI language model...', 'I don't have access to real-time...', and 'Certainly, here is...'.
Strzelecki decided to check whether such 'unnatural' phrases had begun to appear more often in English-language scientific publications. Their presence would indicate that the authors had copied a fragment of ChatGPT's answer into their paper without reading it, and that this had gone unnoticed by co-authors, reviewers, and journal editors.
In his work, the researcher from the University of Economics in Katowice analysed only those papers that did not mention ChatGPT or declare using this tool while writing the paper.
The author selected a number of typical ChatGPT phrases and searched for them in the Google Scholar database of scientific publications. Before ChatGPT's release, some of these phrases appeared only sporadically, most often as fragments of dialogue quoted in qualitative studies; in the years that followed, their popularity soared.
Such phrases, which began to appear in publications due to the thoughtless use of ChatGPT, include 'as of my last knowledge update...', 'as an AI language model...', and 'I don't have access to real-time...'.
It also turns out that interface content was copied into scientific publications over a hundred times: each ChatGPT response used to end with a 'Regenerate response' button, a command asking the chatbot to rephrase its answer. Many careless authors copied the AI's response along with this 'footer' and evidently did not read the text before submitting it to the publisher.
If we take into account only the more prestigious, peer-reviewed scientific journals from the Scopus database (those in the first and second quartile of the CiteScore indicator), ChatGPT phrases were less common, but Strzelecki still found them in 89 papers. This means that these new types of 'linguistic errors' can slip by unnoticed even in more reputable journals. The analysis is available in a paper published in Learned Publishing: https://doi.org/10.1002/leap.1650
'ChatGPT is just a tool. The authors take full responsibility for what they include in their scientific papers', Strzelecki comments in an interview with PAP. According to the researcher, the thoughtless use of ChatGPT in science undermines trust in published research. 'How can you trust scientific journals when even they publish carelessly prepared content?', the researcher points out.
According to Strzelecki, the task of every AI user is to critically analyse the entire answer generated by the model. Only then can you consider its further use.
Strzelecki points out that ChatGPT is a very helpful tool, well worth using, for example, for language proofreading or for translating a text into a foreign language. He cautions, however, that we should not take everything AI produces at face value.
'Not everything that sounds professional is true', the researcher emphasises. 'If we have to use ChatGPT in our work, we should do it responsibly'.
PAP - Science in Poland, Ludwika Tomala (PAP)