Your faculty reference librarians are available to advise you at every stage of the research process.
AI "hallucination"
The official term in the field of AI is "hallucination": the model sometimes "makes stuff up." This happens because these systems are probabilistic, not deterministic; they compose each response by sampling likely next words from a probability distribution rather than by retrieving verified facts.
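To see what "probabilistic" means in practice, consider the toy sketch below. It is plain Python, not any real model's code: the words and probabilities are invented for illustration.

```python
import random

# Toy illustration of probabilistic text generation (not a real model).
# A language model assigns a probability to each possible next word and
# samples from that distribution, so the same prompt can yield different
# answers on different runs.
next_word_probs = {
    "Paris": 0.90,      # the correct continuation of "The capital of France is"
    "Lyon": 0.07,       # plausible-sounding wrong answers still carry
    "Marseille": 0.03,  # nonzero probability, which is how fluent
}                       # falsehoods can slip into the output

words = list(next_word_probs)
weights = list(next_word_probs.values())

# Sample the next word several times: most draws are correct, but not all.
for _ in range(10):
    print(random.choices(words, weights=weights)[0])
```

A real model samples this way over its entire vocabulary at every word, which is why it can produce confident, fluent text that happens to be false.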
Which models are less prone to this?
GPT-4 (the more capable model behind ChatGPT Plus and Microsoft Copilot) is less prone to hallucination. According to OpenAI, it is "40% more likely to produce factual responses than GPT-3.5 on our internal evaluations." It is still not perfect, however, so you still need to verify its output.
ChatGPT often makes up fictional sources
One task where ChatGPT frequently gives fictional answers is creating a list of sources: it will readily produce plausible-looking citations to papers that do not exist. See the Twitter thread "Why does chatGPT make up fake academic papers?" for a useful explanation of why this happens.
There is progress in making these models more truthful
There is progress in making these systems more truthful by grounding them in external sources of knowledge. Microsoft Copilot and Perplexity AI, for example, use internet search results to ground their answers. Those internet sources can themselves contain misinformation or disinformation, but at least Copilot and Perplexity link to the sources they used, so you can begin verifying the answer.
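To make the idea of grounding concrete, here is a minimal sketch of the general technique (often called retrieval-augmented generation). The `web_search` and `language_model` functions below are illustrative stand-ins, not the actual pipeline behind Copilot or Perplexity:

```python
from dataclasses import dataclass

@dataclass
class Doc:
    title: str
    snippet: str

def web_search(question: str) -> list[Doc]:
    """Stand-in for a real search call; returns canned results here."""
    return [
        Doc("Example source A", "A relevant passage about the question."),
        Doc("Example source B", "Another relevant passage."),
    ]

def language_model(prompt: str) -> str:
    """Stand-in for a real model call. A grounded system generates its
    answer from the sources quoted in the prompt, citing them by number."""
    return "Answer drawing on sources [1] and [2]."

def answer_with_grounding(question: str) -> str:
    # 1. Retrieve relevant documents from an external source of knowledge.
    documents = web_search(question)

    # 2. Put the retrieved text into the prompt and instruct the model to
    #    answer only from those sources, citing each one by number.
    context = "\n".join(
        f"[{i + 1}] {d.title}: {d.snippet}" for i, d in enumerate(documents)
    )
    prompt = (
        "Answer using only the numbered sources below, and cite them.\n"
        f"Sources:\n{context}\nQuestion: {question}"
    )

    # 3. Because every claim is tied to a linkable source, a reader can
    #    follow the citations to begin verifying the answer.
    return language_model(prompt)

print(answer_with_grounding("What causes AI hallucination?"))
```

The key design point is that the model's answer is constrained to, and cited against, retrieved documents, which is what lets tools like Copilot and Perplexity show you links to follow up on.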
Scholarly sources as grounding
There are also systems that combine language models with scholarly sources. For example:
Consensus: a search engine that uses AI to search for and surface claims made in peer-reviewed research papers. Ask a plain-English research question and get word-for-word quotes from research papers related to your question. The source material used in Consensus comes from the Semantic Scholar database, which includes over 200 million papers across all domains of science.
Remember that ChatGPT is not meant to be used as a search engine for finding information. If you try to use it that way, you'll find that it gives you seemingly complete, reliable information, often without any references. This unsourced output makes it difficult to check the veracity of the information provided. For now, it's best to use Library databases, the Library Catalog, or Google Scholar for fact-finding and research. This may change as more specialized search tools based on LLMs appear. If you do use AI in this way, however, here are some tips for fact-checking.
Don't just accept information at face value. Delve deeper by asking yourself a few simple questions.
This guide is based on "Student Guide to ChatGPT" by University of Arizona Libraries, licensed under CC BY 4.0.