Large Language Models (LLMs) — A type of generative AI model designed to process, respond to, and produce convincingly human-sounding language. The most famous example is ChatGPT, but countless other technologies use this type of model.
Hallucinations — Inaccurate or false claims made by generative AI like LLMs as a result of their reliance on probability. Possible hallucinations include but are not limited to non-existent people, publications, statistics, and events.
Lateral Reading — A strategy of verifying information from one source by comparing it to another source reporting on the same information. The second source should be as credible as possible so that it can serve as a benchmark of accuracy. This is a key strategy for research in general, not just when utilizing AI.
Prompt Engineering — The process of crafting and refining the input or request made to an AI.
Check out the National Science Foundation's Center for Integrative Research in Computing and Learning Sciences' AI Glossary for more key AI terms.
How AI Can Help Your Research
AI can serve as a valuable aid in the research process by helping to spark ideas, refine research questions, and highlight key themes within large collections of information. These tools can assist in locating pertinent sources, pulling out essential details, and comparing differing perspectives — all features that can make it easier to judge the accuracy and relevance of information. AI can also be used to condense lengthy articles or reports, transforming complex material into clear, concise summaries that promote better comprehension and more efficient study.
AI can further support academic work by:
Providing an introduction to a topic, offering background context, and clarifying challenging concepts.
Offering constructive feedback on your chosen approach to a topic.
Editing text for grammar and spelling mistakes and suggesting improvements.
Examining large datasets to identify patterns and draw insights.
Summarizing and synthesizing multiple large and complicated texts.
Producing prototypes, simulations, or scenario-based models.
Translating texts from other languages.
Searching digital resources to locate and recommend credible sources. NOTE: Not all AI tools are reliable for this, including ChatGPT. See the AI Tools page of this guide for more details on which tools the Librarians recommend.
While AI tools can be powerful aids in research and learning, they also come with important limitations and risks. To use them effectively and responsibly, it’s essential to remain aware of how these systems work — and where they can go wrong. The following points outline key considerations to keep in mind when evaluating AI-generated content.
AI lacks deep understanding. ChatGPT, for example, can create a plausible response, but it doesn't understand concepts. This can lead to errors and overgeneralization.
AI can't reliably evaluate the quality of its own responses. Fabricated information, such as fake articles and non-existent authors, is referred to as AI hallucinations. Challenge AI responses and require the AI to provide evidence to support any claims. Verify the accuracy of this evidence yourself by practicing lateral reading.
Remember that AI lacks critical or analytical thinking. Without real-world experience or context, it can struggle with interpreting irony, humor, nuanced metaphors, and common sense.
Don’t depend on AI too heavily — continue building your own knowledge and critical thinking skills.
Avoid relying solely on AI-produced summaries — read the original materials to fully grasp the finer details and context.
Keep in mind that many AI models are trained only on data up to a certain date and may not reflect the latest news or recent research.
Stay alert to potential bias in AI-generated content. Biased training data will result in AI reinforcing human bias and prejudice as though they are factual. As with any human-designed technology, "garbage in, garbage out."
Lateral Reading
Lateral reading is a strategy to determine the credibility of one source by comparing its information with that of similar sources. This is an important part of fact-checking any information provided by ChatGPT and other text-generating AI models.
ChatGPT's more reliable answers include linked citations to the sources it used to generate a reply. You should verify for yourself that these sources actually exist, are relevant to your search, and are credible. In most cases, these cited sources are better to use in your own project than the AI-generated answer itself.
Prompt Engineering
Context is the number one ingredient ChatGPT and similar AIs need to reply with a useful response. When using ChatGPT to help with an assignment, make sure to include all the relevant details from the assignment requirements. Prompt engineering refers to the careful crafting of the input or request made to an AI.
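For example, instead of simply asking "Help me with my paper," a context-rich prompt might look like the following (the course and assignment details here are invented purely for illustration): "I'm writing a 5-page argumentative essay for an introductory environmental science course on whether cities should ban single-use plastics. I need to cite at least four scholarly sources, and my audience is a general academic reader. Suggest three possible thesis statements and a brief outline for each."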
The mnemonic CLEAR can help you remember what good prompt writing needs in general.
Evaluate the AI as a Tool
When working with AI for research, it’s essential to assess both the technology and the information it produces with a critical eye. Check that information is accurate and well-sourced, free from bias, logically consistent, and not framed with manipulative or overly emotional language. Consider the following:
Purpose — What is the intended function or goal of the tool?
Credibility — Is the generated or presented information trustworthy? Since generative AI can create original text rather than simply returning search results, cross-checking multiple sources is crucial.
Ethics — Are there any moral or ethical issues connected to the use of this tool?
Content Uploads — Does the tool request you to provide existing materials, such as documents or images? If so, could copyright be a concern? Is there an option to prevent your uploads from being incorporated into its training data?
Privacy — What does its privacy policy say? In an educational setting, are there potential FERPA implications?
Reproducibility — If replicating results matters to your work, can the tool support that need?
Funding — Who finances the tool, and could that financial backing influence the reliability of its results?
Training Data — What sources or datasets were used to train the system, or what databases does it access? Evaluate whether the data is broad enough, includes subscription-only sources like library databases, is sufficiently up-to-date, and whether it contains biases either in content or in algorithm design.
Citations — When references are provided, verify that they are genuine and not fabricated or “hallucinated.”
Role-Playing
Assigning generative AI a role gives it more context about the type of response that will help you most. For example, ask the AI to play the role of interviewer for a job you're applying for, adopt the persona of a historical figure, or take a position counter to yours in a debate. Assign it the role of a professor to get feedback with an academic lens on your class notes or essay outline. Remember that if you have access to the real thing (like a professor or Librarian), talking to those experts is likely a better use of your time.
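A sample role-playing prompt might look like this (the job title and scenario are made up for illustration): "Act as a hiring manager interviewing me for an entry-level marketing assistant position. Ask me one common interview question at a time, wait for my answer, and then give me brief feedback on how I could improve it."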
Designing Study Aids
Use AI to create flash cards, quizzes, or summaries of your class notes. It can also create an assignment timeline for a particular project and other time management tools! See our Study Skills Guide for more ideas.
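A sample study-aid prompt might look like this (the course details are invented for illustration): "Here are my notes from this week's biology lecture on cellular respiration. Create ten flash-card-style question-and-answer pairs and a five-question multiple-choice quiz I can use to test myself, and suggest a one-week review schedule."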
Copyright concerns abound when it comes to AI. If you use AI to write something, does the copyright belong to you, to the AI, or to the company that created the AI? If you upload a copyrighted work into an AI tool to summarize, are you committing copyright infringement? The answers to these questions are still being decided, but schools are having to come up with policies on the fly.
Generally, unless your professor explicitly tells you that you can use AI, using AI to generate content to complete your coursework is considered academic dishonesty.
Notice that we specify "generating content," not using AI in general. If you ask an AI tool to write your paper for you, or to generate an answer to a discussion board assignment, that's academically dishonest. However, if you use AI tools in an assistive manner, meaning they help you create your own content rather than creating content for you, that is usually fine. Examples include asking an AI for paper topic ideas, having it help you outline a paper based on your own ideas of what topics to include, and using it to create study guides. These uses of AI don't let it do the work for you; they help you do your own work.
If you're not sure whether your planned use of AI is in violation of academic integrity, ask your professor!
Loss of Human Control
As AI becomes more and more independent and requires less and less input from human beings, the loss of human control is a major concern. For instance, autonomous war machines could make decisions that literally wipe out humanity. Human control of AI is already pretty tenuous: current AI models still use English to operate, but experts warn that they could generate their own language, which would vastly decrease the transparency of their operation.
The debate over AI autonomy also creates a legal gray area: if an AI does something immoral or destructive, it's not immediately clear with whom the blame lies, the AI or the people who made it.
Bias
Because AI algorithms are created by people and trained on what people say and do, any biases present in their behaviors will transfer to the AI. This causes AI to amplify racist, sexist, homophobic, and otherwise biased or prejudiced stances.
For example, facial recognition algorithms tend to negatively flag people of color more often than white people, or in some cases are unable to read darker faces in dark surroundings. AI programs also tend to reinforce sexist bias, like rejecting job applications from women much more often than those from men, or flagging certain jobs as "feminine" (like "nurse," "flight attendant," or "secretary").
AI bias is also a big problem in education, as detailed in this article. For example, AI detectors have falsely flagged essays as AI-generated, especially essays written by non-native English speakers.
Job Displacement
Because you don't have to pay AI or give it benefits like you would a human, companies are very eager to train AI tools to do the work of people who will then be fired. For instance, Duolingo announced that it would be replacing thousands of contractors with AI, a decision that was then walked back (at least a little). Microsoft also recently announced plans to replace human employees with AI.
In many cases, workers have been asked to train their own AI replacements, or worse, have been asked to do seemingly unrelated tasks that were then used to train their AI replacements without their knowledge.
According to the World Economic Forum's "Future of Jobs Report 2025," "trends in AI and information processing technology are expected to create 11 million jobs, while simultaneously displacing 9 million others, more than any other technology trend." This may seem like it's still a net positive, but the report also claims "Robotics and autonomous systems are expected to be the largest net job displacer, with a net decline of 5 million jobs." Autonomous systems almost certainly involve AI, so that's still AI replacing human workers.
Environmental Impact
AI might seem like magic, but using it actually has an enormous impact on the environment. AI data centers require vast amounts of energy and water. This article in MIT News gives an easily understandable breakdown.
According to the Washington Post (cited in this article, in case you can't access WaPo), writing a single 100-word email with ChatGPT is the equivalent of pouring out a bottle of water. And instead of researching ways to be more environmentally friendly, companies ask citizens of the towns their data centers are in to limit their use of resources, as with Texans being asked to take shorter showers, or with data centers' energy usage being responsible for significantly higher electric bills for ordinary citizens.
Privacy and Security
Most major AI tools have committed breaches of privacy, with many of them being trained on things like private healthcare records, Dark Web materials, and people's "private" clouds.
Anything entered into an AI tool is not private, despite these tools' claims. For instance, many ChatGPT users' conversations with the tool were recently found in Google search results, exposing sensitive information. Any nurses or doctors who upload patient health information to an AI tool (an increasingly common occurrence) are breaching the Health Insurance Portability and Accountability Act (HIPAA). And since many AI tools are trained on the whole of the internet, even private or copyrighted words or images are subject to being incorporated into their algorithms.
AI can also be used to steal identities or to generate explicit images of someone without their consent (it's happened to Taylor Swift, multiple times). And because most AI tools have security flaws, they can be hacked for nefarious purposes, as when this research team exploited Gemini's flaws to hack into the physical systems of smart houses.