
AI Literacy

While AI can be helpful for brainstorming, organizing existing information, and summarizing, it is notorious for producing non-factual information. This happens so often that there is a word for it: hallucinations. These hallucinations are often presented confidently as fact. If a chatbot generates a citation and an article summary, for example, check to make sure the article exists. Sometimes it does not.

Also, keep in mind that an AI’s output is only as current as the latest data it was trained on. It won't know anything more recent than the cutoff date of its training dataset.

Another ethical issue is that output generated by a chatbot can be biased. If there are biases inherent in the training materials, those biases will be perpetuated in the output.

Make sure you:

  • fact-check any information generated by an AI, including checking citations and sources of information
  • critically evaluate AI output for potential bias that can skew the information
  • remember that generative AI tools are not search engines; they use data and information to predict patterns and generate responses (see the short illustration after this list)
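To make that last point concrete, here is a small, purely illustrative Python sketch. It is not how real LLMs are built (they are vastly more complex), and the tiny "training text" is made up for the example. The point is that the program only learns which words tend to follow which, then produces fluent-looking output by prediction alone; nothing in it checks whether the result is true.

# Toy next-word predictor: an illustration of "predicting patterns,"
# not a real LLM. The training text below is invented for this example.
from collections import Counter, defaultdict

corpus = "the library opens at nine . the library closes at five ."
words = corpus.split()

# Count which word tends to follow each word in the training text.
next_word_counts = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def generate(start, length=5):
    # Repeatedly append the most common next word seen in training.
    out = [start]
    for _ in range(length):
        followers = next_word_counts.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))  # fluent-sounding output, but never fact-checked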

These three videos created by StudyForge will introduce you to academic integrity in the world of AI. For more information, see the AI@ATC tab in this guide.

Large Language Models (LLMs), a subset of generative AI that includes ChatGPT, Perplexity, Microsoft Copilot, and others, are trained on massive amounts of data scraped from across the internet. Some of that data could be personal or sensitive information, and it can be used without the consent, compensation, or even the knowledge of the person to whom it belongs. Your interactions with chatbots may also be used to train the model. If you are chatting with a healthcare chatbot, do you know whether the information about your health that you enter will be used for training?

In addition to using the data you give them, many chatbots collect data linked to your identity to display third-party ads or to sell to other parties. Finally, don't assume that your chats go away when you close the app or finish a prompt. They may remain on a server, where they are at risk of being breached.

Make sure you investigate the privacy policies of any AI that you use. OpenAI, for example, may use your content to train its models. If you aren't comfortable with this, you need to opt out.


Information adapted from

Generative Artificial Intelligence and Data Privacy: A Primer. (2025, June 4). https://www.congress.gov/crs-product/R47569
