Use Caution With ChatGPT and Other AI Services
As many of you are aware, AI has become a very popular and interesting topic. With the introduction of ChatGPT and other AI services, there is justifiably a great deal of fascination with the technology. That said, as with any technology, our duty to protect and preserve the data we use to perform our work does not change. A recent incident in which Samsung employees disclosed sensitive data to ChatGPT offers valuable lessons for all of us.
The UT System does not have a legal agreement with any AI developer that provides assurance of data confidentiality. Putting data into ChatGPT or similar services is therefore equivalent to disclosing the data to the public, and we must apply the same data-sharing precautions to this new technology that we use every day. Specifically, the following information should not be placed into any AI service:
Any data whose disclosure to the public would be considered a breach under FERPA, HIPAA, PCI DSS, GLBA, or any other federal or state statute.
Examples include (this list is not exhaustive):
- Credit Card Numbers
- Personally identifiable medical information
- Financial Aid information
- Student names and grades
Additionally, great caution is suggested with the following information:
- Research data/Intellectual Property
- Source code
- Proprietary data
- Internal meeting notes
- Hardware-related information
- Presentation notes and emails
While generative AI may prove to be a valuable tool, our use of it is limited by our control over how data is stored and accessed. Please be cognizant of our data stewardship responsibilities as you explore these new technologies and their capabilities.
For additional information on ChatGPT and generative AI, please refer to the following resource page provided by the Walker Center for Teaching and Learning: Resources on ChatGPT and Generative AI
Thank you for your diligence!
The University of Tennessee at Chattanooga
(423)425-4000 | firstname.lastname@example.org