Using GenAI Tools in Administration

Provides tips and information for UC San Diego staff on the safe and compliant use of commercial Generative AI tools.

Introduction

With the rapid advancement of AI-driven platforms such as ChatGPT and Google Bard, there's growing enthusiasm within the UC San Diego community to leverage these tools and integrate them into our ecosystem. This guide advises UC San Diego staff on safe and compliant use, ensuring we do not compromise institutional, personal, or proprietary data. It does not cover acceptable use by faculty for pedagogical purposes; guidance tailored to faculty needs may be sought through the appropriate channels within the university.

Recognizing the dynamic nature of the AI field, this content will be updated regularly to align with the constantly evolving landscape.

What is Generative AI?

Generative Artificial Intelligence (GenAI) encompasses a range of technologies designed to create new, original content by leveraging extensive training on diverse datasets—ranging from text and music to images. Notable among these tools is ChatGPT, a chatbot equipped to engage users through natural language interactions. "Chat" signifies the user-friendly interface, while "GPT" (Generative Pre-trained Transformer) indicates the underlying machine learning architecture responsible for content generation.

GenAI tools like ChatGPT, DALL∙E, and Google Bard excel at generating text, music, images, and even computer code. These models undergo rigorous training on comprehensive datasets, which include an assortment of text from books, websites, and various other sources. By decoding complex patterns and linguistic nuances, they can produce content that is not only contextually appropriate but also grammatically accurate and stylistically coherent.

How Generative AI Can Help You

Administrative Assistance:
Automate routine communications like operational change reminders and policy updates, and summarize public-facing content for ease of access.

Coding and Web Development:
Draft code for common programming tasks to accelerate development; see the illustrative sketch after this list.

Event Planning:
Automate the creation of event descriptions, schedules, and promotional material for public calendars.

Image and Video Production:
Edit and create images, as well as voiceover tracks for videos, to elevate your media production.

Job Descriptions and Postings:
Use templates to suggest customized language for position overviews, key responsibilities, and qualifications. Review the language to ensure it is free from unintended biases, as biased wording may discourage certain groups from applying, potentially impacting the diversity of the applicant pool.

Language Translation:
Generate draft translations, then consult native speakers to confirm accuracy and appropriate tone.

Training & Onboarding:
Develop training materials and FAQs for new tools, and automate responses to common questions during staff training sessions.

Website and Communications Content:
Edit text for clarity and grammar, suggest optimal layouts, headlines, and meta descriptions, and draft content for course listings, prerequisites, or institutional information.
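
For the coding use case above, output from a GenAI tool should still be read, tested, and corrected before it is used. As a minimal sketch, the following is the kind of routine script such a tool might draft on request; the file name and column name (event_registrations.csv, department) are hypothetical placeholders, not references to any campus system.

# Hypothetical example of a routine script a GenAI tool might draft on request.
# The file name and column name are illustrative only; review and test any
# AI-generated code before running it against real institutional data.
import csv
from collections import Counter

def summarize_registrations(path: str) -> Counter:
    """Count registrations per department from a CSV with a 'department' column."""
    counts = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            dept = (row.get("department") or "Unknown").strip() or "Unknown"
            counts[dept] += 1
    return counts

if __name__ == "__main__":
    totals = summarize_registrations("event_registrations.csv")
    for dept, count in totals.most_common():
        print(f"{dept}: {count}")

Even for a small script like this, verify the logic and run it on test data before applying it to real institutional records, consistent with the precautions below.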

Guidelines for Protecting Institutional Information When Using Generative AI Tools

When using tools not covered by UC San Diego or University of California contracts, be careful with the institutional data you share. Do not use ChatGPT or similar tools for confidential or sensitive information.

As a standard practice, never share Personally Identifiable Information (PII), FERPA-protected student records, or data classified as P3 or P4 with any service provider without proper contractual safeguards. If you're uncertain about the data classification applicable to your specific scenario, consult the UCOP website for guidance on information classification.

As a user of commercial GenAI services, it's crucial to know your responsibilities. Both OpenAI, the company behind ChatGPT, and Google set clear restrictions on using their tools for fraudulent or illegal activities. For complete details, refer to OpenAI's usage policies and Google Bard's Privacy Notice.

Precautions

Errors and “Hallucinations”

When using generative AI tools like ChatGPT, Google Bard, and similar technologies for business purposes, be vigilant about "hallucinations": moments when the AI generates unverified or incorrect information. Generative AI is powerful, but it can produce false or misleading content. Verify all facts and figures independently through non-AI sources before incorporating them into university-related work. In other words, don't simply copy and paste what is produced into your work.

Bias

When using Large Language Models (LLMs) like ChatGPT within the UC San Diego community, it's important to recognize that these models may be trained on incomplete or biased data. Implicit and systemic biases can inadvertently be built into AI systems, and using outputs in a way that amplifies those biases runs counter to UC San Diego's shared institutional values of diversity, equity, and inclusion. Learn more about bias, inclusion, and belonging through UC San Diego HR courses on these topics or through the resources offered by the Office for Equity, Diversity, and Inclusion.

Updates and Revisions

This guidance document will be revised regularly to address changes in technology, legislation, and institutional policies. More information will be forthcoming about how UC San Diego is harnessing AI technologies within the security of our own technology infrastructure. 

Need help? Contact the ITS Service Desk, (858) 246-4357.