Download the Guidance Document
The Generative Artificial Intelligence Task Force was created to develop clear, ethical policies and procedures to guide the use of generative artificial intelligence (AI) tools in teaching, learning, and research. This task force has developed guidance to support responsible AI use while maintaining academic integrity and innovation in accordance with the university mission.
The Generative AI Policy and Procedures Task Force will:
- Continue to Assess Current AI Use: Conduct a thorough review of how faculty and students are using generative AI in academic and research settings.
- Evaluate Best Practices: Identify AI policies at peer institutions and within relevant professional organizations to inform policy development.
- Develop Clear Guidelines: Create policies that address AI use in coursework, assessments, research, and administrative functions, ensuring alignment with academic integrity standards and ETSU’s mission.
- Address Ethical and Legal Considerations: Evaluate potential ethical, privacy, and copyright concerns associated with AI use and ensure compliance with applicable laws and accreditation standards.
- Engage Key Stakeholders: Collaborate with faculty, students, administrators, and external experts to ensure broad input and acceptance of the policies.
- Recommend Implementation Strategies: Establish procedures for policy dissemination, faculty training, and student awareness campaigns to support effective adoption.
- Provide Oversight Recommendations: Identify mechanisms for monitoring AI use and updating policies as technology and educational practices evolve.
The Task Force created the guidance that follows to address the ethical use of generative AI on ETSU's campus.
Written Guidance for Ethical Use of Generative AI at ETSU
This written guidance supplements the ETSU Board of Trustees Policy, Use of Artificial Intelligence Technologies for Instructional and Assignment Purposes (effective February 21, 2025). It provides guidance for the ethical, responsible, and compliant use of Generative AI across all university activities involving faculty, staff, students, and researchers. The Generative AI Task Force generally affirms the use of Generative AI on campus when it supports the work of faculty, staff, and students.
Guidance is organized into three primary sections:
- Core Principles, outlining essential standards and expectations for the ethical and compliant use of Generative AI.
- Implementation Procedures, offering practical examples and requirements tailored specifically for students, instructors, researchers, and staff.
- Governance, outlining university-provided resources, opportunities for ongoing training, and procedures addressing potential misuse of Generative AI technologies.
Core Principles
All affiliates of the university community who employ generative artificial intelligence (Gen AI) technologies shall do so ethically, respectfully, and responsibly, aligning Gen AI usage with the university’s core values of honesty, integrity, inclusivity, and trust. Misuse, including falsifying academic content or research data or findings, generating malicious or misleading information, or infringing on intellectual property rights, is strictly prohibited.
- Transparency and Disclosure
University community members who use generative artificial intelligence in academic, administrative, or research contexts shall disclose such usage transparently, in a manner that is both appropriate and clear. This includes attributing AI-generated content or substantial Gen AI contributions when such tools are used to support the completion of course assignments and submissions, scholarly publications, creative activities, and related decision-making processes.
- Privacy and Data Protection
All uses of generative artificial intelligence must comply with applicable laws and institutional policies governing data privacy, confidentiality, and security. Users are prohibited from inputting or otherwise exposing confidential or regulated data (including confidential institutional data, Human Subjects Data that requires informed consent for disclosure, and FERPA- or HIPAA-protected information) to unauthorized Gen AI platforms or in any manner inconsistent with university policies.
- Human Oversight and Accountability
Generative artificial intelligence technologies shall be used with appropriate human oversight. Individuals must carefully review and validate Gen AI outputs against their own knowledge and skills before incorporating them into educational, administrative, or research outcomes. The ultimate responsibility for verifying the accuracy, fairness, and integrity of AI-generated content or decisions remains with the human user, who is accountable for compliance with all university policies and applicable laws.
- Staying Current and Informed
University affiliates who employ generative artificial intelligence technologies shall maintain ongoing awareness of developments, best practices, and emerging guidelines in Gen AI. Users shall regularly update their knowledge and skills to ensure continued compliance with evolving ethical standards, university policies, and relevant laws.
Implementation Procedures
The following procedures provide practical guidance for students, instructors, researchers, and staff who engage with generative AI in the context of university work. Each example demonstrates how the Core Principles—transparency, privacy, human oversight, and staying informed—can be applied in specific university roles and responsibilities.
- Students (Course and Assignment Use)
- Follow Course Guidelines: Clearly review and adhere to the instructor’s specified parameters regarding AI use, as stated in the syllabus or assignment instructions. (Core Principle 1)
- Cite Generative AI: Whenever generative AI contributes substantially to assignment completion, clearly cite its use according to the instructor-specified format or standard citation practices provided by style guides. (Core Principle 1)
- Ensure Academic Integrity: Avoid submitting fully AI-generated content as original work unless explicitly permitted by the instructor. Submitting such work without clear permission constitutes academic misconduct. (Core Principles 1, 3)
- Research Compliance: When involved in research activities, students must obtain IRB approval and explicit consent for any human-subject data processed using AI. Data must be anonymized before it is uploaded to or analyzed by any Gen AI system. (Core Principle 2)
- Stay Informed: Regularly review available university resources to remain current on best practices for responsible AI use. (Core Principle 4)
- Instructors (Instruction & Assessment Use)
- Communicate Expectations Clearly: Include explicit statements about acceptable Gen AI usage in course syllabi and provide detailed guidelines for individual assignments to prevent ambiguity. (Core Principle 1)
- Cite and Review AI Contributions: Follow best practices when using generative AI to develop course materials, instructional content, or student feedback. Clearly cite any AI involvement and review all AI-generated material to ensure accuracy, appropriateness, and alignment with instructional goals. (Core Principles 1, 3)
- Responsible Use in Assessments: De-identify all student work before uploading it to Gen AI platforms for feedback or assessment purposes; an illustrative de-identification sketch appears after this list. Carefully review and correct AI-generated feedback to ensure accuracy and fairness. (Core Principles 2, 3)
- Investigate Misconduct Fairly: Use multiple evidence sources, including AI detection software, style analysis, and prompt logs, when investigating potential academic misconduct related to AI. Do not rely solely on Gen AI-detection scores, which have questionable validity. (Core Principle 3)
- Maintain Professional Development: Regularly consult institutional support materials and engage in institutional training to stay informed about emerging Gen AI practices relevant to instruction, assessment, and academic integrity. (Core Principle 4)
- Researchers (Data Collection, Analysis Use, & Creative Activities)
- Transparent Disclosure: Clearly disclose the specific generative AI tools used, their purpose, and methods of application within the Methods or Acknowledgments sections of research publications. AI should never be credited as an author nor used as a sole, undisclosed generator of creative works. (Core Principle 1)
- Protect Participant Data: Secure IRB approval and informed consent before utilizing generative AI to analyze any human-subject data. All personal identifiers must be removed prior to analysis or AI-based processing. (Core Principle 2)
- Bias and Accuracy Checks: Document all bias checks and verification procedures in lab notebooks or in public registries, such as the Open Science Framework (OSF), to transparently communicate limitations and maintain research integrity. (Core Principle 3)
- Continuous Learning: Regularly consult institutional support materials and participate in research-focused AI training, workshops, or seminars provided by the Office of Research and Sponsored Programs (ORSP) to remain up to date with ethical standards and best practices. (Core Principle 4)
- Staff (Administrative & Office Use)
- Cite Generative AI Use Transparently: When generative AI is used to create reports, communication documents, or any other professional products or work, clearly and appropriately disclose its use and verify outputs for accuracy and context. (Core Principles 1, 3)
- Secure Data Management: Use only AI platforms approved for handling confidential or regulated data; avoid public or unsecured tools. Always remove sensitive details, such as personal identifiers or protected financial information, before processing. (Core Principle 2)
- Quality Control and Verification: Thoroughly review any AI-generated documents or outputs for accuracy, style, and appropriateness before official dissemination or internal use. (Core Principle 3)
- Professional Development: Stay updated with institutional training and workshops to effectively integrate responsible AI use into administrative tasks, ensuring alignment with university policies and ethical standards. (Core Principle 4)
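Several of the procedures above require removing personal identifiers before any material is shared with a Gen AI tool. The following is a minimal, hypothetical sketch of that step in Python; the redaction patterns, the placeholder ID format, and the deidentify function are illustrative assumptions rather than an ETSU-provided tool, and pattern-based redaction alone (which, for example, does not catch names) does not by itself satisfy FERPA, HIPAA, or IRB requirements. Real workflows should rely on vetted de-identification procedures and human review.

```python
import re

# Hypothetical redaction patterns; a real workflow would use a vetted
# de-identification process and human review, not regex alone.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "student_id": re.compile(r"\bE\d{8}\b"),  # placeholder ID format, not an actual ETSU format
}

def deidentify(text: str) -> str:
    """Replace each pattern match with a bracketed tag such as [EMAIL]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

if __name__ == "__main__":
    sample = "Reach the student at jane.doe@example.edu or 423-555-0100 (ID E12345678)."
    print(deidentify(sample))
    # -> Reach the student at [EMAIL] or [PHONE] (ID [STUDENT_ID]).
```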
Governance
The AI Advisory Task Force will meet regularly to review this guidance in light of new legal developments, evolving institutional needs, and ongoing community input. Updates or recommendations will be documented and made publicly available. All revisions to the guidance or procedures will be timestamped and archived on the AI Task Force webpage to ensure transparency and version control.
Training and Resources
ETSU will provide regular training, consultations, and digital resources to support the responsible and informed use of AI across teaching, research, administration, and student support. Training materials and links to digital resources will be housed on the provost's official Gen AI guidance webpage.
Feedback
Faculty, staff, and students are encouraged to submit feedback or suggestions via the online form provided. Submissions raising urgent issues will prompt an out-of-cycle review.
Misconduct Procedures
Academic or research misconduct involving AI will be addressed through the university's existing procedures. AI detection tools may inform, but cannot solely determine, findings of misconduct. All allegations must be evaluated with human judgment and context-based review.
Methods, Responsible Officials, and Interpretation
The Generative AI Task Force, under the direction of the Office of the Provost, is responsible for implementing, reviewing, and revising this guidance. For questions about this guidance, please contact the Office of the Provost. The President, in conjunction with the Office of University Counsel, has the final authority to interpret this guidance. Generative artificial intelligence technologies were used to assist in the research, drafting, and organization of this guidance document. University personnel carefully reviewed and validated all generative AI content to ensure accuracy, relevance, and compliance with institutional values and policies.
Membership
The Task Force will be chaired by a senior faculty member with expertise in academic policy and emerging technologies and include representatives from Faculty Senate, SGA, Department Chairs, Academic Technology Services, General Counsel’s Office, Library, and the Center for Teaching Excellence. The Task Force will engage additional faculty, student, and staff representatives as needed to ensure diverse perspectives and expertise.
Current members include:
- David Atkins, Dean, ETSU Library
- Dr. Alison Barton, Director, Center for Teaching Excellence
- Dr. Brian Bennett, Chair, Department of Computing
- Milind Chaturvedi, ETSU Student
- Anthony Kiech, Director, Academic Technology Services
- Dr. Robert Pack, Executive Vice Provost
- Dr. Trena Paulus, Professor, Department of Sociology and Anthropology
- Dr. Melanie Richards, Director, School of Marketing and Media
Deliverables and Timeline
The Task Force will provide regular progress updates to the Provost through the Executive Vice Provost’s Office. Faculty and students will have an opportunity to provide input throughout the process. Final policies and procedures will be reviewed by ETSU leadership for approval and adoption by July 2025, in advance of the fall semester.
Updates
The Task Force initiated its work on March 3, 2025. This page was updated June 13, 2025.
To provide feedback and for more information about the work of the task force, email Dr. Rob Pack.