Guiding Principles for the Use of Generative Artificial Intelligence at SHEA

Generative AI has the potential to fundamentally change the way we work. Currently, we expect its greatest immediate impact at SHEA to be on how content (of all types) is created. The potential of these "assistive" technology tools is immense, yet we recognize that the technology also has shortcomings and must be used thoughtfully.

In the absence of national governance policies or oversight of AI, the community needs to engage in developing guiding principles for the best use of these new technologies. These principles help define SHEA's risk tolerance, ensuring that SHEA members, volunteers, and staff engage only with AI that satisfies rigorous standards: advancing health equity, prioritizing patient safety, respecting privacy standards, and limiting risk, so that we proceed with care in our use of AI.

These guiding principles are meant to help SHEA balance championing innovation and experimentation with safeguarding the integrity, privacy, transparency, and truthfulness of information as the technology evolves. SHEA encourages staff and members to learn about AI and encourages open dialogue on the role of AI for the organization itself and the communities we serve. This document outlines guiding principles that will be amended as we learn more about both the potential benefits and shortcomings of these new tools and the implications of their use.

What is Generative AI?

Generative AI refers to a category of AI models and techniques designed to generate new content, such as text, images, and other media (e.g., music, videos). These models are trained to recognize patterns and structures in data, after which they can generate new content that is similar in style or format. These AI tools are evolving rapidly without regulation or oversight, and many questions about AI remain, including:

  • Accuracy of generated content
  • Source transparency
  • Intellectual property rights
  • Potential bias(es)
  • Data/Information privacy

Like all technology, generative AI is a tool. Humans, specifically SHEA staff and volunteers, are responsible for the outcomes of their tools. For example, if autocorrect unintentionally changes a word and alters the meaning of something we wrote, we are still responsible for the text. Technology enables our work; it does not replace our judgment or excuse our accountability.

Guiding Principles

Principle #1 – Use of AI should be disclosed for transparency, unless its use is limited to grammatical corrections as part of the development of original documents.

As implementation of AI-enabled tools and systems increases, it is essential that use of AI at SHEA be transparent in order to promote trust. AI use disclosures should contribute to users' knowledge and not create unnecessary administrative burdens. When generative AI is used to create original content for a SHEA product or deliverable, or used as part of a decision-making process, that use should be disclosed and documented. When AI is used as a tool for editing, refining, reorganizing, or correcting human-generated copy for products such as society newsletters, emails, or similar communications, disclosure is not required, provided the product or deliverable is reviewed by a human prior to finalization and distribution. If AI is used to create content for presentations that may be interpreted as expert advice from the presenter, such as during SHEA-sponsored education, that use must be cited. AI tools or systems cannot augment, create, or otherwise generate records, communications, or other content on behalf of a staff member or volunteer without their consent and final review. While transparency does not necessarily ensure that AI-enabled tools generate accurate, secure, or fair content, it is difficult to establish trust if AI's use and role are hidden from the process.

Principle #2 – AI used as part of research should be disclosed in all articles or abstracts.

AI use must be declared and clearly explained in publications such as research papers, abstracts presented at meetings, and peer reviews. Currently, AI does not meet the requirements for authorship at SHEA's scientific journal publisher, Cambridge University Press, given the need for author accountability. AI and LLM tools may not be listed as an author on any scholarly work published by Cambridge. Of note, Cambridge has a plagiarism policy and uses software to detect plagiarism in its content; please reference the policy at https://www.cambridge.org/core/journals/flow/information/journal-policies/publishing-ethics. Scholarly works, including letters to the editor and opinion pieces, must be the author's own and should not present others' ideas, data, words, or other material without adequate citation and transparent referencing. Additionally, manuscripts under peer review should not be submitted to public large language models/AI tools due to confidentiality concerns. Breaching the confidentiality of the peer review process is a form of peer review misconduct and may be reported to authors' institutions. Authors are accountable for the accuracy, integrity, and originality of their research, including any use of AI.

Principle #3 – Members, staff, and presenters are accountable for the accuracy, integrity, and originality of their work.

AI is a tool, not a replacement for original thought and effort. When using AI, the content must be reviewed and often edited for clarity and customization. The end product remains the responsibility of the member or staff using the tool, and they are accountable for the accuracy, integrity, and originality of any use.

Principle #4 – Privacy and security are critical when using AI.

SHEA prioritizes robust measures to protect privacy and data security. When using AI in relation to SHEA business, do not reference personally identifiable information, or company or member information that is not generally available to the public. Per our existing policy, SHEA does not share with third parties any private information collected during the normal course of business. Recordings using AI must be made with the consent of everyone on the call and should be deleted once minutes or notes from the meeting are finalized, or three months after the meeting, whichever occurs first. SHEA members have the right to request that SHEA not use AI to take minutes for committees in which they participate.

Principle #5 – Members and staff are expected to follow all applicable rules and laws related to AI as they evolve.

Generative AI should only be used where appropriate policies follow applicable state and federal laws and regulations (e.g., a HIPAA-compliant Business Associate Agreement). SHEA prohibits the use of confidential, regulated, or proprietary information in prompts for generative AI. While there are currently no national policies or laws specific to AI, this landscape is evolving, and as regulations take effect, SHEA expects all members and staff to be in compliance.
