Policy No.: '''8020'''

Effective Date: 03/07/2025

Revised Date: 06/04/2025

Reviewed Date: 06/04/2025

'''<big>UNMC AI Use Guidelines</big>'''

== Guidelines ==
1. Per [https://nebraska.edu/offices-policies/policies/no-42-policy-on-risk-classification-and-minimum-security-standards Executive Memorandum No. 42, Policy on Risk Classification and Minimum Security Standards], only public or Low Risk Data may be used with AI tools—unless a legal enterprise agreement and confidentiality agreement have been established with the third party ''and'' the required assessment process has been completed, or the tool is listed on the UNMC Approved Technology List.

2. Be mindful about entering sensitive information into AI tools. External generative AI tools incorporate every user interaction into their model, including the prompts, data, and reactions you supply. Any information entered into external generative AI tools is considered public and may be stored and used by anyone else. UNMC employees and students are expected to:

* Not enter confidential, proprietary, or patient-related information that is subject to federal or state regulations or otherwise considered sensitive or restricted. Follow the UNMC policy on [https://wiki.unmc.edu/index.php/Privacy/Confidentiality Privacy, Confidentiality and Security of Patient and Proprietary Information Policy] and applicable privacy laws (a purely illustrative screening sketch follows this list).
* Follow the University of Nebraska’s [https://nebraska.edu/-/media/unca/docs/offices-and-policies/policies/executive-memorandum/policy-for-responsible-use-of-university-computers-and-information-systems.pdf Policy for Responsible Use of University Computers and Information Systems], [https://nebraska.edu/-/media/unca/docs/offices-and-policies/policies/executive-memorandum/policy-on-research-and-data-security.pdf Policy on Research Data and Security], and [https://nebraska.edu/-/media/unca/docs/offices-and-policies/policies/executive-memorandum/policy-on-risk-classification-and-minimum-security-standards.pdf Policy on Risk Classification and Minimum Security Standards].

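Purely as an illustration of this guideline (and not as a UNMC-provided or UNMC-approved control), the sketch below shows one way a user or developer might flag obvious identifiers in a draft prompt before it is pasted into an external generative AI tool. The pattern list, names, and example text are assumptions for demonstration only; passing such a check does not make data public or Low Risk and does not replace the policies referenced above.

<syntaxhighlight lang="python">
# Illustrative sketch only: flag text that may contain sensitive identifiers
# before it is pasted into an external generative AI tool. These patterns are
# simple examples and will not catch every form of restricted data.
import re

# Hypothetical example patterns; confidential or patient data takes many more forms.
SENSITIVE_PATTERNS = {
    "possible SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "possible email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "possible medical record number": re.compile(r"\bMRN[:#]?\s*\d{6,10}\b", re.IGNORECASE),
    "possible date of birth": re.compile(r"\bDOB[:#]?\s*\d{1,2}/\d{1,2}/\d{2,4}\b", re.IGNORECASE),
}


def flag_sensitive_text(prompt: str) -> list[str]:
    """Return the labels of any patterns found; an empty list means no obvious match."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]


if __name__ == "__main__":
    draft = "Summarize the visit note for MRN: 00123456, DOB: 4/2/1987."
    findings = flag_sensitive_text(draft)
    if findings:
        print("Do not submit; remove or de-identify:", ", ".join(findings))
    else:
        print("No obvious identifiers found; policy and professional judgment still apply.")
</syntaxhighlight>

If a draft trips a check like this, the safer course under these guidelines is to remove or de-identify the content, or to use a tool on the UNMC Approved Technology List instead.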

3. All UNMC users are accountable for their academic or professional work, regardless of the tools used to produce it. When using generative AI tools, users should always verify the information produced for errors and biases and exercise caution to avoid copyright infringement.

4. Employees and students must maintain current awareness of ethical and responsible use of AI in research and creative activities by regularly reviewing university policies and relevant guidelines from funding agencies.

5. Before entering into agreements with vendors, subcontractors, or collaborators, it is important to inquire about any potential use of AI. Any new solution involving AI—or any addition of AI to an existing solution—must go through the risk assessment process, as outlined in [https://info.unmc.edu/its-security/policies/procedures/imriskassessment.html UNMC/Nebraska Medicine IM #63: Risk Assessment Policy]. To ensure responsible and ethical use of AI in line with these guidelines, additional terms and conditions may be required in current or future agreements.

6. Federal funding agencies prohibit the use of AI tools during the peer-review process. The National Institutes of Health (NIH), in its discussion of AI in peer review, explains that using AI in the peer-review process is a breach of confidentiality, since peer review is confidential and these tools “have no guarantee of where data are being sent, saved, viewed or used in the future.” The National Science Foundation (NSF) shares guidelines for declaring the use of AI in proposals and explicitly prohibits the use of AI in the NSF merit review process.

=== For Everyone ===
If you are uncertain about the approval status of a particular AI tool or require guidance on its appropriate use, please refer to the [https://nebraskamed.service-now.com/kb_view.do?sysparm_article=KB0012191 UNMC Approved Technology List]. If still uncertain, please reach out to Information Technology Services (ITS) with any questions.

== Use for Research ==