AI usage policies have become the norm as businesses across industries adopt AI technologies in hopes of enhancing productivity and staying competitive. Many companies are now revisiting and updating their AI usage policies, often making them more permissive while still aiming to meet applicable transparency requirements.
In this post we present a structural overview of a typical AI usage policy and discuss current trends and concepts for these policies.
Purpose and Scope of Policy
AI usage policies often start with a broad statement encompassing the policy’s purpose and the scope of the policy’s applicability. For example, the policy may apply to any AI use, generative AI use, or use with respect to certain tasks or departments, and it could apply to employees, customers, third-party users such as contractors or providers, or some combination of these individuals.
Some common purposes of implementing an AI usage or responsible use policy include the following:
- Risk Mitigation: Educate users on responsible usage and mitigate legal, financial, or reputational risk as well as maintain compliance with applicable laws and regulations.
- Quality Control: Implement appropriate systematic review processes (e.g., human verification) to verify AI-generated outputs for reliability, accuracy, and quality.
- Data Governance: Provide guidelines and requirements for data governance and protecting sensitive and personal data, respecting users’ privacy and maintaining compliance with privacy laws and regulations.
- Operational Consistency: Standardize how AI is used across the company for consistent monitoring, operations, and decision-making processes.
- Clarity and Strategy: Provide a framework to align AI initiatives with the company’s overall values, strategy and goals.
Appropriate Uses and Guidance
A critical component of an AI usage policy is a detailed description of permissible, appropriate uses of AI, clearly distinguished from inappropriate or prohibited uses; concrete examples of each can be helpful. Common tasks for which AI can be useful and appropriate include generating or predicting language and suggesting grammatical or stylistic improvements to writing.
It is important to clarify that any AI use remains subject to company policies (and restrictions within the AI policy itself) relating to confidentiality, noninfringement, and verification. Some companies also maintain a catalogue of “approved” AI tools, listing each tool and its corresponding permitted uses, which is referenced in the policy and can be updated as new AI tools continue to emerge.
We have recently seen AI usage policies become more permissive, broadening the scope of work for which AI may be used. Businesses also appear to be growing more comfortable with generative AI and large language model tools (GenAI) under appropriate circumstances, where such use may previously have been prohibited.
Some AI policies include additional guidelines or restrictions in the appropriate use section that specifically apply to GenAI:
- Users must comply with the confidentiality obligations in their employment contracts as well as the company code of ethics. Users must complete any AI training provided by the company prior to use.
- Users should be able and ready to explain how the GenAI tools are used and how the output is generated, verified, and used (to the extent practical).
- Users may only use data with GenAI tools that is legally obtained and used with the necessary consents and permissions.
- Users must immediately report any security incidents or suspected breaches to the appropriate company security contact.
Disclosure of Use Requirements and Approval Processes
AI-generated content cannot always be reliably detected, even with third-party detection tools, so it is critical for companies to inform users of any disclosure requirements applicable to AI-generated work. The scope and detail of required disclosure may vary by industry and may be more stringent in industries subject to heightened privacy regulation.
Companies take different positions on internal and external disclosure of AI use: some require internal disclosure to the appropriate supervisor and approach external (customer or client) disclosure on a case-by-case basis. Depending on industry practice or company policy, AI-generated content incorporated into external client work product may require the client’s prior consent or approval.
It is also important to note that additional disclosure requirements may appear in the AI provider’s applicable terms and conditions, and the policy should advise employees to review those terms. In any case, the policy should give employees clear disclosure requirements and contact information for reporting their AI usage, as well as a process for asking questions about disclosure or seeking approval for any AI use.
Prohibited Uses
Similar to outlining appropriate uses of AI, AI policies also generally inform users of inappropriate or prohibited uses of AI. Some common prohibitions include:
- Do not use customer data with GenAI tools without the customer’s written consent.
- Do not use any confidential, proprietary, or restricted company data or information with any AI tools where such data could be used to train the AI model or could be accessed by a third party as an output of the AI tool.
- Do not use personal accounts with AI tools for business-related purposes.
- Do not use GenAI tools to develop or create any invention or proprietary work product without sufficient human involvement, verification, and judgment.
- Do not use personally identifiable information with GenAI tools.
Another trend in AI policies is to include a notice and restriction regarding the use of public AI tools. Because information submitted to a public AI tool may be treated as a public disclosure, users should submit only fully public data to such tools, in compliance with the company’s existing data security protocols, to avoid violating confidentiality or privacy obligations.
Users should also be made aware that their inputs may be used by the AI provider for further model training and could surface in outputs shown to other users, and that copying third-party content into an AI tool may constitute intellectual property infringement.
Other Restrictions, Requirements, and Things to Keep Front of Mind
AI policies generally also incorporate any required steps users must take, or issues they should be aware of, when using AI. Commonly, AI policies warn users that AI can produce inaccurate, incomplete, or biased results, and that users should therefore not assume output is complete, correct, or thorough.
Many companies’ AI policies require a manual human verification, review, and documentation process, noting that users are ultimately responsible for all content produced with AI assistance. A common theme in AI policies is that AI tools should not be used as a replacement for human expertise and creativity.