AI Guidelines for Marketing and Communications
Introduction, scope of guidelines, and relevant definitions
These guidelines recognize the transformative potential of artificial intelligence (“AI”) technologies to enhance operational efficiency, creativity, innovation, and decision-making in the marketing and communications functions at Stanford. At the same time, these tools necessitate careful usage to ensure responsible, legal, and ethical implementation. These guidelines outline the acceptable uses and expectations for the use of AI, fully supporting the use of these technologies while ensuring such activities are conducted in the best traditions of the university.
As such, these guidelines build on Stanford’s philosophy that the use of AI is centered on improving the human condition and that AI should be used to augment, not replace, human work. In this context, the university expects that content generation using AI will be done ethically and with human oversight.
These guidelines apply to all regular staff, interns, casual employees, and consultants. Within the context of this document, “AI” and “generative AI” mean software and/or machines that can create new content, such as text, images, video, and other rich media. (Examples include, but are not limited to, ChatGPT, DALL-E, Gemini, and Claude.) Additionally, any software and/or machines that leverage existing data and use that knowledge to generate new, original content should also be considered within this scope.
Specific guidance for a select number of common use cases is also outlined below. The absence of a use case should not, however, be read as a prohibition of that use. The bias of these guidelines is toward thoughtful experimentation and continuous growth.
Providing guidance on how to use AI tools is not within the scope of these guidelines. Employees interested in a primer in this area should consult the Stanford AI Playground.
Finally, these guidelines are intended to complement existing university policy. If there is a conflict between guidance in this document and a policy in the Administrative Guide, then the Administrative Guide controls.
Responsible and ethical usage
Consistent with the Report of the AI at Stanford Advisory Committee (which is incorporated herein by reference and paraphrased below for ease of reference), there are seven key guiding principles that employees in the marketing and communications functions should strive to uphold in their use of AI tools:
- Human oversight: Humans must take responsibility for the AI tools and systems they use at Stanford. You are responsible for oversight of any content you produce using AI to ensure that content is accurate, in alignment with institutional values, and in compliance with the policies set forth in this document.
- You are personally responsible for conforming to this requirement and this responsibility may not be delegated to another employee.
- Alignment with university values: AI systems should be built and/or procured to support Stanford’s mission and values.
- Human professionalism: All employees in communications and marketing should adhere to professional standards and a high level of quality in their work. Moreover, they should exercise their best judgment and critical thinking in the use of these tools.
- Ethical and safe deployment: These tools should improve university functions. Employees should understand the full implications of an AI system prior to deploying it in their functional area. If the system is not fully understood, an evaluation of the system should be undertaken prior to its implementation.
- Privacy, security, and confidentiality: When using AI tools that involve personal data, the legality and impact of the application should be carefully considered prior to implementation. Some uses of data — such as those involving medical records, privileged information, or student or employee records — require express consent.
- Please review the university risk classifications for assistance in determining what kind of data you may be working with.
- As of this writing, the Stanford AI Playground is not approved for high-risk data.
- You may not provide any confidential or legally privileged information of Stanford or a third party to generative AI tools.
- Data quality and control: All data used to create new AI applications with university resources should be collected in legal and ethical ways and you should document the provenance of any data you use with these tools.
- An AI “golden rule”: As stated in the committee report, you should “use or share AI outputs as you would have others use or share output with you.”
Readers of these guidelines are strongly encouraged to familiarize themselves with the full content of the committee report.
Compliance and intellectual property considerations
The following guidelines apply to all uses of AI technology without limitation:
- Irrespective of the application, you must adhere to the University Code of Conduct, Information Security, and Privacy Policies when using AI technologies.
- As in other contexts, you may not use AI tools to promote for-profit organizations, engage in commercial activities, or provide explicit or implicit commercial endorsements.
- Please carefully review the terms of use (“TOU”) for any model you use. Many common TOUs include language that grants the provider the right to use your name and Stanford’s name in their marketing activities, which is not permitted under Administrative Guide 1.5.4.
- Similarly, you may not use an AI to advocate on behalf of Stanford for any political position or political party, consistent with Administrative Guide 1.5.1.
- Do not use high-risk data in your prompts or include such data as attachments to your prompts.
- Examples of such information include, but are not limited to, protected health information (PHI), student records, donor information, employee information, home addresses, and social security numbers. You should familiarize yourself with these risk classifications before providing data to an AI tool.
- Additionally, exercise good judgment and extreme caution when providing even moderate-risk or low-risk data to a model, whether as a prompt, as an attachment, through an API, or otherwise.
- Ensure that your usage of AI complies with intellectual property rights and laws such as copyright, trademark, and trade secrecy protections. Use of third-party copyrighted or trademarked material or use of a person’s likeness without permission in interacting with an AI model may be illegal and could expose Stanford to significant financial liability.
- Similarly, you are responsible for ensuring that any generative model you use has obtained, and provides to the university, a license for any outputs from that model.
Continuous evaluation
Given the rapidly changing nature of this field, you are encouraged to regularly review and assess the models you are using to ensure they are meeting expectations and these guidelines.
All employees communicating on behalf of Stanford are encouraged to engage in a regular discussion of their AI practices with colleagues and, where appropriate, their supervisors. In the initial phase of any trials, experimentation, or evaluations, you should consider discussing with your supervisor(s) how you are using AI, how you plan to validate its output, and any additional AI-related activities you may plan to undertake in your work. This allows both parties to be aware of the usage, to discuss it openly, and to ensure alignment on the application.
Recommended platform: Stanford AI Playground
The Stanford AI Playground offers an excellent opportunity to trial a variety of models in a safe environment. We strongly recommend exploring your use cases in this environment, to the greatest extent possible, as you begin to leverage these technologies. The AI Playground includes various large language models (“LLMs”), a form of generative AI specializing in text-based content. Users can also access additional LLM plugins for image generation, web scraping, and AI-assisted Google services.
Moreover, Stanford’s AI Playground supports data privacy by taking advantage of Stanford’s infrastructure and vendor partnerships. Files uploaded to the Playground are not shared externally or used to train models. More information is available in a January 2025 Stanford Report article and in the AI Playground Quick Start Guide.
Specific use cases in the communications and marketing functions
Following is guidance for a number of common use cases in the marketing and communications functions. (As noted above, the absence of a use case should not be construed as a prohibition on that activity.) All applications, however, should conform to the principles, as well as the legal and policy considerations, outlined above.
- Writing and/or research for news articles (including Stanford Report), press releases, institutional statements, and similar material: When using AI tools for drafting written content, you are responsible for: (i) carefully reviewing and fact-checking the output; (ii) ensuring the finished product complies with these guidelines and other university policy; (iii) ensuring the relevant tool does not borrow text verbatim from other sources; and (iv) disclosing the presence of an AI in the drafting and/or research process to the relevant editor.
- Drafting statements: When using AI tools for brainstorming or outlining statements, you are responsible for: (i) carefully reviewing and fact-checking AI output; (ii) ensuring the finished product complies with these guidelines and other university policy; (iii) disclosing the presence of an AI in the drafting and/or research process to the relevant editor; and (iv) disclosing the presence of an AI in that statement to the Vice President for University Communications.
- Image generation and enhancement:
- Enhancement: Many image and video editing tools now employ AI models to assist with tasks such as color grading and/or noise reduction. These are generally acceptable applications of an AI tool, so long as the usage conforms to the guidelines above and does not substantially alter the authenticity of an image. If scientific data is being presented, any enhancement(s) should be reviewed with the scientists who generated the data to ensure they are acceptable.
- Generation: In the case of image generation, any such activities should be consistent with the intellectual property guidelines above and you should credit/indicate the model that created the image in any photo credit line. As detailed above, if scientific data is being used to generate an image, the output should be carefully reviewed with the scientists who generated the data to be sure that it is acceptable.
- Social media: Some general considerations in using AI tools for the following activities:
- Monitoring: When using AI models, consistent with these guidelines, to monitor and summarize social media sentiment, you are encouraged to validate the output against a random sample of the underlying data.
- Writing: When using an AI as an authoring partner, you are responsible for confirming accuracy of its output and its appropriateness for your audience.
- Marketing: Again, some general considerations:
- Segment and personalize: When using an AI to develop audience insights, segment audiences, and deliver personalized content to those audiences, you must ensure that all applications remain in compliance with these guidelines.
- Chatbots and other search agents: Chatbots that utilize AI models to retrieve and present content to audiences (for example, via a retrieval-augmented generation approach) may be deployed only with approval from your unit head of marketing and/or communications.
- Other activities:
- Presentation preparation: Using AI tools (such as those built into common presentation tools) to enhance the efficiency and effectiveness of presentations should also remain consistent with these guidelines.
- Event transcription: When using transcription tools for live public events or livestreamed events, you are responsible for reviewing the output for accuracy. If you use an AI tool to transcribe a meeting, you are responsible for ensuring: (i) its use conforms fully to these guidelines; (ii) the transcripts are not used to train non-Stanford models; and (iii) the data is retained consistent with university policy.
Specific guideline for Stanford Report content
You may use AI tools to help produce content bound for publication in Stanford Report. The following are permitted uses:
- Post-production work, such as enhancing images, optimizing image quality, and retouching minor sections of an image. Generated images will be accepted at the discretion of the Stanford Report editorial team.
- Translation and captioning of accompanying video and/or motion assets. All captions should be reviewed by a human for accuracy.
- As a partner in drafting portions of written editorial content, including:
- Brainstorming and suggesting questions for an interview.
- Proposing an outline for an article.
- Suggesting refinements to a narrative for clarity or consistency.
- Researching the subject of the content.
- Capturing and transcribing interview notes.
- Preliminary editing to reduce grammatical, spelling, or other typographical errors.
In these cases, a resulting content product may be submitted to the editors of Stanford Report without prior consultation, but the author must disclose the use of an AI to the receiving editors in University Communications. Similarly, editors involved in the review of material prior to submission to University Communications must adhere to the same standards. The author(s), and not the AI, are responsible for reviewing and ensuring the accuracy of any images, content, or research in which an AI was utilized.
We recognize that a wider range of applications and use cases may apply to producing content for Stanford Report than is indicated above. In those cases, the author(s) of the content should consult with the editors of Stanford Report prior to submitting the content for evaluation.
Please note that compliance with these guidelines does not guarantee placement in Stanford Report. All other editorial standards and content considerations still apply. University Communications exercises discretion in determining which content appears in Stanford Report. This process involves rigorous editorial review during which a variety of factors, including university policies, are carefully considered. As a result, not all content suggested for publication, whether developed using an AI or not, is accepted.
Concluding remarks
This is a rapidly developing area where all employees will benefit from continuous learning and experimentation. We expect that the scope of application will soon exceed what is enumerated in these guidelines. While we will make every effort to keep them current, as you encounter novel use cases, please share those with your colleagues for their own learning and development.
These guidelines were drafted in collaboration and consultation with colleagues from University IT, the Graduate School of Education, the School of Engineering, the Office of Development, the Graduate School of Business, the Stanford Institute for Human-Centered Artificial Intelligence, and University Communications. The designated point of contact is John Stafford, Assistant Vice President for Marketing & Digital Strategy. Please contact John with any questions, concerns, or suggestions you may have.
Consistent with these guidelines, gpt-4o-mini and the Google plug-in on Stanford’s AI Playground were used to conduct research related to the drafting of these guidelines.