Need an AI policy? Ask these questions before you start.
By Danielle Mellema, Chief Ghostwriter, Digital Content Manager
For better or worse (or somewhere in between), artificial intelligence (AI) is here to stay. The capabilities of this technology have evolved so rapidly that many teams and organizations haven’t had the opportunity to do much more than a gut check in response. Whether you or your organization are more inclined toward enthusiasm, reluctance, fear, or ambivalence about the rise of AI, a more thoughtful approach is needed to navigate these waves of change in a way that keeps you anchored to your mission and values.
No more figuring it out on a case-by-case basis: It's time to create an AI policy.
This doesn’t mean you can’t use discernment in each situation—with AI technology’s rate of change, you’ll constantly be exercising discernment! Instead, creating a policy for AI usage gives you or your organization the opportunity to consider what boundaries best align with your values and your unique place in the market. It also gives clarity to everyone in the organization as well as those you work with (such as clients, patients, or third-party entities) so they can understand how AI does (or doesn’t) align with your goals, how you will (or won’t) be using it, and what you expect from others.
Whether you are drafting a policy for your own professional work, for your team, or for your organization as a whole, here are a few questions to ask as you begin:
What are my guiding values?
As with so many other decisions organizations must make, your mission, vision, and values must direct any choices about AI usage. Ask: Do our core beliefs align with the value that a particular AI technology offers? What is unique about our organization? What sets us apart from similar organizations? What ethical commitments do we already hold that might inform our AI usage? How can AI help us reach our goals? How might it diminish our ability to reach our goals or remain committed to quality?
Remember: If you are the one tasked with writing an AI usage policy for your organization, the level of AI usage you are comfortable with personally might not be the same as what makes sense for your organization. The guiding light must be what best matches your organization’s values.
What will I commit to?
After considering your personal values or those of your organization, you will have the foundation you need to articulate your commitment to those you serve, such as patients or clients. What is your “why” behind the way you are using AI? In what situations will you use AI (e.g., plagiarism screenings, basic research)? What steps will you take to confirm the accuracy of any information AI provides? In what situations are you committing not to use AI? While the technology is constantly changing, be as specific as you can about the types of uses you do and do not intend to engage in so the policy is clear.
What do I expect of others?
You can’t control the AI usage of others. But it’s wise to be clear about what level of AI engagement you expect of those you work with or serve so you can produce quality work, serve with integrity, and uphold the values of your organization. What expectations do you have of your team members, clients, patients, and others regarding their use of AI? In what areas or tasks would you steer them away from AI? Why?
How will I engage with outside organizations?
When you are considering which organizations, vendors, applications, and other outside parties to contract with, how much of a factor is their use of AI technologies? Do you plan to ask every outside organization about its AI usage as part of your decision-making process? What level of responsibility are you willing to assume toward your patients or clients for the AI usage of those outside organizations? This is the place to outline what due diligence looks like for your organization so you can deliver a service or finished product with the level of AI involvement you have committed to.