
Your organization's AI policy: 3 questions to ask yourself now!

  • Mar 16
  • 3 min read

A client from a large nonprofit recently asked me, “What is better: to have a list of pre-approved AI tools or not?” 


An approved list gives you a definitive directory of the AI tools people may use within your organization, complete with usage guidelines. It also means you need a person or a team who can evaluate and vet the tools, and provide clear guidance wherever there are restrictions on how employees may use them.



It seems like a clear “yes”... right?


But what if many of your employees are ultra-specialized, and each would like to use a few tools that are not part of the suites your organization already pays for? Then it becomes a little more complex than vetting Copilot usage for your office.


If you have worked with me in the past, you know my general answer to this question: “There is no one-size-fits-all when it comes to internal AI policy.”

It depends on various factors: what AI systems your organization currently offers, your internal capacity to run a vetting process, and your risk tolerance.

The Home Base Strategy: Start with What You Own

It might sound counter-intuitive, but for a small organization, restricting your team to 3–5 pre-approved AI tools will actually make you more productive. For many organizations, the safest way to innovate is to play in the sandbox you already have. This works best for organizations that already pay for access to AI tools that are widely available to their staff, such as Copilot, Gemini, or a CRM/HR system with an AI integration.


For you, the main question is: Can our existing Home Base suites meet our organization's needs?

Before looking externally, have you mapped out the AI features already integrated into your current systems? Playing in the “sandbox you already own” is a reliable way to fulfill your mission without added risk. If that is the case, you can create a pre-vetted list and develop an agile process to vet new tools as opportunities arise.


Your team must always return to the basics and ask themselves: Are we adopting a tool just because it’s new, or because it demonstrably furthers our mission? Don’t be distracted by shiny new things if they don’t truly serve your purpose.


Diagnostic Questions: Do You Actually Need a List?

If you are still not certain that a pre-approved list is the way to go, ask yourself these diagnostic questions:

  1. How diverse are our technical needs? If 90% of your staff only needs AI for basic drafting and summaries, a list is perfect. But if your teams are “ultra-specialized”—using AI for niche coding, specialized research, or creative production—a rigid list may stifle the very innovation you need. You will need a clear process for approving new tools.

  2. What is the risk related to your data? Does your staff handle high-stakes PII (Personally Identifiable Information), sensitive donor data, or beneficiary files? If the answer is yes, a pre-vetted list provides the guardrails necessary to prevent accidental data leakage into unsecured “free” versions of AI.

  3. Do we have the capacity for “Centralized Oversight”? An approved list requires someone, or a team, to address inquiries about accepting new tools. If you don't have the budget or staff to manage these requests, a list could eventually lead to “shadow IT”, where employees use unapproved tools on personal devices.


If a rigid list isn't the right fit, you can still maintain Trust and Transparency within your AI policy by clearly identifying:

  • Use-Case Norms: Instead of listing tools, identify which tasks are permitted or prohibited. For example, you might allow AI for “brainstorming and drafting” but strictly prohibit its use for “high-touch beneficiary intake”.

  • Standardized Guardrails: Rather than vetting every tool, enforce “non-negotiable safety features” that apply to any AI usage. This includes a strict “no-free-tier” rule for sensitive data and a mandatory Human-in-the-Loop model to prevent “wholesale” usage of AI outputs.


No matter what route you choose, staff AI literacy is the heartbeat of a successful implementation. 

Don't just give your team a list of “you shall nots”. Give them the skills they need to use the tools optimally and to understand AI fundamentals like ethical dilemmas, bias, and hallucinations.


Virage offers an in-house course for your staff: in less than two hours, we give your entire office a solid overview of AI fundamentals. Contact us to explore the options we can offer your business or organization.


We also have two upcoming webinars in mid-April (in French and English) to help you start building your organization's AI policy. Sign up!


AI WORKSHOP FOR NONPROFIT LEADERS
Register Now
