INSIGHTS · FREE ME UP AI
AI Governance for Australian Not-for-Profits: What Your Board Needs to Know
Published March 2026 · 8 min read
Someone in your organisation is already using AI. Maybe they told you. Maybe they didn't.
Staff members are using ChatGPT to draft grant applications, summarise meeting notes, and respond to donor enquiries. Volunteers are using Copilot to prepare reports. A program coordinator found a free AI tool online that's making their work faster. None of this was planned. None of it was governed. And the board may be the last to know.
This is the current reality for most Australian not-for-profits. AI adoption is happening from the ground up — and governance is struggling to catch up.
That gap is where risk lives.
Why This Is Now a Board-Level Responsibility
AI governance isn't an IT question. In Australia in 2026, it's a leadership and accountability question — and the board owns it.
The Australian Institute of Company Directors published AI governance guidance specifically for directors of SMEs and not-for-profits in 2024, noting that boards have a duty of care that extends to how AI is used within their organisations. The guidance makes clear that if boards don't govern AI use, they cannot discharge that duty.
The National AI Plan released in December 2025 includes specific provisions for not-for-profit adoption, with government consolidating support within the National AI Centre while raising expectations for responsible governance. And from December 2026, updates to the Privacy Act will require organisations to be able to explain how AI influences decisions about the people they serve.
None of this creates immediate legal liability for most NFPs. But it does create a clear expectation: boards are accountable for what AI does in their name.
"If boards and organisations do not govern AI use, then ethical decisions about its deployment are made by default" — by whoever happens to be using the tool at the time. (Corrs Chambers Westgarth, 2025)
The Three Questions Every NFP Board Should Be Asking
Most NFP boards don't need a 30-page AI policy. They need clear answers to three questions.
1. What AI tools are being used in our organisation right now?
This is the foundation. You cannot govern what you don't know about. Most organisations find, when they ask this question properly, that the answer is more extensive than expected — tools embedded in Microsoft 365, standalone subscriptions taken out by individual staff members, and free consumer tools being used for operational work.
The starting point is a simple AI tool register: a list of every AI tool used, who uses it, what for, and what data it handles. This doesn't require technical expertise to create — it requires asking the question and following up.
2. Is client or beneficiary data being handled appropriately?
The Australian Privacy Act applies to personal information used by AI — including information that AI systems infer or generate. For NFPs working with vulnerable populations, the stakes are particularly high: case notes, health information, family circumstances, and financial situations are all categories that require careful handling.
The OAIC is explicit: identifiable personal information should not be entered into public AI tools such as free-tier ChatGPT, Google Gemini, or consumer Copilot. Many NFP staff members don't know this. Some do know it and are doing it anyway because it saves time and they haven't been told not to.
The board's role is to ensure that clear guidance exists and is communicated — not to monitor every staff member's screen, but to ensure the organisation has taken reasonable steps.
3. Where does a human make the final decision?
AI should not make final decisions about client welfare, service access, risk assessment, or any outcome that materially affects a beneficiary's life. This isn't a legal requirement in most contexts yet — but it's a fundamental governance principle, and one that Australian regulators are moving toward requiring explicitly.
The board should know, for every significant AI use case in the organisation, where the human decision point is. If there isn't one — if AI output flows directly to action without human review — that's the risk to address first.
The Australian Regulatory Context in Plain English
Australia does not have a standalone AI Act. As of 2026, the approach is standards-led rather than legislation-led — meaning existing laws apply, new targeted guidance supplements them, and a mandatory framework may come later.
What this means practically for NFPs:
The Privacy Act 1988 applies to how AI handles personal information. NFPs that handle personal data of clients, donors, or volunteers need to understand how their AI tools process that data and whether it leaves Australian shores.
The Australian Government's Guidance for AI Adoption, published in October 2025, consolidates previous voluntary standards into six responsible AI practices covering governance, impact assessment, risk management, transparency, testing, and human oversight. It's explicitly aimed at SMEs and NFPs as well as larger organisations, and the foundational guidance is written for non-technical readers.
The National AI Centre's AI Adopt Program provides free consultations, training, and tools specifically for small businesses and not-for-profits adopting AI. This is a practical, funded resource that many eligible organisations don't know about.
Only around 30% of Australians believe the benefits of AI outweigh the risks, according to a 2025 study by the University of Melbourne and KPMG. For NFPs, whose reputation and community trust are core to their operating model, that public scepticism is a governance consideration as much as a regulatory one.
What Good NFP AI Governance Actually Looks Like
Good governance doesn't require a compliance team, a legal budget, or a technical expert on the board. It requires four things.
An AI tool register
A living document listing every AI tool used in the organisation, who uses it, what it's used for, what data it accesses, and whether it's been assessed. Updated at least annually. Takes half a day to create from scratch.
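For organisations that want a concrete starting point, the register described above can be as simple as a spreadsheet with one row per tool. The sketch below, which assumes nothing beyond the fields already listed (tool, who uses it, what for, what data it accesses, whether it's been assessed), generates a starter CSV; the example row and the filename are illustrative only.

```python
import csv

# Columns taken directly from the register described above.
COLUMNS = [
    "tool",            # name of the AI tool
    "used_by",         # who in the organisation uses it
    "used_for",        # what tasks it supports
    "data_accessed",   # categories of data it handles
    "assessed",        # yes/no: has it been reviewed?
    "last_reviewed",   # date of the most recent review
]

def create_register(path: str) -> None:
    """Write a starter AI tool register as a CSV file."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=COLUMNS)
        writer.writeheader()
        # Hypothetical example row showing the expected level of detail.
        writer.writerow({
            "tool": "Microsoft 365 Copilot",
            "used_by": "Program coordinators",
            "used_for": "Summarising meeting notes",
            "data_accessed": "Internal documents (no client data)",
            "assessed": "yes",
            "last_reviewed": "2026-03-01",
        })

create_register("ai_tool_register.csv")
```

A plain spreadsheet serves the same purpose; the point is the columns, not the tooling.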
A simple AI use policy
One or two pages. Written in plain English. Covers what staff can and can't use AI for, what data must never go into public AI tools, and who to ask when they're not sure. The Australian Government's Guidance for AI Adoption includes a free policy template. The board approves it. The CEO owns it operationally.
Clear human decision points
For every significant AI use case — summarising case notes, drafting client communications, analysing survey data — there is a documented step where a human reviews the output before it influences a decision or goes to a client. This doesn't need to be elaborate. It needs to exist.
A board conversation once a year
AI adoption moves quickly. The board should have a standing agenda item — not a full briefing, just a check-in — that covers: what tools are now in use, whether anything has changed in the regulatory environment, and whether the organisation's policy still reflects actual practice. One conversation per year, informed by a brief report from the CEO. That's it.
Good AI governance for an NFP isn't about becoming a technology organisation. It's about asking the right questions, once a year, with the right information in front of you.
The Specific Risks for NFPs Working with Vulnerable Populations
NFPs working in aged care, disability services, mental health, family services, child protection, or homelessness have additional considerations that general AI governance guidance doesn't always address.
Client data in these sectors is highly sensitive. The consequences of a data breach or an AI-influenced decision that harms a client are severe — for the client, for the organisation, and for community trust. First Nations data sovereignty adds another layer for organisations working with Aboriginal and Torres Strait Islander communities, where standard data governance principles may not be sufficient.
Trauma-informed practice requires that AI tools used in client-facing contexts — even indirectly, such as tools used to summarise case notes or draft communications — do not introduce bias, stereotyping, or assumptions that are inconsistent with person-centred practice.
The practical answer is the same as for any NFP: know what's being used, keep sensitive data out of public AI tools, ensure human review of anything that affects a client, and consult with your peak body or legal adviser if you're uncertain about a specific use case.
For how Free Me Up AI approaches these considerations across our NFP engagements, see our responsible AI and governance page and our AI automation for not-for-profits service.
Where to Start: A Board Action Plan
If your board is starting from scratch, this is a practical sequence:
- Ask the CEO for a brief on what AI tools are currently used in the organisation. Give them two weeks. If the answer is "I'm not sure", that's the starting point.
- Request a simple AI tool register — just a list, no technical detail needed.
- Check that your privacy policy covers how you handle personal data processed by AI tools.
- Adopt a simple AI use policy. Use the Australian Government's free template as a starting point, adapted for your context.
- Confirm that human review exists for any AI use case that affects clients or beneficiaries.
- Add AI governance as an annual board agenda item.
That's the full implementation for most NFPs. None of it requires external consultants or significant budget. It requires leadership attention and follow-through.
The Short Version
AI is already in your organisation. The question isn't whether to govern it — it's whether the governance is happening intentionally or by default.
The bar for good NFP AI governance in 2026 is practical, not onerous: know what's being used, protect client data, ensure human oversight of decisions that matter, and check in once a year. Most boards can get there in a single planning cycle.
The organisations that do this well will be better positioned when the regulatory environment tightens — and more trusted by the communities they serve in the meantime.
Need help building an AI governance framework for your NFP?
We work with Australian not-for-profits to implement AI safely, with governance built in from the start. Book a free 15-minute call and we'll tell you exactly where to begin.
Related reading
- AI Automation for Not-for-Profits and Advocacy Organisations
- Responsible AI and Governance
- How Australian Organisations Can Implement AI Safely, Ethically, and Effectively
- Reduce Admin as a Small Business Owner — What AI Can and Can't Do
Sources
[1] Australian Institute of Company Directors / Bird & Bird (2024) — AI Governance Guidance for directors of SMEs and NFPs; notes that approximately two-thirds of Australian organisations are using or planning to use AI technology, and directors must understand their governance responsibilities.
[2] Corrs Chambers Westgarth (2025) — 'Responsible AI Governance: Key Considerations for Australian Organisations' — 'If boards and organisations do not govern AI use, then ethical decisions about its deployment are made by default.'
[3] University of Melbourne and KPMG (2025) — Only 30% of Australians believe the benefits of AI outweigh its risks; approximately 78% expressed concern about negative outcomes from AI.
[4] Australian Government, Department of Industry, Science and Resources — National AI Plan (December 2025) — includes specific provisions for SME and NFP support within the National AI Centre and confirms government support for responsible AI adoption.
[5] Australian Government — Guidance for AI Adoption (October 2025) — replaces the Voluntary AI Safety Standard; provides six responsible AI practices with a foundational tier aimed at SMEs and NFPs.
[6] The Policy Place (2026) — 'AI Governance for Australian Nonprofits: Privacy, Risk and Compliance Guide' — confirms boards hold governance responsibility; personal and sensitive client information must not be entered into public AI tools.