
Adopting Responsible AI in business
This is the second article in our series on artificial intelligence (AI) concepts. This article presents a checklist for businesses considering AI adoption. If you are new to AI, the first article introduces AI and its various fields, development methods and usage applications.
AI adoption checklist for businesses
An MIT study found that over 95% of generative AI pilots fail to deliver measurable returns due to poor strategic choices, poor data quality, or subpar integrations (Snyder, 2025). AI adoption is not a matter of acquiring new technology; it is about having a clear strategic plan that balances opportunity with responsibility, risk, and long-term sustainability.
To succeed, businesses should adopt a responsible AI adoption strategy in which every AI project clearly establishes its purpose, its cost-benefit case, and its diverse stakeholders. The AI system should be designed with a proactive security-by-design and privacy-by-design approach, using high-quality data with clear ownership and safeguards. Any AI output is clearly labelled, and for AI systems involved in decision-making, a human reviewer is always involved.
We have developed this convenient checklist based on the Responsible Artificial Intelligence Guidance for Businesses (Ministry of Business, Innovation and Employment [MBIE], 2025). You can download the checklist PDF here.
Why AI? Define the purpose
Answer 'WHY' your business wants to use AI. What goals are you trying to achieve: a new product or service offering, automating repetitive tasks to improve productivity, or data analysis to aid decision-making?

Know your environment, its strengths and limitations
Conduct AI-preparedness due diligence by reviewing your existing technology infrastructure:
- Will AI integrate with your existing IT applications and data? What upgrades, if any, may be needed?
- Do you have AI skills in-house? What training may your staff need to operate and maintain the system?
- Is the AI system for internal enterprise use or public use?
- Do you have test environments in which to trial the AI system?

Evaluate your suppliers
Which provider's AI model are you going to use?
- Which AI model will suit your usage scenario, and who are the providers of that model?
- When selecting a model and an implementation partner, perform due diligence on partner capabilities, including background reference checks.
- Reach out to industry peers or associations for existing experience of AI implementations and their suppliers that you can learn from.

Evaluate your data
An AI system is only as good as the data it is trained on. Data must be accurate, complete, lawfully obtained, and relevant.
- Establish clear ownership rights for data being used with AI. Is the data company-owned or sourced from third parties? Who holds the intellectual property rights for the data?
- In what context was the data collected, and does that context allow its use with AI? What are the known biases in the data?
- How often is the data modified, and by whom?
- Ensure the data complies with relevant agreements and regulations.

Perform a cost-benefit analysis
Conduct a cost-benefit analysis based on the AI system's total cost of ownership (TCO), which includes implementation costs, ongoing licensing fees, and maintenance and support.

Conduct security risk assessments
Adopt a security-by-design approach. Bring in your security specialists and information management experts to embed security into the new system from the start.
- Ensure data used with AI is classified and protected with safeguards that reflect its classification. Who can access this data today, and will the AI system respect these existing permissions?
- AI can exacerbate existing vulnerabilities. For example, if your existing systems have loosely defined permissions, the power of AI can surface previously hidden information and potentially expose confidential data.
- Develop incident response plans for unexpected security risks. What will you do if your staff or the public complain that your AI system has exposed confidential information?
- If development is outsourced, what protections does your supplier provide?
- Set up monitoring for security risks and regularly review the AI system's audit logs.

Create a governance framework
A responsible AI system requires strong governance. For governance to be effective, it needs a cross-disciplinary team of experts: developers, security experts, information experts, lawyers, and user representatives.
- Review your existing IT security, information management, and software development practices to identify gaps for AI usage.
- Ensure AI systems involved in decision-making always follow a human-in-the-loop approach, with a person reviewing and verifying AI-generated output.
- Have an exit strategy ready. What if you had to shut down your AI system? What if you had to move to a different supplier or AI model?

Conduct a privacy assessment
In New Zealand, a firm that collects personal information from its customers is legally required to have a privacy officer. Whether the AI system is used internally or publicly, adopt a privacy-by-design approach.
- Conduct a privacy assessment of the AI system's design to identify privacy concerns.
- Anonymise any data that includes personal information before using it for AI training.

Be transparent
Transparency is essential to building trust and confidence in any AI system.
- Conduct a stakeholder impact assessment to identify all stakeholders and the AI system's impact on them. Communicate openly and transparently about the purpose behind the AI system and the security, governance, and privacy safeguards that have been applied.
- Clearly identify any AI-generated content using labels, watermarks, or disclaimers.
- Implement a clear feedback and complaints channel for stakeholders to provide input.
- Practise good recordkeeping, with documentation and audit logs of AI decisions and change activities.
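The cost-benefit step in the checklist can be sketched as simple TCO arithmetic: one-off implementation costs plus recurring licensing and maintenance over the evaluation period, compared against the expected benefit. All figures below are hypothetical placeholders, not benchmarks.

```python
# Sketch of a total-cost-of-ownership (TCO) comparison for an AI project.
# All figures are hypothetical placeholders for illustration only.

def tco(implementation: float, annual_licence: float,
        annual_maintenance: float, years: int) -> float:
    """TCO = one-off implementation + recurring licensing and maintenance."""
    return implementation + years * (annual_licence + annual_maintenance)

def net_benefit(annual_benefit: float, total_cost: float, years: int) -> float:
    """Expected benefit over the period minus the total cost of ownership."""
    return years * annual_benefit - total_cost

cost = tco(implementation=50_000, annual_licence=12_000,
           annual_maintenance=8_000, years=3)
print(f"3-year TCO:         ${cost:,.0f}")                           # $110,000
print(f"3-year net benefit: ${net_benefit(45_000, cost, 3):,.0f}")   # $25,000
```

A project with a positive net benefit over a realistic period (here, three years) is worth pursuing; one that only breaks even on optimistic assumptions warrants a rethink of scope or supplier.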
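As a minimal illustration of the anonymisation step in the privacy checklist item, the sketch below strips direct identifiers from a record and replaces the key with a salted hash before the data is used for training. The field names are hypothetical, and note that salted hashing is strictly pseudonymisation rather than full anonymisation, so confirm with your privacy officer that it meets your legal obligations.

```python
import hashlib

# Hypothetical direct identifiers to strip before AI training.
DIRECT_IDENTIFIERS = {"name", "email", "phone", "address"}

def pseudonymise(record: dict, salt: str) -> dict:
    """Drop direct identifiers and replace customer_id with a salted hash.

    Hashing is pseudonymisation, not full anonymisation: with the salt,
    records remain re-identifiable, so guard the salt accordingly.
    """
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "customer_id" in cleaned:
        digest = hashlib.sha256((salt + str(cleaned["customer_id"])).encode())
        cleaned["customer_id"] = digest.hexdigest()[:16]
    return cleaned

record = {"customer_id": "C-1042", "name": "Jane Doe",
          "email": "jane@example.com", "purchase_total": 249.50}
# name and email are removed; customer_id becomes an opaque token.
print(pseudonymise(record, salt="keep-this-secret"))
```

In practice you would also consider indirect identifiers (dates of birth, postcodes) that can re-identify people in combination, which is why the checklist pairs this step with a formal privacy assessment.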
References
Ministry of Business, Innovation, and Employment. (2025). Responsible Artificial Intelligence guidance for businesses. https://www.mbie.govt.nz/business-and-employment/business/support-for-business/responsible-ai-guidance-for-businesses
Snyder, J. (2025, August 26). MIT finds 95% of GenAI pilots fail because companies avoid friction. Forbes. https://www.forbes.com/sites/jasonsnyder/2025/08/26/mit-finds-95-of-genai-pilots-fail-because-companies-avoid-friction/