ORBICAP AI Policy

Version 1.2, January 2026

ORBICAP's AI Policy establishes principles and guidelines for the ethical, effective, and responsible use of Artificial Intelligence (AI) tools and technologies across its operations. This policy aligns with international standards and best practices, upholding ORBICAP’s commitment to transparency, accountability, and privacy while leveraging AI's potential to enhance services in leadership development, workflow optimisation, quality assurance, and data-driven decision-making.

1. Purpose and Scope

This policy outlines ORBICAP's approach to adopting and endorsing AI tools in its operations. It applies to all employees, contractors, and collaborators utilising AI tools and systems. The categories of AI-enabled tools and international frameworks relevant to ORBICAP’s work are described in the attached appendix.

This policy also supports ORBICAP’s training programmes, including regional courses on statistical quality and academic lectures on official statistics, by ensuring that AI tools are used transparently and responsibly to enhance data integration, documentation, and analysis.

1.1 Definition of Artificial Intelligence (AI) and AI Systems

For the purpose of this policy, an “AI system” refers to a machine-based system, as defined in Article 3 of the EU Artificial Intelligence Act (AI Act), that is designed to operate with varying levels of autonomy and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

2. Guiding Principles

The following principles guide ORBICAP's AI use and endorsements:

- Ethical Use: AI tools must respect ethical standards, human dignity, and organisational values.

- Transparency: Where applicable, clients, partners, and stakeholders will be informed of AI usage, and disclosures will be provided in deliverables.

- Privacy and Compliance: All data processed using AI tools must comply with local and international regulations, including GDPR, and user data must be anonymised and secure.

- Accountability: Humans will review AI outputs to ensure accuracy, cultural sensitivity, and alignment with project objectives.

- Alignment with International Standards: ORBICAP aligns its advisory and capacity-building work with AI-enabled tools and frameworks commonly referenced by international organisations such as PARIS21, the United Nations, the World Bank, the IMF, and Eurostat.

3. AI-enabled Tools and Areas of Use

ORBICAP makes use of AI-enabled tools as supportive instruments in leadership, workflow, and quality-focused projects, in line with international best practice and applicable legal frameworks.

4. Implementation Guidelines

The use of AI tools at ORBICAP will adhere to the following implementation guidelines:

- Training and Capacity Building: All staff will be trained to use endorsed AI tools responsibly.

- Tool Selection: AI-enabled tools are selected based on project requirements, client mandates, and applicable legal frameworks. The appendix provides a categorical reference to the types of tools and frameworks relevant to ORBICAP’s work.

- Client Communication: When AI tools significantly contribute to project deliverables, clients will be informed. This disclosure will be made in reports or presentations, with a dedicated section in quality documentation templates (e.g., ‘AI Tools Used: Power BI for visualisation, validated by ORBICAP experts’).

- Monitoring and Review: AI use will be reviewed annually during Q1 to ensure continued compliance with ethical standards, the EU AI Act, GDPR, and international statistical frameworks.

5. Risk Management

ORBICAP will actively manage risks associated with AI usage, including:

- Identifying and mitigating potential biases in AI outputs.

- Ensuring that all data processed using AI tools is anonymised and secure.

- Maintaining human oversight to validate and refine AI-generated outputs.

6. Legal Compliance and the EU AI Act

ORBICAP does not develop or distribute AI systems. We act as a professional user of selected, externally developed AI tools in support of leadership, quality, and data-focused consulting. In line with the EU Artificial Intelligence Act (AI Act), ORBICAP classifies its use of AI as low-risk and ensures transparency, human oversight, and privacy protection in all relevant applications.

When advising public institutions or statistical producers on the use of AI in data workflows or capacity building, ORBICAP promotes alignment with the EU AI Act, the UN Fundamental Principles of Official Statistics, and the European Statistics Code of Practice. This includes highlighting when specific AI applications—such as classification, imputation, or editing—may fall under high-risk categories according to the Act, and ensuring these are managed accordingly through robust documentation, ethical review, and stakeholder communication.

7. Appendix

The attached appendix, “Appendix: Categories of AI-enabled Tools and Frameworks,” provides an illustrative, non-exhaustive overview of categories of AI-enabled tools and international frameworks relevant to ORBICAP’s work in statistical capacity building, quality assurance, workflow optimisation, and advisory services.

The appendix does not constitute an endorsement of specific vendors or products, nor does it imply operational deployment of AI systems by ORBICAP. Instead, it serves as a reference framework to support transparency, contextual understanding, and informed dialogue with clients and partners regarding the scope, purpose, and governance of AI-enabled tools in professional and capacity-building contexts.


