AI System Ethics Self-Assessment Tool



This self-assessment tool is in beta version

What is the self-assessment tool for?

This self-assessment tool enables AI developer and AI operator organisations to evaluate the ethics level of an AI system against Dubai’s AI Ethics Guidelines.

We are making it available so that the ethical performance of each AI system can be assessed from the very outset.  This is intended to help the team think about the potential ethical issues that may arise throughout the development process, from the initial idea stage through to the maintenance of the system once it is fully operating. It should also help identify which guidelines deserve particular attention for a given AI system, and suggest the kinds of mitigation measures that could be introduced.

It is important to state that the guidelines in this self-assessment tool are recommended rather than compulsory.  There are, however, different recommendation strengths. Guidelines with a high recommendation level (phrased as “should” in the AI Ethics Guidelines document) are highly recommended and carry a higher weight in the final score. Guidelines with a moderate recommendation level (phrased as “should consider”) are moderately recommended and carry less weight in the final performance score. We do not suggest proceeding with the implementation of an AI system unless a certain level of ethics performance is reached. The tool is used for self-assessment purposes only and will not be audited, checked or regulated at this time.
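To make the weighting idea concrete, here is a minimal sketch of how a weighted performance score could be computed. The weight values, the 0 to 1 performance scale and the function name are illustrative assumptions, not the formula actually used by the tool.

```python
# Illustrative only: the weights, the 0-1 performance scale and the example
# figures are assumptions, not the tool's actual scoring formula.

# Assumed weights for the two recommendation strengths.
WEIGHTS = {"high": 2.0, "moderate": 1.0}

def ethics_score(assessments):
    """Weighted average of self-judged performance levels.

    `assessments` is a list of (recommendation_level, performance) pairs,
    where performance is a number between 0.0 and 1.0; guidelines marked
    "Not Applicable" are assumed to have been filtered out already.
    """
    total_weight = sum(WEIGHTS[level] for level, _ in assessments)
    if total_weight == 0:
        return 0.0
    weighted = sum(WEIGHTS[level] * perf for level, perf in assessments)
    return weighted / total_weight

# Two "should" guidelines and one "should consider" guideline.
print(ethics_score([("high", 0.8), ("high", 0.5), ("moderate", 1.0)]))  # 0.72
```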

To improve the effectiveness of the guidelines, we actively welcome feedback and improvement suggestions, as well as examples of use cases to which the guidelines have been applied.  These can be submitted through the feedback form on our website.  The data will be used to monitor the broader adoption of the toolkit and, in time, to improve it (e.g. by providing benchmarking or relative scoring).

What does it contain?

The self-assessment tool contains the following components:

1. General Info & Classification

This is where you describe in general terms what the AI system is for, and where you identify whether your AI system deals with Non-significant decisions, Significant decisions, or Critical decisions.
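If it helps to have a concrete representation, the three decision types could be modelled as a simple enumeration; the identifier names below are assumptions made for illustration.

```python
from enum import Enum

class DecisionType(Enum):
    """The three categories identified on the General Info & Classification tab."""
    NON_SIGNIFICANT = "Non-significant decisions"
    SIGNIFICANT = "Significant decisions"
    CRITICAL = "Critical decisions"

# Example: record the classification chosen for a system under assessment.
system_decision_type = DecisionType.SIGNIFICANT
```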

2. Self-assessment of the performance level

This is where the self-assessment of the ethical performance level is performed, depending on the applicability level. The self-assessment has four parts (a rough data-structure sketch follows the list):

  1. Fairness aspect self-assessment
  2. Accountability aspect self-assessment
  3. Transparency aspect self-assessment
  4. Explainability aspect self-assessment
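As a rough illustration of how one entry per guideline might be recorded across the four aspects, here is a sketch; the field names and value conventions are assumptions rather than the tool's actual schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative schema only: field names and example values are assumptions.

ASPECTS = ["Fairness", "Accountability", "Transparency", "Explainability"]

@dataclass
class GuidelineAssessment:
    aspect: str                        # one of ASPECTS
    recommendation_level: str          # "high" ("should") or "moderate" ("should consider")
    performance_level: Optional[str]   # self-judged level, or None if "Not Applicable"
    mitigations_applied: List[str] = field(default_factory=list)
    explanation: str = ""              # required when the guideline is "Not Applicable"

# Hypothetical example entry.
entry = GuidelineAssessment(
    aspect="Fairness",
    recommendation_level="high",
    performance_level="Partially achieved",
    mitigations_applied=["Bias testing on training data"],
)
```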

3. Results

This page depicts the overall ethics level of the AI system and highlights important performance gaps, if any exist. It also shows whether any extra measures were taken, depending on applicability.

How to use the tool?

1. Start from the “General Info & Classification” tab. First describe the AI system, and then use the table to identify the decision type of your system.

2. Based on the decision type, perform your self-assessment where applicable.

2.1.  For each guideline, fill in a self-judged performance level. Note that if a guideline is marked "Not Applicable", an explanation should be provided in the "Answer" column.

2.2.  Under each guideline, check which mitigation methods you have applied. Where no specific mitigation methods are suggested, or where the suggested measures are not applicable to your AI system, you can fill in your own mitigation measures in the explanation tab. The mitigation measures are not exhaustive and might not be applicable to every use case. They are provided as baseline examples to give direction and guidance and to clarify the meaning of the guidelines.

2.3. Under each mitigation, answer the questions asked where appropriate.

Note: the self-judged performance level, the mitigation methods taken, and the questions and answers are not necessarily directly linked to one another.

3. Go to the “Results” page and check whether you have met the suggested performance level and whether any performance gaps exist. If you have performance gaps, check the guidelines highlighted with an alert, explore possible mitigation measures applicable to your AI system, and try to embed them into your processes.
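As a simple illustration of the kind of gap check the Results page performs, here is a sketch; the suggested level, the score scale and the names are assumptions made for the example.

```python
# Illustrative only: the suggested level and the score scale are assumptions.

SUGGESTED_LEVEL = 0.75  # assumed target ethics performance level

def performance_gaps(scores_by_guideline, suggested=SUGGESTED_LEVEL):
    """Return the guidelines whose self-judged score falls below the target,
    i.e. the ones that would be highlighted with an alert."""
    return {name: score
            for name, score in scores_by_guideline.items()
            if score < suggested}

print(performance_gaps({"Fairness G1": 0.6, "Transparency G3": 0.9}))
# -> {'Fairness G1': 0.6}
```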

Licensing and Responsibility Note

The Digital Dubai Office is not responsible for any misuse of the AI System Ethics Self-Assessment Tool, and the user bears all consequences of such use.

This self-assessment tool is published under the terms of a Creative Commons Attribution 4.0 International Licence in order to facilitate its re-use by other governments and private sector organisations.  In summary, this means you are free to share and adapt the material, including for commercial purposes, provided that you give appropriate credit to the Digital Dubai Office as its owner and do not suggest that the Digital Dubai Office endorses your use.
