QARMA: Explainable & Transparent Machine Learning for Policy Makers

Machine learning has the potential to improve the way public policy is made. Policy makers are interested in new methods for understanding and analysing data that will help them make better decisions. Existing analytical models, however, still miss a layer of the insight that human experts bring, and machine learning is one way to close this gap. The technology can provide decision makers with high-level analysis, helping them connect the dots and arrive at more effective strategies for combating crime, safeguarding public health, and even protecting the environment.

While machine learning is clearly going to be useful, there is still a long way to go before it becomes an everyday tool in all policy-making scenarios. At the moment, policy makers are just beginning to learn how machine learning can help them find answers to some of the world’s most pressing questions. Moreover, before deploying machine learning to support public policy making, policy makers must ensure that the models are unbiased, transparent and explainable; otherwise, governments and organisations could face serious risks by automating or outsourcing key decisions.

The need for machine learning explainability and unbiased operation is highlighted in the AI Act of the European Parliament and the Council of the European Union. The AI Act is globally the first systematic effort to regulate AI systems and is based on a risk-based classification of such systems. In the case of high-risk systems (e.g., systems that take important decisions affecting human lives), it specifies that the AI systems must be explainable, transparent, trained on quality data, well documented, and operated under human oversight.

Netcompany-Intrasoft is using explainable AI technology to ensure the transparency of its AI systems and to facilitate proper human oversight, so that end users understand how these systems work. In this direction, Netcompany-Intrasoft has also developed its Quantitative Association Rules Mining Algorithms (QARMA) framework. QARMA comprises a family of algorithms for extracting valid, non-redundant, multi-dimensional quantitative association rules that obey the standard support-confidence framework, extended to cover essentially any user-defined “interestingness” metric, including conviction, lift, etc. It extracts rules with an arbitrary number of preconditions on the attributes of items in the antecedent of a rule, and with a post-condition quantifying the value of a single (target) attribute of an item. One of the main value propositions of the framework is that it yields easily explainable rules, i.e., it falls in the realm of Explainable Artificial Intelligence solutions.
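To make the rule format and the metrics concrete, the following minimal Python sketch evaluates one quantitative association rule of the kind described above: interval preconditions on several attributes in the antecedent, and a post-condition bounding a single target attribute. This is an illustration only, not the QARMA API; the Interval and QuantitativeRule types, the evaluate_rule helper, and the toy parking records are all hypothetical names introduced here for exposition.

```python
# Illustrative sketch (NOT the QARMA API): evaluating a single quantitative
# association rule under the support-confidence framework, plus the lift and
# conviction interestingness metrics mentioned in the text.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Interval:
    """A precondition/post-condition: attribute value must lie in [low, high]."""
    attribute: str
    low: float
    high: float

    def holds(self, record: Dict[str, float]) -> bool:
        return self.low <= record[self.attribute] <= self.high


@dataclass
class QuantitativeRule:
    antecedent: List[Interval]  # arbitrary number of interval preconditions
    consequent: Interval        # post-condition on one (target) attribute


def evaluate_rule(rule: QuantitativeRule, data: List[Dict[str, float]]):
    """Return (support, confidence, lift, conviction) of the rule over data."""
    n = len(data)
    ante = [r for r in data if all(iv.holds(r) for iv in rule.antecedent)]
    cons = [r for r in data if rule.consequent.holds(r)]
    both = [r for r in ante if rule.consequent.holds(r)]
    support = len(both) / n
    confidence = len(both) / len(ante) if ante else 0.0
    p_cons = len(cons) / n
    lift = confidence / p_cons if p_cons else float("inf")
    conviction = (1 - p_cons) / (1 - confidence) if confidence < 1 else float("inf")
    return support, confidence, lift, conviction


# Hypothetical records, loosely in the spirit of a parking use case:
records = [
    {"hour": 9,  "temp": 18.0, "occupancy": 0.85},
    {"hour": 10, "temp": 21.0, "occupancy": 0.90},
    {"hour": 14, "temp": 25.0, "occupancy": 0.60},
    {"hour": 22, "temp": 15.0, "occupancy": 0.20},
]

# Rule: IF 8 <= hour <= 11 AND 15 <= temp <= 25 THEN 0.8 <= occupancy <= 1.0
rule = QuantitativeRule(
    antecedent=[Interval("hour", 8, 11), Interval("temp", 15, 25)],
    consequent=Interval("occupancy", 0.8, 1.0),
)
print(evaluate_rule(rule, records))  # -> support, confidence, lift, conviction
```

Because each rule is just a conjunction of readable interval conditions with a quantified target outcome, a domain expert can inspect it directly; this is what makes such rules easy to explain compared to opaque model internals.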

In the scope of the AI4PublicPolicy project, QARMA is used to provide explainable rules for public policies, helping policy makers extract insights for optimal decision making. Netcompany-Intrasoft (INTRA) has already used QARMA to extract and explain insights about optimal parking policies in the city of Athens, in collaboration with DAEM and Novoville. We plan to use it for explainable knowledge extraction in other pilots of the project as well, including, for example, the new noise management pilot in the city of Nicosia.