AI4PublicPolicy Components pt.5

In our previous blog post about the main components of the AI4PublicPolicy platform, we described the AI Security, AutoML, and Text and Sentiment Analysis components – you can find the 4th blog post on the platform components here.

In this 5th and final part of the AI4PublicPolicy platform components’ analysis, we shed light on the three remaining components.


Policy Extraction

Component Description

This toolkit gives the policymaker the ability to choose an AI workflow from the catalogue of Machine Learning/Deep Learning workflows (pre-built by a data scientist) and apply it to the relevant dataset. The toolkit returns a policy model to the policymaker, who uses it to estimate the policy’s parameters and ultimately to propose a policy based on the dataset, the recommendation from the AI model, and the policymaker’s own interpretation of the results.

In contrast to the AutoML component, here the policymaker can explicitly choose different AI models, evaluate their performance on a real dataset, and manually select the one that performs best.
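As a rough illustration of this manual selection step, the sketch below cross-validates a few candidate models on one dataset and keeps the best performer. The dataset and the model list are illustrative stand-ins; the real toolkit draws both from its workflow catalogue.

```python
# Compare several candidate models on the same dataset and pick the
# best one by mean cross-validated accuracy (illustrative only).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(random_state=0),
}

# 5-fold cross-validation gives each candidate a mean accuracy score.
scores = {
    name: cross_val_score(model, X, y, cv=5).mean()
    for name, model in candidates.items()
}

best_name = max(scores, key=scores.get)
print(f"best model: {best_name} (accuracy={scores[best_name]:.3f})")
```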


Dataset Explorer

This subcomponent offers a convenient way to work with datasets by providing a visual interface to edit and transform them. Its primary purpose is to prepare the dataset for further processing in the pipeline. Basic transformations that can be performed using this subcomponent include adjustments to column names, filtering of columns, and adjustments to column types.
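The three basic transformations named above can be sketched in pandas as follows; the column names and values are invented for illustration and do not come from the platform.

```python
# Sketch of the Dataset Explorer's basic transformations using pandas.
import pandas as pd

df = pd.DataFrame({
    "City ": ["Burgas", "Nicosia", "Genoa"],   # note the stray space
    "complaints": ["12", "7", "30"],           # numbers stored as text
    "internal_id": [101, 102, 103],            # irrelevant downstream
})

# 1. Adjust column names (strip whitespace, normalise case).
df = df.rename(columns={"City ": "city"})

# 2. Filter columns: keep only those relevant to the pipeline.
df = df[["city", "complaints"]]

# 3. Adjust column types: complaint counts should be integers.
df["complaints"] = df["complaints"].astype(int)

print(df.dtypes)
```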

AI Algorithms Catalogue

This subcomponent contains all the available ML/DL algorithms to choose from.

Model Explorer

The Model Explorer is a powerful tool within the ML pipeline that provides users with a list of trained models and their corresponding performance metrics. Users can easily evaluate the results by reviewing key information such as algorithm name, accuracy, feature importance, and duration. Additionally, policymakers can save and load models for further exploration, or deploy them as a service endpoint to other AI4PublicPolicy components.
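A minimal sketch of the kind of record the Model Explorer could list per trained model is shown below; the field names follow the metrics named in the text, while the class itself and its values are hypothetical.

```python
# Hypothetical per-model record with the metrics the Model Explorer
# surfaces: algorithm name, accuracy, duration, feature importance.
from dataclasses import dataclass, field

@dataclass
class TrainedModelEntry:
    algorithm_name: str
    accuracy: float
    duration_seconds: float
    feature_importance: dict = field(default_factory=dict)

    def top_features(self, n=3):
        """Return the n most influential features, highest first."""
        ranked = sorted(self.feature_importance.items(),
                        key=lambda kv: kv[1], reverse=True)
        return [name for name, _ in ranked[:n]]

entry = TrainedModelEntry(
    algorithm_name="random_forest",
    accuracy=0.91,
    duration_seconds=4.2,
    feature_importance={"income": 0.41, "age": 0.22, "region": 0.37},
)
print(entry.top_features(2))  # ['income', 'region']
```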

Open Analytical Environment

This subcomponent provides the basic building blocks for constructing AI pipelines. There are two main categories of pipelines: code-centric and visual-centric. Jupyter Notebooks fall into the code-centric category and require more coding from the data scientist, who builds the pipeline using Python ML libraries. KNIME, on the other hand, is a visual-centric option that enables users to build pipelines from the functional blocks KNIME provides and connect them in its visual designer. This approach requires less coding, making it an attractive option for those without extensive coding experience.
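A minimal code-centric pipeline of the kind a data scientist might assemble in a Jupyter Notebook with Python ML libraries could look like this; the dataset and steps are illustrative, not the platform's actual workflow.

```python
# Minimal code-centric ML pipeline: each step is roughly the code
# analogue of a functional block in a KNIME visual pipeline.
from sklearn.datasets import load_wine
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

pipeline = Pipeline([
    ("scale", StandardScaler()),              # preprocessing block
    ("model", LogisticRegression(max_iter=1000)),  # training block
])
pipeline.fit(X_train, y_train)
print(f"test accuracy: {pipeline.score(X_test, y_test):.3f}")
```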


Component Input

Datasets (including datasets derived from other datasets), the AI algorithm selected from the catalogue, and the parameters of that algorithm.


Component Output

A policy model that includes details such as the algorithm name, accuracy, and feature-importance rankings based on the influence of the different features on the trained model.

The following figure shows the policy extraction internal architecture:

Policy Evaluation and Optimization

Component Description

This component supports the simulation and evaluation of the developed policies, leveraging the opinions and feedback of local actors to propose new insights and improvements.

In particular, a new virtual environment process is designed which, in a first phase, presents the developed policy models to local actors and collects their explicit and implicit feedback through different channels (existing Pilot applications, online surveys, social media) using the provided tools. In a second phase, this feedback is used as input to a mechanism that extracts new insights and capabilities to augment the artificial intelligence algorithms. More specifically, the tool that analyses the responses can assign weights to the input features of the AI algorithm, remove or add features based on the actors’ opinions, or provide insights to the policymaker.
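The feedback-to-features step can be sketched as follows, assuming actor feedback has already been aggregated into a per-feature score in [0, 1]; the function name, threshold, and example scores are all invented for illustration.

```python
# Hypothetical sketch: turn aggregated actor feedback into feature
# weights for the AI algorithm, dropping features actors care little
# about. The 0.2 threshold is an arbitrary illustrative choice.
def reweight_features(feedback_scores, drop_below=0.2):
    """feedback_scores maps feature name -> mean actor rating in [0, 1]."""
    total = sum(feedback_scores.values())
    weights = {f: s / total for f, s in feedback_scores.items()}
    dropped = [f for f, w in weights.items() if w < drop_below]
    kept = {f: w for f, w in weights.items() if w >= drop_below}
    return kept, dropped

feedback = {"noise_level": 0.9, "parking": 0.6, "mascot_colour": 0.1}
kept, dropped = reweight_features(feedback)
print(kept, dropped)
```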

The tools that extract feedback from the different channels adopt mechanisms developed for Sentiment Analysis, Opinion Mining and Text Analytics. This procedure involves citizens in the policy co-creation process, while also enabling the selection of the most appropriate data sources and policy models based on the expectations of local actors.
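As a toy stand-in for such sentiment extraction, a lexicon-based scorer over collected responses might look like this; the real component is far richer, and the lexicon and responses here are invented.

```python
# Toy lexicon-based sentiment scorer over actor responses.
POSITIVE = {"good", "great", "useful", "support", "agree"}
NEGATIVE = {"bad", "poor", "useless", "oppose", "disagree"}

def sentiment(text):
    """Classify a response as positive/negative/neutral by word counts."""
    words = text.lower().split()
    score = (sum(w in POSITIVE for w in words)
             - sum(w in NEGATIVE for w in words))
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

responses = [
    "I agree, the new bus policy is great",
    "Poor idea, I oppose the parking changes",
]
print([sentiment(r) for r in responses])  # ['positive', 'negative']
```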

The following figure shows the policy evaluation and optimization flow.


Component Input

Policy models produced by the AI algorithms, and the feedback collected from local actors through the different channels.


Component Output

New features for the AI algorithms and other insights from the analysis of the responses, as well as dataset enrichments.



Virtualized Policy Management Environment

Component Description

The Virtualized Policy Management Environment (VPME) incorporates the policy models; the explainability outcomes of the Explainable AI (XAI) for Policy Interpretation and the Policy Explainability and Interpretation Tools; the Sentiment Analysis, Opinion Mining and Text Analytics for Document Processing components, which provide the sentiment analysis, opinion mining and text analytics tools respectively; and, finally, the security interfaces. The technology components incorporated into the VPME platform are accessed through their APIs. The VPME is a cloud-based platform realised through Jupyter Notebooks, from which the different components are connected via the API each component offers. Each Pilot has a set of Jupyter Notebooks that analyses each dataset and produces the appropriate visualisations along with the models’ forecasts.
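To give a feel for API-based integration from a Pilot notebook, here is a hedged sketch of a tiny client that composes endpoint URLs for the integrated components; the base URL, component names, and routes are all hypothetical, not the VPME's actual API.

```python
# Hypothetical client for addressing component APIs from a notebook.
class VPMEClient:
    """Builds request URLs for component APIs exposed to the notebooks."""

    def __init__(self, base_url):
        self.base_url = base_url.rstrip("/")

    def endpoint(self, component, route):
        """Compose the URL for one component operation."""
        return f"{self.base_url}/{component}/{route}"

client = VPMEClient("https://vpme.example.org/api")
# e.g. ask the sentiment-analysis component to score a dataset:
url = client.endpoint("sentiment-analysis", "score")
print(url)  # https://vpme.example.org/api/sentiment-analysis/score
```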


Component Input

Models and functionalities for AI-based reusable and interoperable policies, Explainable AI (XAI) for policy interpretation, policy explainability and interpretation tools, secure operation of AI algorithms and tools, AutoML for public administrators, and Sentiment Analysis, Opinion Mining and Text Analytics for Document Processing, along with the policy-related datasets.


Component Output

A cloud-based platform based on a set of Jupyter Notebooks, through which the user is able to control the models and functionalities created by the different sets of tools.

The following figure shows the VPME subcomponents.