Artificial Intelligence (AI) is changing our world. It carries many threats, but it also offers many opportunities. We need to find a suitable framework to support trustworthy AI. A key challenge remains: can we, as humans, retain control over the technology, or will the technology take control of humanity? In responding to this challenge, the following question needs to be considered: what kinds of tools are needed, not only to keep AI development under control, but above all to multiply the opportunities it offers?
The current pandemic has shown how useful and important AI can be in the fight against COVID-19. Moreover, it has clearly demonstrated that we cannot afford not to utilise this technology, and that we have no time to lose in developing it.
Hence, it is our responsibility to urgently establish an adequate framework for the development of AI systems, based on a revision of existing law and followed, where necessary, by new legislative proposals with a clear focus on future-proof tools. We have to develop a suitable governance model that is not only founded in law, but that also ensures democratic oversight through the open and collaborative participation of all partners and the validation of AI solutions by science and society. We should build trustworthy AI on a human-centric and principled approach. Two things are essential: the practical implementation of ethical rules in the design of AI, both through the existing ex post model of analysing consequences (including unintended ones) and through a new ex ante model that provides an impact assessment in the early stages of development; and the evaluation of the everyday functioning of AI systems.
It will not be possible to develop AI and realise all its economic and social benefits without a clear model for data use (including data flows, collection and processing) that fully respects fundamental rights and the principles of cybersecurity. It will not be possible to build trustworthy AI without transparent rules governing the relationships between its users (workers, citizens and consumers) and AI designers, developers and deployers, including the required symmetry of information (e.g. practical schemes for ‘explainability’). It will not be possible to implement the various functionalities of AI accurately without undertaking risk assessments and introducing mechanisms to manage those risks.
To achieve all of the above, we need compromises at various levels: between European institutions and stakeholders (businesses, citizens and consumers, taking their rights into account), between European institutions and member states (based on common and harmonised solutions), and between political groups, which are currently more focused on their differences than on their similarities. How can these compromises be achieved swiftly?
The answer is multidimensional and complex, but we should be brave enough to pursue it. Paradoxically, the unfortunate experience of COVID-19 has given a lot of positive momentum to this search for answers, proving to be a real game-changer for AI development.