• The announcement of the provisional agreement to establish the European Union's first comprehensive regulatory framework for AI marks a historic development. At a time when even the creators of the technology publicly express concern about the potential for machines to gain unchecked dominance over humans, Europe is taking the lead, seeking to impose fundamental principles and values on the technological development of the Fourth Industrial Revolution.

    The agreement aims to ensure that AI is utilised in a transparent, fair, safe, and environmentally friendly manner without limiting possibilities and opportunities for European AI-related start-ups. Among other provisions, it guarantees that AI-generated content will always be labelled as such, and AI systems interacting with humans will be required to inform the user that they are in contact with a machine. Human control systems for machines and the installation of risk management systems are also foreseen.

    At the same time, the agreement singles out systems classified as ‘high-risk’, such as those used in sensitive areas like critical infrastructure, education and training, employment, and public order. In the latter category in particular, the use of remote biometric identification systems in public places is restricted to prevent mass surveillance of populations. The text goes beyond theoretical considerations and establishes a necessary control mechanism, the European AI Office, which will coordinate compliance and enforcement and will have the authority to impose significant financial penalties.

    Despite reservations expressed about the potential impact on Europe’s technological competitiveness, a balanced approach has been taken to ensure that research is not restricted while implementing strict regulations, mainly on large-scale applications. In this framework, the technology market itself acknowledges that the new legislation leaves companies room for manoeuvre, despite the restrictions.

    While Europe may not be in a position to take the technological lead in the Fourth Industrial Revolution, it has demonstrated its ability to use its organised market and institutions to regulate an environment that risks becoming chaotic.

    Let us make no mistake: technology will always advance faster than bureaucratic negotiations of regulations.

    Within roughly four months, we moved from GPT-3 to GPT-4, a shift from a system with 175 billion parameters to one whose parameter count is undisclosed but widely estimated to exceed a trillion.

    The time it took ChatGPT to ‘reinvent’ itself was not enough for the regulation to travel from the European Parliament to the Council, a distance of less than one kilometre in the heart of Brussels. Considering that the text was first presented in 2021 and the regulation is not expected to take effect before 2025, the gap in reaction speed becomes clear. Nonetheless, the importance of the European decision remains substantial. The EU is the first international entity to successfully impose regulatory rules on the development of artificial intelligence, establishing crucial parameters of transparency, accountability, and control while making the technology more human-centred.

    Europe has proven itself to be the most sensitive international actor in protecting human rights and upholding fundamental principles and values. It is now laying the foundation for a stronger international regulatory framework, providing a model that could be adopted more widely with country-specific adaptations. Additionally, the rules are likely to be applied outside the EU as well, for practical reasons of how the platforms operate.

    It also paves the way for a similar intervention at the level of International Law, which would have been imperative had the EU not taken action, given that the UN recently seems to have sunk into a quagmire of inefficiency and mere observation of major international developments.

    In any case, the EU appears to have learned a crucial lesson from the uncontrolled growth of the technology giants. It addressed that issue belatedly, only after largely unregulated platforms had become vehicles for the dissemination of fake news and hate speech, with profound social and political consequences. That is why the EU’s regulatory intervention on AI, following the Digital Services Act and other related legislation, is a step of historic importance, no matter how modest it may appear today.

    AI Industry Values

    Panagiotis Kakolyris

    AI Act: A Historic Victory for European Values

    Blog

    21 Dec 2023

  • Artificial Intelligence (AI) is changing our world. This new phenomenon carries many threats, but it also offers many opportunities. We need to find a suitable framework to support trustworthy AI. A key challenge remains: can we, as humans, retain control over the technology, or will the technology take control of humanity? In responding to this challenge, a further question needs to be considered: what kinds of tools are needed, not only to keep AI development under control, but above all to multiply the opportunities it offers?

    The COVID-19 pandemic has shown how useful and important AI can be in helping us fight the virus. Moreover, it has clearly demonstrated that we cannot afford not to use it, nor do we have time to lose in developing it.

    Hence, it is our responsibility to urgently establish an adequate framework for the development of AI systems, based on a revision of existing law and followed, where necessary, by new legislative proposals with a clear focus on future-proof tools. We have to build a suitable governance model that is not only founded in law, but that also ensures democratic oversight through the open and collaborative participation of all partners and the validation of AI solutions by science and society. We should build trustworthy AI based on a human-centric and principled approach. Essential to this are the practical implementation of ethical rules in the design of AI (through the existing ex post model of analysing consequences, including unintended ones, as well as a new ex ante model providing an impact assessment in the early stages of development) and the evaluation of the everyday functioning of AI systems.

    It will not be possible to develop AI and claim all its economic and social benefits without a clear model for data use (including flows, collection and processing) that fully respects fundamental rights and the principles of cybersecurity. It will not be possible to build trustworthy AI without transparent rules for the relationships between its users (workers, citizens and consumers) and AI designers, developers and deployers (with the symmetry of information required, e.g. practical schemes for ‘explainability’). It will not be possible to accurately implement various AI functionalities without undertaking risk assessments and introducing mechanisms to manage those risks.

    To achieve all of the above, we need compromises at various levels: between European institutions and stakeholders (businesses, citizens and consumers, taking into account their rights), between European institutions and member states (based on common and harmonised solutions), and between political groups, which are currently more focused on their differences than on their similarities. How can these compromises be achieved swiftly?

    The answer is multidimensional and complex; however, we should be brave enough to pursue it. Paradoxically, the unfortunate experience of COVID-19 has brought a lot of positive momentum to our search for answers, proving to be a real game-changer for AI development.

    AI Technology

    Artificial Intelligence and Governance: Going Beyond Ethics

    Research Papers

    23 Mar 2021

  • Artificial Intelligence (AI) has emerged as the main engine of growth of the Fourth Industrial Revolution, owing to its inherently cross-cutting, general-purpose character. From a military perspective, the range of potential applications is at least as vast as the current range of tasks that require human cognition, e.g., analysing and classifying visual data, organising logistics, operating vehicles, or tracking and engaging hostile targets. How can Western nations – by which I mean those nations that are members of either NATO or the European Union (or both) – make the most of the rise of AI, bearing in mind its potential defence applications?

    AI Defence Innovation

    Artificial Intelligence and Western Defence Policy: A Conceptual Note

    IN BRIEF

    12 Jan 2021