On April 25, 2018, the European Commission published a Communication outlining the EU's strategy for Artificial Intelligence. This post looks at the document, its structure, and its main points.
While the first two chapters offer a general introduction to AI scenarios and to Europe's competitive posture in the international landscape (not great), the third part details the way forward the Commission is proposing, and it's by far the most interesting.
The first instinct, as always, is to throw money at the problem (the problem being that Europe is lagging behind in this field, even if the first part of the document does not say it in so many words). So, in the paragraph titled somewhat pompously "Boosting the EU's technological and industrial capacity and AI uptake across the economy", an ambitious programme of investments is outlined. The goal is to move from roughly 4-5 Billion Euros in 2017 to (at least) 20 Billion in 2020, and even more in the following years. Let's bear in mind that in 2021 the new budget period of the Union begins, minus the UK's contribution.
The Commission will step up the budget for Artificial Intelligence in the Horizon 2020 2018-2020 work programmes to 1.5 Billion by the end of 2020 (circa a 70% increase), which is to say 500 Million Euros per year. This is a direct action by the Commission, but the total amount comprises a sizable share of private investments that should be triggered by the public money, à la "Juncker Plan" (which isn't exactly working according to plan, however). The private investments so mobilized should theoretically reach 20 Billion in 2020. In the existing PPPs (Public-Private Partnerships) on robotics and big data alone, the projected figure is 2.5 Billion. Additionally, the Commission plans to establish a Joint Research Centre, jointly with Member States.
Small and Medium Enterprises are mentioned explicitly in the strategy, which is important especially for countries where SMEs constitute the vast majority of companies. The way the strategy envisions SMEs' access to AI is, however, through "AI-on-demand" platforms, with the potential risk of vendor lock-in and of never acquiring the skills required to actually incorporate AI into their processes.
In order to attract those private investments, the European Fund for Strategic Investments will be enhanced, and the Commission pledges to work with the European Investment Bank (EIB) with the goal of reaching at least 500 Million in total investments. The range of initiatives is rounded out by VentureEU, a 2.1 Billion venture capital fund-of-funds programme.
Data is the fuel of AI
Since AI, and especially its Machine Learning component, needs big datasets to work, making more data available is one of the goals included in the strategy, and rightly so. Datasets, in the vision of the Commission, should among other things support the AI-on-demand platforms (public-backed and open). Among the initiatives in support of this goal, there will be an updated directive proposal on public sector information, guidance on sharing private data, and a recommendation on accessing and preserving scientific information.
Socioeconomic, ethical, and legal aspects
The EC dutifully includes a section on the socioeconomic dimension, explaining how resistance to the introduction of AI is widespread among the entrenched sections of society and the representatives of industries bound to be deeply changed by it. To allay these fears, "leaving no one behind" is stressed as a slogan, and the goal of developing digital skills for workers is outlined. This is the usual answer to "neo-Luddite" fears, but it will be ineffective if individual workers are not willing to learn new skills and re-invent themselves.
Attention to AI applications (profiling in particular) is already present in the General Data Protection Regulation, and the call for Privacy by Design and by Default techniques is recalled in the strategy as well. The legislative foundation of it all is the Charter of Fundamental Rights of the EU, which in its articles 7 and 8 lays out privacy and data protection as fundamental rights, setting the highest standard worldwide in this regard. The GDPR and surrounding legislation call for Machine Learning accountability, declaring the right of data subjects to be provided with meaningful information about the logic involved in automated decisions, and also the right not to be subjected to automated decisions except in particular situations. In short, AI systems should be developed in a way that allows humans to understand at least the basis of their actions.
The EC included in its strategy the drafting of AI ethics guidelines by the end of 2018, in connection with the existing Product Liability Directive and Machinery Directive, as well as safety and liability standards to be developed by the European and international SDOs such as CEN/CENELEC and ISO/IEC. In summary, individuals should be able to control the personal data generated by AI business tools and in particular be able to know whether they are communicating with a human or not – in short, accountability of automated decisions is a requirement.
Roundtable at the European Parliament on 26 April 2018 – Full Video