
AI for Humanity - The French AI strategy

Updated: Apr 18, 2018

Emmanuel Macron, the French President, recently held a conference to talk about the country's strategy to develop AI in France and Europe. Here is an overview.

Introduction


Defining artificial intelligence (AI) is not easy. The field is so vast that it cannot be restricted to a specific area of research: it is more like a multidisciplinary program. Originally, it sought to imitate the cognitive processes of human beings. Its current objectives are to develop automatons that solve some problems better than humans, by all means available.

AI is at the crossroads of several disciplines: computer science, mathematics (logic, optimization, analysis, probabilities, linear algebra), and cognitive science, not to mention the specialized knowledge of the fields to which we want to apply it. The algorithms that underpin it are based on equally varied approaches: semantic analysis, symbolic representation, statistical and exploratory learning, neural networks, and so on. The recent boom in AI is due to significant advances in machine learning. Learning techniques are revolutionary compared to AI's historical approaches: instead of the machine being programmed with the rules that govern a task (often much more complex than one might think), it now discovers them itself.
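To make this contrast concrete, here is a minimal illustrative sketch (not taken from the report; it assumes Python with scikit-learn and the public Iris dataset) comparing a hand-written rule with a model that learns its own decision rules from labelled examples.

```python
# Illustrative sketch only: the same classification task approached two ways.
# Assumes Python with scikit-learn and the public Iris dataset.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Historical approach: the programmer encodes the rule by hand
# (here, a crude threshold on petal length, feature index 2).
def hand_written_rule(sample):
    return 0 if sample[2] < 2.5 else 1  # separates only one species from the rest

# Machine learning approach: the model discovers its own rules from the examples.
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print("accuracy of the learned model:", model.score(X_test, y_test))
```

The learned tree ends up with thresholds similar to those an expert might write by hand, but it discovers them from the data rather than having them programmed in.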

AI is also developing quickly due to the international “dataization” of all sectors (i.e. big data) and the exponential increase in computing power and data storage capacities. Applications are multiplying and directly affecting our daily lives: image recognition, self-driving cars, disease detection, and content recommendation are some of the many possibilities being explored. The universal nature of AI and its many variations herald a new revolution, with its share of pitfalls and opportunities.



The main lines of the Villani Report


PART 1

Developing an aggressive data policy


Context

Many artificial intelligence (AI) strategies start with the collection of large bodies of data.

Data is a key competitive advantage in the global AI race. Digital giants in China, Russia and the United States, which have built up their positions by focusing on data collection and use, have a considerable head start. This asymmetry is clearly visible: for instance, in France, large American platforms capture approximately 80% of visits to the 25 most popular sites every month.

A data policy taking into account AI requirements is therefore essential if France and the European Union wish to attain the goals of sovereignty and strategic autonomy. Although these goals are ambitious, they are necessary steps in the creation of a French and European AI industry.

Data is the raw material of AI and is essential for the development of new practices and applications.

Proposals

  • 01: Encourage companies to pool and share their data. The government must encourage the creation of data commons and support an alternative data production and governance model based on reciprocity, cooperation and sharing. The goal is to boost data sharing between actors in the same sector. The government must also encourage data sharing between private actors, and assist businesses in this respect. It must arrange for certain data held by private entities to be released on a case-by-case basis, and support text and data mining practices without delay.

  • 02: Create data that is in the public interest. Most of the actors heard by the mission were in favour of progressively opening up access to some data sets on a case-by-case and sector-specific basis for public interest reasons. This could be done in one of two ways: by making the data accessible only to the government, or by making the data more widely available, for example to other economic actors.

  • 03: Support the right to data portability. The right to data portability is one of the most important innovations in recent French and European texts. It will give any individual the ability to migrate from one service ecosystem to another without losing their data history. This right could be extended to all citizen-centred artificial intelligence applications. In this case, it would involve making personal data available to government authorities or researchers. This would be beneficial for three reasons:

    • It would encourage the creation of new databases for use by public services;

    • It would give new meaning to the right to portability by supporting improved data circulation under the exclusive control of citizens;

    • It could be implemented immediately after the European data protection regulation enters into force, without the need for new constraints being introduced for private actors.



PART 2

Targeting four strategic sectors


Context

Dominant players and emerging countries in the AI field have adopted radically different development models. France and Europe will not be able to claim a place on the global stage if they simply attempt to create a “European Google”.

France must instead draw on its economy’s comparative advantages and areas of excellence, focusing on priority sectors where our industries can play key roles at the global level.

The sectors with sufficient maturity to launch major transformation operations are health, transport, the environment, and defence and security.

Why these four sectors?

  • 01 They are areas in which France and Europe excel.

  • 02 They represent important challenges in terms of the public interest.

  • 03 They attract the interest and involvement of public and private actors.

  • 04 They require strong public leadership to trigger the transformations.

Proposals

Efforts must focus on achieving these three goals:

  • 01 Implement sector-specific policy focusing on major issues. Industrial policy must focus on the main issues and challenges facing our era, including the early detection of pathologies, P4 medicine, medical deserts and zero-emission urban mobility. These issues could be identified by sector-specific commissions in charge of publicizing and running activities for their ecosystems.

  • 02 Test sector-specific platforms. To support innovation, sector-specific platforms must be created to compile relevant data and organize its capture and collection; to provide access to large-scale computing infrastructures suitable for AI; to facilitate innovation by creating controlled environments for experiments; and to enable the development, testing and deployment of operational and commercial products.

  • 03 Implement innovation sandboxes. The AI innovation process must be streamlined by creating testing areas (sandboxes) with three characteristics:

    • a temporary reduction of the regulatory burden to help actors test innovations;

    • support to help actors shoulder their obligations;

    • and resources to run experiments in “real-life” conditions.

    The goal of these sandboxes will be to facilitate the testing, iterative design and deployment of AI technologies in coordination with future users.

Possible applications

  • In the health field, predictive and personalized medicine will make it possible to monitor patients in real time, and improve the detection of anomalies in electrocardiograms.

  • In the transport field, the development of the driverless car is a key industrial priority.

  • In the defence and security field, AI could be used to detect and even respond to cyberattacks that cannot be detected by humans, and facilitate the analysis of multimedia data.

  • In the environmental field, the development of monitoring tools for farmers will pave the way for smart agriculture benefiting the entire agrifood chain.

PART 3

Boosting the potential of French research

Context

France’s research and higher education institutes in the artificial intelligence (AI) field have always been widely renowned at the international level. French scientific training has a reputation for excellence and helps create a world-class pool of researchers.

Nevertheless, the AI research field has changed considerably in recent years. There is increasing competition from private-sector research institutes, with major AI firms opening fundamental research centres. This has accelerated the “brain drain” of students and experienced researchers.

Another difficulty facing French research is its weak performance in transferring research results to industry, both to startups and to large groups.

AI research is the focus of fierce international competition, particularly between China and the United States.

Proposals

To better connect geographical regions and AI research areas, the mission has developed three key proposals:

  • 01 Create interdisciplinary AI institutes (3IA) in selected public higher education and research establishments. These institutes must be spread throughout France, each covering a specific application area or field of research.

  • 02 Allocate appropriate resources to research, including a supercomputer designed especially for AI applications in partnership with manufacturers. In addition, researchers must be given facilitated access to a European cloud service.

  • 03 Make careers in public research more attractive by boosting France’s appeal to expatriate and foreign talent: increasing the number of master’s and doctoral students studying AI, raising researchers’ salaries and enhancing exchanges between academia and industry.

PART 4

Planning for the impact of AI on labour

Context

While it is not known how many jobs will be created or destroyed due to the automation of tasks, it is likely that most occupations and organizations will change.

This problem must be tackled head on by acknowledging that a major shift is taking place, and that production processes will be distributed between humans and machines in the future. France must set aside the necessary resources to plan and prepare for this transition. Priority must be given to developing complementarity between human labour and machine activity.

More than 50% of tasks in 50% of occupations could be automated, according to France’s Employment Orientation Council.

93% of the Mediametrie survey respondents believe that AI technologies will modify the way they work.

Proposals

New training models must be planned and tested to prepare for these professional transitions. Three main proposals have been put forward:

  • 01 Create a public laboratory on the transformation of work. The creation of a public laboratory on the transformation of work will encourage reflection on the ways in which automation is changing occupations. It will also make it possible to test tools supporting professional transitions, especially for those likely to be most affected by automation.

  • 02 Develop complementarity between humans and machines. To improve future working conditions, work must focus on developing a “complementarity index” for businesses, and on including all aspects of the digital transition in social dialogue. This could result in a legislative project on working conditions in the automated era.

  • 03 Test new funding methods for vocational training. This testing would make it possible to address AI-related changes to value chains. Currently, businesses fund the vocational training of their own employees. However, for their digital transformation, they often call on other actors who capture value and play a key role in automating tasks but do not help fund vocational training for employees. New funding methods must therefore be tested through social dialogue.

PART 5

Making AI more environmentally friendly

Context

Global warming is now a scientific certainty. Taking into account the environmental impacts related to the development of digital practices and services is therefore essential.

Although the growth of artificial intelligence (AI) adds to the negative environmental impact of digital technologies, it could also contribute to environmentally friendly solutions. AI offers many opportunities in the ecological field, including better knowledge of the evolution of biological ecosystems, optimized resource management, environmental preservation and improved protection for biodiversity.

An ambitious AI policy must do more than just optimize resource use. It must promote growth characterized by frugality and solidarity, contributing to a smart ecological transition.

Energy consumption by the digital sector could increase tenfold by 2030, accounting for between 20 and 50% of global electricity use.

Proposals

  • 01 The government must use AI to support the ecological transition:

    • Firstly, by creating a research centre focusing on AI and the ecological transition. This centre could contribute to projects such as Tara Oceans, which is at the crossroads of life sciences and ecology.

    • Secondly, by implementing a platform to measure the environmental impact of smart digital tools.

  • 02 As part of this approach, it must help AI become less energy-intensive by supporting the ecological transition of the European cloud industry.

  • 03 Lastly, the ecological transition must go hand in hand with the opening up of “ecological data”. AI can help reduce our energy consumption and restore and protect nature – for instance, by using drones to carry out reforestation, or by mapping living species through image recognition technology.

PART 6

Opening up the black boxes of AI

Context

Artificial intelligence (AI) is already omnipresent. Every day, we unknowingly interact with smart systems that make our lives easier – or, at least, that are supposed to make our lives easier.

However, many questions are being asked today: does AI really seek to improve our well-being? If not, how can we make sure it does?

These questions have led to a wide-ranging discussion on the ethical issues related to the development of AI technologies and, more generally speaking, algorithms.

To ensure that new AI technologies respect our social values and rules, we must take action now by mobilizing scientists, government, industry, entrepreneurs and civil society.

Because algorithms may be biased, they can have undesirable effects on our lives.

Proposals

In the long term, artificial intelligence technologies must be explainable if they are to be socially acceptable. For this reason, the government must take several steps:

  • 01 Develop algorithm transparency and audits

    • by building the capacities necessary to observe, understand and audit how they operate. To do so, a group of experts must be created to analyse algorithms and databases, and research on explainability must be supported so that civil society can carry out its own evaluations.

    • This means focusing on three areas of research: producing more explainable models, producing more interpretable user interfaces, and understanding the mechanisms at work in order to produce satisfactory explanations (see the sketch after this list for one common explainability technique).

  • 02 Consider the responsibility of AI actors for the ethical issues at stake:

    • By including ethics in training for AI engineers and researchers.

    • By carrying out a discrimination impact assessment, along the lines of France’s privacy impact assessment (PIA), to encourage AI designers to consider the social implications of the algorithms they produce.

  • 03 Create a consultative ethics committee for digital technologies and AI, which would organize public debate in this field. This committee would have a high level of expertise and independence. Indeed, 94% of those interviewed considered that the development of AI in our society should be regularly addressed in public debates.

  • 04 Guarantee the principle of human responsibility, particularly when AI tools are used in public services. This includes setting boundaries for the use of predictive algorithms in the law enforcement context. It also means extensively discussing any development of lethal autonomous weapons systems (LAWS) at the international level, and creating an observatory for the non-proliferation of these weapons.
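As one concrete illustration of the explainability research mentioned in proposal 01 above, here is a minimal sketch of permutation feature importance, a common model-agnostic auditing technique. It is our own illustration, not a method prescribed by the report, and it assumes Python with scikit-learn, with a public dataset standing in for whatever tabular data an audited model might process.

```python
# Illustrative sketch: permutation feature importance as a model-agnostic
# audit of a "black box" classifier.
# Assumptions: Python with scikit-learn; the breast-cancer dataset is a
# stand-in for the data an audited model might use.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an opaque model that we then treat as a black box.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the test score drops:
# large drops flag the features the model relies on most.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: p[1], reverse=True)
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```

Shuffling one feature at a time and measuring the drop in test accuracy gives an auditor a view of which inputs a black-box model actually relies on, without requiring access to its internal structure.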

PART 7

Ensuring that AI supports inclusivity and diversity

Context

In a world where technologies are becoming key to our future, artificial intelligence (AI) must not become yet another tool for exclusion.

Women account for only 33% of people in the digital sector. Minorities are also underrepresented.

Given the fast-changing nature of AI technologies and practices, our society has a collective duty to be aware of and discuss the issues this raises. This is especially relevant for fragile populations and groups already excluded from the digital sector, for whom AI represents an even greater danger.

AI could lead to a better, fairer and more efficient society, or it could lead to wealth being concentrated in the hands of a very small group of digital elites. Therefore, in the AI field, inclusive policies must seek to attain two goals: ensure that the development of these technologies does not increase social and economic inequalities, and use AI to reduce these inequalities.

In 2016, less than 10% of those enrolled in IT engineering schools were women.

Proposals

  • 01 Ensure that 40% of those enrolled in digital engineering courses are women by 2020. This recommendation was supported by more than 85% of those interviewed. To attain this goal, an incentive policy could be implemented. This initiative must be accompanied by a policy to train and raise awareness of diversity issues among educators in the AI industry.

  • 02 Modify administrative procedures and enhance mediation skills. To address the growing inaccessibility of public services and the rollback of rights caused by dematerialization, administrative procedures must be modified and mediation skills enhanced. The government could launch an automated system for managing administrative procedures to help individuals better understand administrative rules and how they apply to their personal situations. At the same time, new mediation tools must be implemented to provide support to those who need it.

  • 03 Support AI-based social innovations. The government must support social innovation programmes based on AI (dependency, health, social action and solidarity) to ensure that technological advances also benefit those working in the social action field.

The Villani Mission

On 8 September 2017, French Prime Minister Édouard Philippe tasked Cédric Villani, mathematician and Deputy for Essonne, with a mission on artificial intelligence (AI). The mission’s goal was to lay the foundations of an ambitious French strategy in the AI field.

Composition of the mission:

  • Cédric Villani, Mathematician and Member of the National Assembly (@VillaniCedric). Cédric Villani is a French mathematician and a former student of the École normale supérieure. He holds a doctorate in mathematics and won the Fields Medal in 2010 and the Doob Prize in 2014. He is now a professor at the University of Lyon and directed the Institut Henri Poincaré in Paris from 2009 to 2017. He has held visiting positions at several foreign universities. He is the Member of the National Assembly for the fifth constituency of Essonne and vice-president of the OPECST (the parliamentary office for the assessment of scientific and technological options). He is a member of the Academy of Sciences and has published several books, including Théorème vivant (Birth of a Theorem), which has been translated into 12 languages.

  • Marc Schoenauer, Principal Senior Researcher at INRIA (@evomarc). Marc Schoenauer has been a Principal Senior Researcher at INRIA since 2001. He graduated from the École normale supérieure. For 20 years he was a full-time researcher with CNRS (the French National Research Centre), working at CMAP (the Applied Maths Laboratory) at École Polytechnique. He then joined INRIA and, in September 2003, founded the TAO team (Thème Apprentissage et Optimisation, i.e. the Machine Learning and Optimization theme) at INRIA Saclay together with Michèle Sebag. He has co-authored more than a hundred articles and has supervised 35 doctoral dissertations. He was president of the AFIA (the French Association for Artificial Intelligence) from 2002 to 2004.

  • Yann Bonnet, Secretary General of the French Digital Council (@yann_bonnet). An engineer by training, Yann Bonnet began his career as a consultant. He joined the French Digital Council in 2013 as General Rapporteur, before becoming Secretary General in 2015. He was in charge of steering the national consultation on digital transformations launched by the Prime Minister in 2014, an initiative that eventually led to the Law for a Digital Republic. Yann Bonnet was also in charge of multiple reports, including on taxation in the digital age, the digital dimension of the TTIP negotiations and the fairness of online platforms.

  • Charly Berthet, Head of legal and institutional affairs at the French Digital Council (@charlyberthet). Charly Berthet is a French lawyer working at the French Digital Council as head of legal and institutional affairs. He has worked in particular on regulation, data protection and civil liberties. He has been a consultant for the Ministry of Foreign Affairs, where he helped draw up the international digital strategy. He graduated from University Paris II and University Paris Dauphine.

  • Anne-Charlotte Cornut, Rapporteur of the French Digital Council. Anne-Charlotte Cornut graduated from Sciences Po and HEC and has been a rapporteur of the French Digital Council since April 2016. She has worked on the digital transformation of SMEs and of higher education and research. She was formerly an adviser to the CEO of 1000mercis/numberly, a data marketing company.

  • François Levin, Head of economic and social affairs at the French Digital Council. François Levin graduated in philosophy from the École normale supérieure de Lyon and in public administration from University Paris I. He joined the French Digital Council in 2015 and is now head of economic and social affairs. He has worked in particular on the digital transformation of work and training, as well as of culture and copyright law.

  • Bertrand Rondepierre, Engineer in the Corps de l’armement working for the Direction Générale de l’Armement (the French defence procurement agency) (@BertrandRdp). Bertrand Rondepierre graduated from École Polytechnique, holds an engineering degree from Télécom ParisTech and is an alumnus of the master’s degree in Mathematics, Vision, Learning at ENS Paris-Saclay. He works as a system architect for the DGA, where he runs projects in the digital and artificial intelligence fields.

  • Stella Biabiany-Rosier, Executive Assistant of the French Digital Council. Stella Biabiany-Rosier has spent her career as an assistant manager in consulting and law firms, and then in ministerial offices. Since July 2017, she has been assisting the Secretary General of the French Digital Council.

Assisted by Anne-Lise Meurier, Zineb Ghafoor, Candice Foehrenbach, Camille Hartmann, Judith Herzog, Marylou Le Roy, Jan Krewer, Lofred Madzou and Ruben Narzul.

The mission’s work was carried out between 8 September 2017 and 8 March 2018. Its tasks included:

  • Hearing 400 experts from a variety of fields and carefully considering several contributions, including that of France Stratégie;

  • Conducting a public consultation in partnership with Parlement & Citoyens, in which 1,639 people participated;

  • Completing a benchmark of policies implemented in 15 countries;

  • Conducting a survey of about 3,000 individuals with Mediametrie.
