The effects of automation on work surface in the public debate whenever a new technology spreads across a wide range of sectors. Mechanisation, electricity and IT each overturned and reinvented work and social organization. Will artificial intelligence be the next “general purpose technology”, creating as many new skills as it does hitherto unknown needs for protection?
One hundred and twenty million people – nearly twice the population of France. That is also the number of people who, to keep a job, will have to change profession or retrain within the next three years, according to a recent report by IBM’s Institute for Business Value analysing the future of work in the world’s 12 most advanced economies.
This clearly promises a wave of retraining. According to the authors, artificial intelligence will profoundly change the nature and allocation of a great many jobs. It will scarcely come as a surprise that the jobs most easily automated usually revolve around routine physical activity, carried out in a controlled and predictable environment, or around gathering and processing data.
The IBM report is far from the first to announce that our society will be overturned by the growing automation of employment. The projections are often questioned and very approximate (some announce 10% automation, others 50%), but they all point in the same direction. Take McKinsey: in November 2017, its Global Institute considered, in one of its AI-adoption scenarios, that 375 million workers worldwide would have to “change jobs to avoid obsolescence” by 2030. One year later, the same McKinsey, basing its findings on past experience, considered that the retraining imposed by automation could compensate for the removal of “obsolete” jobs… and even create more. In the United States, the widespread growth of information technology between 1980 and 2015 created over 19 million jobs, against 3.5 million that disappeared. Will AI once again prove the economist Joseph Schumpeter right? In the 1930s, Schumpeter emphasized the paradoxical nature of innovation, that force of “creative destruction” which makes former practices disappear and brings in new ones. And if this is the case, should we be hastening the trend?
Automation is not automatic
Not so fast! The benefit of a massive spread of intelligent automation is not… automatic. Technical feasibility is, of course, a prerequisite. But entrusting all or part of a job to AI presupposes that the investment in development and deployment will be offset by gains in productivity or quality.
Above all, this means being able to see well into the future. At the very least, transferring an activity from a human being to a machine is very likely to change how work is organized, how new recruits are trained, and how expertise is managed within a company. Matt Beane, a research affiliate at MIT’s Initiative on the Digital Economy, has shown that automating jobs that require little qualification – often a first job – can ruin young workers’ opportunities to learn, make contacts and be trained. This, he believes, is also true of certain highly qualified jobs: think of the young surgeon who cannot practice if a robot operates in her stead.
If we accept that AI is useful, effective, and free of any undesirable risk in the sector in which it is introduced, should we welcome it? It would seem that wage earners are prepared. Oracle and Future Workplace have conducted a survey of 8,000 managers and employees in ten countries. The results, published in October 2019, show that 53% of respondents claim to be optimistic about the possibility of having a robot for a colleague, and 64% would trust AI more than their manager. But these overtures to AI are not made blindly: nearly three-quarters of respondents (71%) recognise that security challenges make them hesitate to envisage a working world shared between AI and human beings.
If AI finds favour with wage earners, could it really transform work for the better? It is possible, say the economists, but not without measures accompanying the technology’s development. Most forward-looking exercises conducted in developed countries agree in advocating a solid unemployment-insurance system coordinated with a “reinvented” training offer, giving everyone the possibility, at every stage of their working life, to rely on technologies so that they never stop learning and developing their skills. This is where “flexisecurity” comes in – a balance between protection and adaptability of which the Scandinavian countries are such masters. “It is urgent to invest and innovate to give workers new skills. Indeed, it is an indispensable answer to the challenges that the labour market must take up, faced as it is with successive waves of automation”, argue the experts of the MIT task force in their latest report.
In practice, a growing number of companies use “re-skilling” or “up-skilling” programmes to adapt their employees’ skills to their positions, as with the “Skill’Up” programme run by BNP Paribas Cardif and its training partner General Assembly, which has already trained and “acculturated” more than 600 employees, mainly in Europe but also in Asia and some in Latin America. The average NPS (Net Promoter Score) measured for these 600 employees is over +60.
We could also mention the emblematic case of Amazon, which announced in July 2019 that it was investing $700 million in training one-third of its employees (around 100,000 people). At the heart of this training programme are data science, solution building and safety engineering. Will this suffice to offer these tens of thousands of workers “anti-disruption” insurance? “The picture of the quantitative effects of automation combined with AI is full of contrasts,” warns the sociologist Yann Ferguson in a recent article. He notes that for the French Parliamentary Office for the Evaluation of Scientific and Technological Choices, “personal convictions win out when it comes to an overall appreciation of the future effect of AI on the work market” – convictions that the sociologist believes are “organized around the old division between the ‘techno-optimists’ and the ‘techno-pessimists’.” And the ways in which work will be protected against the risks AI brings remain to be specified. Who bears responsibility when a patient reacts badly to a treatment administered by a doctor following the advice of a virtual “assistant”? Who should be blamed when an automated recruitment system favours certain profiles: the authors of the programme, the company that uses it, or the data scientists who selected the data? Legal experts have raised these questions, but it is still too early to see a framework of stable, shared answers emerging.
Beyond the new responsibilities of company workers, what about their well-being and security? As AI takes over matching profiles and jobs at an ever faster pace, will the “gig economy” overheat, as is so cleverly portrayed in the audio series “Dreamstation”? Should we plan “anti-burn-out” insurance for employees, who would be entitled to compensation beyond a threshold of jobs accumulated within a year? Will wage earners and self-employed people need to protect their reputations, their voices and even their features, to maintain control over the traces they leave online, tracked by recruiters’ AI?
Meanwhile, will recruiters need insurance products that protect them from the risks of falsified profiles and other forged documents? The fact that certain universities already offer to register their qualifications in a blockchain could be a clue.
Algorithms on the look-out for the right candidate
A future in which platform models gain pace, automating the relations between companies and workers, is not certain. A future in which humans and machines work together, with human intelligence relying on the power of algorithms and humans’ decisions influenced by the behaviour of machines, already exists. And there is already proof that the use of AI can be extremely problematic: racial discrimination (notably in legal proceedings in the United States), the endangering of vulnerable people (the death of a pedestrian pushing a bicycle, struck by a self-driving car in Arizona), widespread surveillance (facial recognition in public places).
The upheaval that AI could bring, positive and negative, goes beyond the stakes of insurance. The frameworks to be adapted or invented affect the organization of society and its ethical choices. They are at the heart of a wide raft of research that aims to establish the principles of auditable and explainable AI design.
Is there an emergency? Despite the feverishness triggered by regular announcements of new AI feats and the intensity of investment in the sector, that is far from certain. As the economist Philippe Askenazy and the machine-learning specialist Francis Bach note in a recent article in the review “Pouvoirs”, the same fears about automation’s drastic impact on employment gripped the United States during the Cold War: “Reading the report by the Commission on Technology, Automation and Economic Progress to President Lyndon Johnson in 1966 is disturbing. [...] The authors, including the CEO of IBM, Thomas J. Watson Jr., and the future Nobel prizewinner in Economics, Robert Solow, were concerned by a world in which, at least temporarily, the number of jobs destroyed by technology could not be compensated for by job creation.” Disturbing? “Anticipating the impacts on work and jobs is highly speculative, as there is so much uncertainty over the evolution of the technology itself, its social or industrial use, and the indirect mechanisms that accompany each irruption of technology,” the two authors usefully observe. But humility in the face of the future does not forbid us from anticipating – quite the contrary. “It is difficult to foresee the type of world we will be living in in five, ten or twenty years’ time – technologically, politically, economically or in any other way”, said Dirk Kempthorne, chairman of the Global Federation of Insurance Associations, at the organization’s ninth conference (GFIA) in 2017. “[But] just as the insurance profession is ready and waiting to react to any disruption in the life of the insured, we too are prepared to react to the disruptions in our sector, [notably via] the Disruptive Technology Working Group, [whose] objective is to discuss with the authorities and regulators the impact of the innovations and disruptions in the insurance sector on public policies.”
The new risks of AI
AI technologies bring with them new – and numerous – risks. The report entitled “The Malicious Use of AI”, published by a consortium of researchers (notably from Oxford University, the Electronic Frontier Foundation and OpenAI), lists some 20 scenarios of malicious AI use, ranging from the automatic targeting of scam victims (phishing, email spoofing…) to taking control of driverless cars, or influencing elections through the massive distribution of “deepfakes” – videos that superimpose someone’s image onto another person’s without it being necessary to film them, opening the way to all sorts of manipulation.
These risks are new because they stem from information technologies with new capacities, which multiply the power of data to influence reality. Companies, of course, are gradually weighing up the odds. Cyber-risk management is evolving, gradually taking on board the complexity of AI technologies in organizing prevention, detection and responses to threats. To counter the attacks and incidents that AI makes possible, risk managers can already count on insurance offers dedicated to data protection, crisis-management systems, and the like. Adapting risk management to the new world heralded by AI will even go so far as seeking the best way to avoid the risks of commercial offences (agreements, cartels, etc.) that an over-enthusiastic AI could commit unbeknownst to its owner.