The future of the workplace in a post-A.I. world

27.06.2017

Carlos Espinal shares his views on how AI will impact the world of work

Last week and this week have seen two amazing conferences focusing on the future of Artificial Intelligence. This week, we had CogX in London (led by the guys over at CognitionX), and last week, we had the first Transform.ai conference in Paris, where I was moderating a panel with David Yang (ABBYY), Polly Sumner (Salesforce), Jacques Bughin (McKinsey), and Christos Tsolkas (Philip Morris International) on the subject of the future of the workplace in a post-artificial-intelligence world.

We covered questions such as: What happens when machines can do what you can do? How is AI reshaping the workplace? Everyone, from factory workers to oil drillers to doctors to fashion designers, will have to work alongside machines, sometimes very “smart” ones… What types of jobs will exist in the future? How will it change the way managers perform their jobs (from hiring, to evaluating, to promoting talent)? Will the machines manage us? What types of skill sets will we need, and how can companies prepare their workforce and their leadership for this new world?

One of the themes that came up in both conferences is the impact the Singularity will have on us. The term ‘Singularity’, made famous by Ray Kurzweil, refers to a point in the near future where the scales tip in favor of AI-enhanced beings. Sci-fi writers, fear-mongers, and futurists all compete to imagine what this would look like. Though the changes may not be as drastic as those seen in Terminator, one can extrapolate that many of the things we call ‘labor’ today will be drastically different. Basically, the future of the workplace and workforce is uncertain, and therein lies the problem in discussing the topic today.

Just as industrial robots have changed the landscape of manufacturing, AI will change the currently ‘secure’ world of knowledge workers, but how, no one really knows. Will we be integrating AI-enhanced, bio-compatible hardware to help us make decisions? Will we simply rely on machines to do all the heavy lifting, with humans therefore becoming the ‘creatives’? Is creativity even ‘safe’ as something humans can do better than machines, or will creativity be replaced with a human-fooling ‘simulation’ of creativity?

In that spirit, let’s start by looking far ahead and then work our way backwards to today. The big questions we need to answer include: What new types of jobs will be created in a post-AI world? What will be the phases of our integration with AI? And finally, what businesses are being created today that can augment the capability of, substitute for, or increase the efficiency of a human worker?

Let’s address the hardest one first: what are the jobs of the future?

AI will replace us gradually. During this process, there will be short-term and long-term jobs. According to a recent MIT article, these jobs fall into three types: the Trainers, those who improve AI systems; the Explainers, those who interface with commercial or other entities not in direct contact with the AI; and the Sustainers, those who ensure AI operates as intended. Further examples of these roles ‘in practice’ can be found in the article. Dr. Guillaume Bouchard, the founder of Bloomsbury AI, feels that these might be quite short-lived. In his words: “The three categories of jobs are clearly true, but they will only exist before super-intelligence. After [the rise of a super-intelligence], there will be no need for trainers anymore.” For me, it is hard to draw a hard conclusion regarding the ‘trainer’ jobs: how long will we have/need them? Are they even sustainable?

Whilst the MIT article does present a very interesting angle on how things could evolve in a world where machines take over all elements of our decision-making, one of the points that Polly brought up in the panel was around ethical/human decisions that even Trainers (to use the article’s language) will not be able to fully solve. For example, how do we create consistency across AI platforms’ decisions in a world where different companies, with their own different motivations, might train AI systems toward choices ranging from discriminatory for some to too progressive for others? Would it be a human committee that settles matters, for example? It all boils down to a simple question: will general AI ever truly pass the Turing Test across all types of interactions, including those that require credible emotional responses or the resolution of complex ethical dilemmas? Many, including Polly and myself, don’t believe that we will fully get there, but I do think we will be able to feel for and have empathy for machines, which is the inverse of, but still quite different from, the key point we discussed.

As such, perhaps the transition to a workforce replaced entirely by machines will be far more gradual, and will come in far less ‘singularity’-sounding ways. One of the points David brought up was around the subtle integration we will likely go through in incorporating AI technology. We might go from our current wearable-tech phase to a phase where we are embedded with AI systems that help supplement our decisions. In a recent podcast interview with two Seedcamp AI healthcare companies, Viz.ai and Gyant.com, we walked through how this might work as doctors leverage technology to make better decisions and possibly move much of the diagnosis to machines, which might make fewer mistakes than exhausted humans. Moving beyond wearables and embeddables, we enter the phases of integration that start resembling science fiction, including autonomous general AI and ideas like Von Neumann probes; the sci-fi book We Are Legion does a great job of illustrating how such autonomous systems could help us conquer the galaxy. In his book Superintelligence, Nick Bostrom also highlights other ways a super-intelligence could surface in the future. Here is his summary:

A speed superintelligence could do what a human does, but faster. This would make the outside world seem very slow to it. It might cope with this partially by being very tiny, or virtual.

A collective superintelligence is composed of smaller intellects, interacting in some way. It is especially good at tasks that can be broken into parts and completed in parallel. It can be improved by adding more smaller intellects, or by organizing them better.

A quality superintelligence can carry out intellectual tasks that humans just can’t in practice, without necessarily being better or faster at the things humans can do. This can be understood by analogy with the difference between other animals and humans, or the difference between humans with and without certain cognitive capabilities.

Putting all this ‘Supply Side’ tech to one side, one of the points Jacques brought up on the panel was around the demand for these technologies in the markets and companies he advises vs. the supply of technologies we hear about. Jacques made it very clear that demand lags far behind, as there are many complexities, not only in understanding the implications of the technologies that are surfacing, but also in the process of integrating them. Christos, who has worked in the space of digital transformation, shared examples of how complex integrating digital services across a company’s functions, such as targeting and planning, customer service, internal collaboration, and customer ordering, can be. Never mind the issue of then trying to link them into AI systems which might be pseudo-autonomous and could wreak havoc across different parts of the larger organization.

In conclusion, the future is both exciting and uncertain. Exciting, because there is a huge number of opportunities for AI to reduce risk for humans in jobs that are dangerous or where humans’ imperfections create danger. Think defense-related, public-hygiene-related, or toxic-material-management-related jobs, all of which will (possibly) help reduce a lot of social and health problems. Uncertain, however, because there is some risk that AI might just be able to crack that Turing Test across the board and leave us totally jobless… unless it doesn’t, leaving us humans ‘safe’ to deal with jobs that are classically in the realm of what we consider ‘human’: creative jobs, empathy-centric jobs, ethics-centric jobs, and lastly, jobs where discerning the fine line between good data and bad data is critical.

Update: Calum Chace, author of Pandora’s Brain, whom I had the pleasure of interviewing on our podcast, very kindly provided some feedback that is worth sharing for us to reflect on:

I think it helps to separate out discussion of the technological singularity (passing the Turing test / superintelligence) from discussion of the economic singularity (cognitive automation and technological unemployment).

I agree, Calum; one (the economic) is far more likely to be a certainty than the other, and the ramifications are going to be just as vast.
