Episode 4 (season 2): Nicky Dries on the Future of Work and Constructive Pessimism

"And that world of abundance.. yes, that may be the case that billions of profits will be made based on that technology but how will that then be distributed evenly in society? Because over the past few decades, we've seen that the top 1% has only gotten richer, so it will be the case that profits will be booked with that technology but where does the assumption come from that that will suddenly be miraculously evenly distributed?"

>>> intro

Welcome to the "Let's Talk About Work" podcast. This episode features an optimist and a pessimist at the same table: Bart Wuyts and Professor Nicky Dries both advocate for a reorganization of the labor market, more meaningful jobs, and increased wellbeing, but from different perspectives. Bart Wuyts, as the CEO of WEB-Blenders group, focuses on reuse, circularity, and coaching for jobs and languages, whereas Professor Nicky Dries contributes from an academic research standpoint. The conversation covers concerns as a counterbalance to pervasive optimism about progress, the impact of artificial intelligence on work, the importance of collectivity, and much more. Because when we talk about inclusion, we're not just talking about today and tomorrow, but also the day after and the times beyond.

We have a special guest today, none other than Professor Nicky Dries. It's a great privilege for us to have Nicky here, who has been working on the future of work for several years at KU Leuven's Future of Work Lab.

Thank you for the invitation.

Welcome! Could you share more about what we should understand by 'future of work' and especially about your work in it?

Yes, the future of work: we hear more and more about it lately, including concepts like digital literacy and the direction the labor market and various professions should take. I always find it somewhat peculiar that the future of work, as well as the future of everything, is currently such a hype. It seems like the future has always existed, right? It's just the historical period following the present, which was true even in the Middle Ages and the Stone Age. Why is there so much fuss about the future now? I believe there are several reasons.

One is certainly the hype around AI (artificial intelligence), especially since November 2022, when ChatGPT was introduced to the general public, causing quite a shock, particularly among those previously unfamiliar with the technology.

Secondly, there's the whole narrative of the fourth industrial revolution. The first revolution brought steam, the second electricity, the third ICT, and the fourth is said to blur the lines between the physical world, virtual reality, and human biology and DNA. When I read that definition, I had to take a moment to think about it. So, for a few years now, I've been reading many books, including some quite fascinating and odd ones about the future and disruptive technological innovations, which, according to some, could even threaten the essence of being human - with the labor market and work being part of this.

Much research in our field (management, and I'm also part of the economics faculty at KU Leuven, where our lab is based) is so-called forecasting or predictive research, making projections based on technological capabilities, like 'so many percent of jobs in this profession could be automated by year X'. In our lab, we take a different, more critical approach, arguing that much of this literature is overly deterministic. The assumption is 'if technology X increases in capacity to this extent by that year, then this or that will happen'. There are many problems with these predictions, which often don't come true. So we focus more on the social and democratic aspects of the future of work: what are the agency or action possibilities of employers, employees, and other labor market participants? Because these predictions assume the technology almost implements itself, right? Our critical perspective says, wait a minute: if you're saying 'so many people in those professions will be replaced by robots or AI in 20 years' - it's not the robots or AI doing that, right? It's people deciding 'we're going to replace these workers with that technology'. We call this invisibilization. We think a discourse has emerged that lets the human aspect fade into the background, focusing too much on purely technological capacity and prediction.

You mentioned looking more at the human side. How do you do that, and what does your research entail?

We use quite unusual methods for our field, especially considering we're part of the economics faculty. The team I work with within the Future of Work Lab comes from various backgrounds - currently all social scientists. I was trained as a psychologist, one of my closest collaborators is a historian, and one of our doctoral students studied literature and went to film school.

So you're quite the outlier at the economics faculty.

Yes, although people usually see the value in working across different disciplines. For example, history is always seen as very relevant for understanding the future. And we look at atypical data, like science fiction films; we have a project on the potential for resistance by workers or the population against a technocratic elite, as depicted in science fiction films. Another project we're doing is analyzing biographies of Silicon Valley CEOs to see what ideologies and personalities lie behind the technology. I follow many Silicon Valley CEOs on Twitter and sometimes wonder if these are the people we want shaping the future labor market.

What does that make you think? Why do you say that?

For example, at Amazon, people are not allowed to use the restroom; everything is heavily monitored. At Twitter - or X, since Elon Musk took over - one of the first things he did, after a massive round of layoffs, was to install bunk beds at the headquarters so people could comfortably sleep at work. And there are certain visions... Sam Altman is also acting strangely, in my opinion. He was CEO of OpenAI, or at least one of the founders (I think he's no longer CEO), and he recently tried to convince investors that AI will play such a crucial role in the future of humanity that we should direct almost all resources and money there.

While this requires enormous data centers and consumes a lot of energy, which is bad for the climate. But also, especially: that's one company, right? There are just absurd things happening there that make me raise an eyebrow. This is not research, what I'm saying now; these are sources of inspiration for research. But it gives me this feeling of 'we need to work across disciplines'. Those people are probably brilliant engineers and ICT experts, but they're not social scientists, not political scientists, and in my perception, there's a feeling even among politicians of 'we don't understand this technology, so we should leave it to the experts'. That's true, but all the social and economic consequences around it, in my eyes, don't fall within the expertise of most people from, for example, Silicon Valley or the tech sector. On the contrary, those people often say 'we just build', right? 'We just make that technology - we're not responsible for the social consequences'. But I disagree with the idea that this technology is value-neutral and can just proceed unchecked. Also, there's resistance from that world to regulation, often with the argument that the government doesn't understand it anyway and thus will hinder innovation.

I sense that you might have entered this research theme out of a kind of concern, rather than looking at these developments - as others might do - from a kind of progress optimism, 'this will move us forward', 'we will become even better versions of ourselves'. Is it true that concern and a bit of activism drive you?

Yes, that's absolutely correct. I have a concern. I must say, I've always been someone interested in technology; I enjoyed watching science fiction as a child and such. It's not resistance to change on my part. I do refuse to use ChatGPT, for example - students may use it, I don't have anything against it per se - but I won't use it.

Why not?

Firstly, I don't find it a pleasant tool. I must say I haven't used the paid, latest version, which is supposedly much better. But I quickly lost interest, and I also feel (I might change my mind, that's part of life), when I talk to people who use it often, that they are people who don't like writing, for example. And I really enjoy that. So I haven't really found its usefulness for what I would need. I've talked to people who said they use it to write letters to their daughter when she's at summer camp. It's just not my thing, sorry. That was also someone, an engineer, who said he doesn't like writing emails and found it difficult to articulate them. But for me, that's something I really enjoy. And maybe the tool has improved by now, but when it first came online, I noticed that if I knew a lot about a topic and asked about it, the answers were often really wrong, which made me lose trust in it. If I were to use it for something I don't know much about - that still seemed useful - but then that information isn't reliable… (it may have improved by now, of course). But it's also just a kind of resistance on my part - not that it makes much difference on the world stage what I do. I just have a kind of resistance to the idea that I would need it.

Artemis, you use ChatGPT sometimes, right?

I use ChatGPT. I enjoy writing, but I use it to adapt my writing to the language the algorithms need, because I don't like writing tailored to what LinkedIn or Instagram wants. So I always write my input, my content, first, and then ask it to optimize it so that LinkedIn's or Instagram's algorithms are used optimally for reach. And then I don't have to bother with the language of those algorithms; I can just stick to my own writing.
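For readers who want to automate that same two-step workflow (write your own content first, then have a model rewrite it for a specific platform), here is a minimal sketch. It assumes the OpenAI Python SDK and an API key in the environment; the model name, prompt, and the optimize_for_platform helper are illustrative assumptions, not what the speaker actually uses - she describes working in the ChatGPT interface directly.

```python
# Sketch of the two-step workflow described above: keep your own writing,
# then ask a model to adapt it to a platform's algorithm.
# Assumptions: the OpenAI Python SDK (pip install openai), an OPENAI_API_KEY
# environment variable, and an illustrative model name and prompt.
from openai import OpenAI

client = OpenAI()

def optimize_for_platform(original_text: str, platform: str) -> str:
    """Return a version of original_text rewritten for reach on the given platform."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would do
        messages=[
            {
                "role": "system",
                "content": (
                    f"Rewrite the user's post so it performs well on {platform} "
                    "(hook, line breaks, hashtags), but keep the author's voice "
                    "and the original content intact."
                ),
            },
            {"role": "user", "content": original_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    draft = "We just published a new podcast episode on the future of work."
    print(optimize_for_platform(draft, "LinkedIn"))
```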

And for you, chat GPT is clearly male.

Yes, I noticed that too! Yes, you said 'him'..

I wasn't even aware of that.

Sometimes people ask me if I'm against technology. When people asked that, I had to think about it. And no, that's not the case at all. I think I take that critical, hence pessimistic, position because I find it underrepresented. And it's necessary that there's balance in this debate.

We're currently writing a paper - sometimes we're not popular… And of course, you sometimes also want to throw a spanner in the works or provoke a bit. We're writing a paper against optimism. Because I've noticed, again in those tech environments, that the concept of 'optimism' is actually being abused. There was recently - I don't know if you've read it - something that to me was clinically insane: the Techno-Optimist Manifesto by Marc Andreessen. He's an investor from Silicon Valley, a billionaire, who says that ethics, corporate social responsibility, and the sustainable development goals are the enemy of progress. Yeah, you should read it… But many people on LinkedIn, including Flemish entrepreneurs, think it's amazing!

And there's a quote that I've personally come to hate: 'optimism is a moral duty' - because sometimes that means 'stop complaining', 'let us do our thing'.

That's what I hear now, although that's probably not always the intention. But I think under the guise of optimism - also about the climate, for example - a lot of criticism is swept under the rug, as if there's always some kind of magical black box solution somewhere between a problem and a solution. So it's always presented as 'it will be fine'. Which makes you think as a doom thinker - a club I now professionally belong to, I fear - wait a minute, how do you get from the problem to the solution purely based on a kind of belief that everything will be fine?

So yes, we sometimes advocate for constructive pessimism. Where you really try to pinpoint the sore spot and then probably also conclude - something the so-called optimists usually don't want - that certain things need to change radically. And then you often end up in the club of anti-capitalist thinkers and so on. So optimism doesn't just mean optimism; it means status quo to me.

I find this an interesting debate because I'm naturally quite optimistic - which doesn't mean I can stand behind all those technological predictions; the social and ethical context definitely needs to be looked at carefully. But what I wanted to get at is the following. I read something in the presentations you've given here and there: you're working on the future of work, and somewhere you say that that future - through how we talk about it and act today - is something we make ourselves. And that sometimes makes me a bit worried: if you're promoting pessimism - as I just heard you do - you might easily (something we've probably all experienced in a professional or private context) end up in a self-fulfilling prophecy. If you keep saying something won't work, then it won't work. And vice versa: if you believe enough that something will work, then there's also a greater chance that it will indeed work. I can align myself with the observation that the future is not deterministic, that we actually make it ourselves by how we talk about it and how we look at it.

I think what I'm actually trying to say is that the people who call themselves optimists.. Or for example, that techno-optimist manifesto, for me, that's not about optimism. That's about closing a debate about a radically different and better future.

For example, in the climate debate, climate scientists are accused of being doom thinkers, right? The optimists from Silicon Valley say 'we're going to fix climate change' - a techno fix, that's called. How? We don't know! When it comes to AI and robots, people in Silicon Valley say 'it's going to create a world of abundance', so everyone is going to become super rich. Then my question is: okay, wait, what are the effects of that technology going to be on the labor market? If very many people are going to lose their jobs, what's the plan then? Are there also projections about how many new jobs will be created? What skills will they require? Will the same profiles that are going to be automated be able to flow into those new sectors, into those new kinds of jobs that are supposedly going to emerge? Or will there be a surplus of people who fall out of the labor market? What are we going to do with them then? Are we for a universal basic income - something most politicians won't even discuss?

And that world of abundance… yes, it may be that billions in profits will be made with that technology, but how will that then be distributed evenly in society? Because over the past few decades we've seen that the top 1% has only gotten richer. So yes, profits will be booked with that technology, but where does the assumption come from that they will suddenly, miraculously, be distributed evenly? Because that's not the society we live in now. I think the word optimism is misused to say, in a very superficial way, that everything will be fine without concretely saying how.

And the fear you then speak of is that this means 'let us do it, we know how it should be done'. But at the same time, these are people who may have once started from very noble principles but are now so driven by the harsh monetary reality, and maybe even by increasing their personal wealth, that you wonder whether we're wise to leave the future entirely in their hands and to trust them fully - that's what I hear you say.

Yes, and I don't know if it's necessarily true, but there seems to be a basic assumption that optimism is associated with constructive action and manufacturability, and that pessimism is just complaining and doing nothing. But I don't know if that's true. Pessimism can also be associated with resistance, collective action, protest, democratic action. It's just that sometimes people don't want that, I think.

For example, I recently had a conversation with a group of CIOs - chief information officers - and I told them I could fully agree with the story, but there's one thing I don't understand. Namely, if AI can supposedly do and know everything, why don't we say as a society: okay, we're going to let AI loose on the sustainable development goals. 'Hey AI, ChatGPT, solve all the SDGs. Make sure that any action taken by the business world can never conflict with the SDGs and always advances at least one goal.' And then they just say to my face: 'Yeah, but there's no money in that, right? Technology development will always pursue a profit model.' And: 'Well, Coca-Cola didn't have a societal goal when it was invented, and you like drinking a cola, right?' So those are quite strange discussions. On the one hand, people want to emphasize that it's the most important technology ever, but on the other hand, if you then ask whether we can't use it to save the world, you get a kind of profit story back.

I can imagine that your discourse and research within the faculty also provoke some interesting discussions. I can imagine that many of your colleagues are inspired by what happens in Silicon Valley and would like to translate that to Flanders?

Yes, in the beginning, when I was researching this… I also read some really crazy books, for example the book about the singularity by Ray Kurzweil, former executive at Google, who talks about hyper-intelligent AI - but that book is, I think, over 20 years old! Then I read a book about transhumanism, including about people who have themselves frozen in a facility in the Utah desert, and with such stories I then came to work, so people indeed found it all very crazy. But now, so many years later, when we read something in the media about Elon Musk's brain chips - there are already brain chips in China, for example, for people with depression, which release substances directly into the brain - then people start to come around a bit: maybe it wasn't all that crazy after all.

And there is, in general, an enormous, booming interest in the field of management specifically - also in economics and econometrics - in the future, in predictions, and in all the questions about what is going to happen on the labor market. So everyone is busy with it in a different way, and I sometimes worry whether our ideas are too critical, but I think people find it a pleasant change. What we really advocate for is transdisciplinarity. So I'm not saying that all the others are wrong and we are right. For example, we did an analysis of the media, of newspaper articles about the future of work, and what we saw was that the visions put forward there are very limited. Actually, only three groups are heard. The first is tech entrepreneurs - the most quoted person is Elon Musk, so on the future of work you hear him most often, also in Flemish newspapers. The second group is economics professors, but often really from that calculation logic, for example 'universal basic income is unaffordable'. And the third group is the so-called doom thinkers: authors, sometimes journalists, sometimes best-selling book authors like Harari - who wrote Sapiens but also a book about the future, Homo Deus, with a very negative vision. Those groups are covered in the newspapers. No policymakers, actually no politicians, no unions or representation of the ordinary working person, no non-experts, shall I say. What we actually advocate for is to open up that debate more democratically and to acknowledge: if the future is manufacturable, then the question is, what future do we want? And then you're in a political-ideological issue that you can't solve with predictions. Technological capacity is one thing, but what is your dream labor market in 2100? We work a lot with the distant future in our lab, because then you also go beyond pure realism or affordability - where the discussion often strands.

And that distant future is, for example, 2100?

Yes, for example. What we advocate for is that you also have to look beyond generations, break free from the current way of thinking or from your own profession, or from the idea 'oh that's science fiction, that will never happen'. With 'never' people mean 'that won't happen in the next 10 years, in Flanders, in my profession' and then they might still be wrong. So through that research, I've also started to believe that anything is possible.

What do you learn in your research that is specifically interesting or important for us to engage with here? We're a company that tries to create opportunities for meaningful work for as many people as possible. And that often means, in particular, for people who still fall through the cracks too often today. Do you have any insight into that? Is there research you're involved in, or that you come across, that relates to this - or that indicates that all this talk about the future of work, everything that is said and researched about it today, has this or that impact on the work we're all doing here today?

The ideas are tumbling through my head right now. There are so many of those predictions… You can't take them as truth, and many predictions that are relevant for everyone - but certainly for the context you describe - are a bit contradictory. On the one hand, you have the story of the retirement age: we're all going to have to work longer to keep social security affordable. On the other hand, other people tell you there won't be enough work, that there's going to be large-scale automation in the coming decades. Those two things are already contradictory.

In the short term, there are a lot of projections about the balance between supply and demand in the labor market - which I've already seen presentations about in several European countries, for example from Statistics Flanders, which says that due to aging there will be too few workers and so we need labor migration. So we need more migrants, and then you feel the political sensitivity and the charged nature of such themes.

On the other hand, there's also the feeling that the people who are now falling through the cracks will fall through the cracks even more in the future. Because it's all about re-skilling, upskilling, digital literacy... Everyone has to keep up with the digital transformation - is that realistic, yes or no; is it true, yes or no? In 2013, there was that very famous study by Frey and Osborne at the University of Oxford (The Future of Employment: How Susceptible Are Jobs to Computerisation?), which said that 47% of professions in the United States run a high risk of automation in the next 10 to 20 years. That study is now 10 years old. They talked about high-risk, medium-risk, and low-risk categories: high risk, for example, included the professions of truck drivers, production, and construction. Medium risk referred to professions where you still need fine motor skills or nuanced human perception, such as service and maintenance. And under low risk, they placed professions where what they called 'creative or social intelligence' comes into play: medicine, management, education, research,...

But in the past, it was more about robotics, and if you talked to people about automation a number of years ago, I found that you sometimes got a kind of elitist reaction of 'oh yes, those poor factory workers' - so automation was very much associated with blue-collar work. And now, since ChatGPT, you notice that people in knowledge work are also starting to realize that their type of job might be threatened too. And last year, for example, there were those strikes in Hollywood (something I find very interesting and disturbing), where screenwriters and actors said that the studios want to go too far with the use of AI, for example to mine their scripts and use that data, or to replace background actors with scanned or digitally rendered images. Now a kind of discourse has emerged that is quite ambivalent.

On the one hand, it's about the arts - when it comes to Hollywood - so artists are now also uniting, because Midjourney and that kind of AI are using the work of artists without respecting their intellectual property. So somewhere I think those are not good tendencies: the kinds of professions that people said ten years ago couldn't be automated - we already see that happening.

But on the other hand, I think - and that's a very naughty and very critical take of mine - that maybe we'll finally get some support for solidarity between groups that traditionally..

Groups that traditionally stand far apart?

Yes.

Because it was always about educational level and socio-demographic background and social class. If all our jobs are threatened, maybe there's really a basis for solidarity among everyone who works. So that's the hope I see in my doom thinking. It's a very pessimistic view, but one in which there's hope that these tendencies can create a common basis. And an incredibly interesting book that I recommend to everyone - it has a bit of a strange title - is Shop Class as Soulcraft by Matthew B. Crawford. It's written by a philosopher who worked in a think tank in Washington, found that completely meaningless, stopped doing it, and started working as a motorcycle mechanic. That book is a history of how, through education among other things, a distinction began to be made between thinking and doing, and how crafts and craftsmanship - the technical and vocational tracks - were systematically devalued because all young people had to become knowledge workers. And if we now put all those predictions next to each other - how is it going to go with the labor market, and for the people who sometimes fall through the cracks now, is it going to get worse or better? - well, those predictions go in different directions. That author says the profession of plumber will be the last that can be automated.

AI won't solve that. Indeed.

I once received a photo from someone of large scaffolding at a construction site that read 'ChatGPT, fix this building' or something.

So that author says you can't outsource the repair of your house to China. And he also argues that plumbing and the profession of electrician are intellectual work: they require advanced cognitive capacities and problem-solving ability, and most people prefer a plumber who has been doing it for 30 years because that person has built up expertise. And about his own sector, that of mechanics, who also work a lot with vintage engines: he says they have a network throughout the US, because you sometimes need parts or engines that are no longer made. So he really advocates for a revaluation of manual labor.

Yes, as you were talking, I was thinking about automation and robotization and technology: to the extent that they can help eliminate or automate the so-called bullshit jobs, we should only be happy about it. You would expect, thinking of all the challenges our society faces, that there would still be so much other work to be done that this in itself should not be a problem. The development of AI, of course, puts everything on shakier ground. And yes, that might eventually lead you to the question of whether we will work at all in 2100.

You mention 'bullshit jobs', and David Graeber is really my academic hero, so I've thought a lot about that - also because he's something of a role model when it comes to whether an academic can also be an activist. I don't know if you know, but he was one of the founders of Occupy Wall Street and of that slogan 'We Are The 99%'; he was a very respected academic, a very serious man, but also active in social movements. But for David Graeber, a bullshit job is management, right? He especially targets management. And one of my new interests - I don't know if I'll do research on it - is that I've asked friends and family to forward to me any weird emails their company sends about digital transformation and so on. And what I read in there… Often in those emails you get a very vague, very positive story about digital transformation, where then somewhere in the second or third paragraph it's mentioned that X number of FTEs are being laid off. And what strikes me so far - but that will be my cynical, doom thinker's view - is that it's never management that's being sacked. And I found that a bit strange in the original study by Frey and Osborne too: in it, management is also placed among the types of functions that could never be automated, or that run only a low risk.

But David Graeber makes exactly the opposite analysis: for him, bullshit jobs are marketing, consulting, middle management,... And he says: a garbage man, that's someone with a use. A plumber, an electrician... According to David Graeber - but of course, he's a very critical thinker and also a bit of a neo-Marxist, I think - there's an inversely proportional relationship between the status society assigns to a profession and its real utility. I get the feeling at some companies that they want to automate everything except management. And sometimes I take the opposite view. Because who's going to do the work then? Are those managers going to manage those algorithms or something? I find that a bit of a strange reasoning.

It of course connects a bit with the development of self-direction in organizations, which has been worked on for about 20 to 30 years now, and which in many companies has also led to a reduction in management. But I understand that it perhaps still happens on too limited a scale today, and so you see the movement you mention still fully underway.

Yes, so that's still the question, whether that trend towards self-direction and less management (where you could probably save the most as a company because those are the highest-paid functions) will continue. And then your earlier question: 'is there still work in 2100'..

The future is open; it can go in all directions. And there are also positive visions, including of a future without work, but for the optimists, and certainly the economists, those are also often not up for discussion. There's a movement in the management literature; they call themselves anti-work or post-work. The anti-work people are very critical; they say all work is exploitation. Ultimately, every person who is not a business owner works for another, so of all the productivity or profit you create, you often don't really get your share. You just get a wage. Post-work is perhaps a utopian vision for some: those are the ones who wonder what the world would look like if we no longer have to work but still live in that world of abundance they talk about in Silicon Valley. So AI and robots are maybe going to take over most of the work, especially the boring, dangerous, dirty work. What would society look like if we live in a leisure society where none of us have to work anymore?

That's a form of utopia, but if that comes true, the economists would - very typically - say 'no, work has a super important function in human life' and 'research shows that people who are unemployed are depressed', 'humans have always worked, we need to work'... But that's true in today's society - if you're unemployed today, you're probably depressed because you probably have financial problems or there's a stigma. But if that falls away, if it's not stigmatized and you have financial security, then that's a whole different story. But I also don't have an answer to the question of how humanity would feel if it didn't have to work. Is that better or worse? We'll see.

We need to start wrapping up because we try not to make our conversations too long for the listener. Although I feel like we could go on for much longer. Is there any advice you would like to give or could give from your academic background to people who are active in the labor market today: the employer or the employee, the job seeker?

I would say: unite. From a doom thinker's perspective - should things go badly with the future of work (maybe not, maybe yes) - I think the solution is the collective. I think the increased individualization and polarization in society are not doing us any good, and I think uniting around shared interests is very important. Union participation too. I think there's currently a discourse that tells us we all just have to adapt: we have to upskill, reskill, learn to work with that AI, it's inevitable, it's there, we all have to go along with it. And that's actually not true. Historically, it's not true either. In previous industrial revolutions, the union was born: when the factory owner said 'we're just going to do this', the workers united, and from that, in the long term, the unions and the labor movement grew.

So I do expect some kind of social struggle in the coming years if it turns out that multiple sectors are going to be threatened, or if we see those effects on the labor market. I think it has to come quite close to people before they take action. But I still want to spread that message on a broad scale: we live in a democratic society, action is always possible. The public debate is so much about technology these days that people think 'technology, I really know nothing about that, so I can't participate in this debate', but actually, that technology is just the sauce on top. What it's really about, as always in history, are the rights, the duties, the working conditions, the labor relations. We can draw inspiration from, for example, the social movements around the climate - they're very unpopular, right? Those people glue themselves to highways, they throw soup at paintings... But they're an example of social movements that actually try to exert pressure on policy from the bottom up. As has always been possible in a democratic society.

So we need a union 2.0?

I think so, yes.

Artemis, do you have a question for us?

Yes.

Nicky, when I just heard you talking about activism, also in the academic world, I had to think of a project by a student at UGent, Leen Al Massalma: the 'academic activist' or the 'activist academic' (I'm not sure exactly what she called it). She advocates for more space in the academic world to also be able to learn about the injustice we see happening globally. It cannot be that if there is a genocide happening in your country of origin, we do not talk about it, or do not learn about it, or cannot take action on it. And I hear you making something of a plea for meaning around the future of work, but also more broadly than your research domain, actually - within the academic world. And I was wondering: that call to unite, is that also a call to create a new narrative?

Yes, absolutely.

Our research is very much about narratives, of course. If you go outside the academic world, 'narrative' is probably a vague concept for most people. But in one of the first studies we did about the future of work, we concluded that we live in a narrative reality and that those predictions - for example, if you say 47% of jobs are going to be automated - are predictions, but they are also stories. And you're going to get reactions to a story. You also see over the last 10 years that that was the headline people took from that study (47% of jobs are going to be automated), and that it has become a benchmark that all those commentators in the newspapers and in the public sphere have started to react to ('no, it's higher than 47%' vs. 'no, it's 0%'). And I know not everyone thinks Sapiens by Harari is a good book, but he has a super interesting point about predictions. He says there are predictions that do not influence reality - for example, when the weatherman or weatherwoman says it's going to rain tomorrow, that has no effect on whether it's actually going to rain. But there are also predictions that will shape the future, for example predictions about the elections - highly topical. If you now say this or that party is going to score this much, people might vote differently than they otherwise would, making that future more unpredictable through the prediction itself. Now, if that's true - and some people find that a really enormously controversial statement, because they say there are still objective truths in the world, and that's also true - but we live in a society with a 24/7 information cycle. So if it's true that by constantly putting something on the public agenda or in the media, it can either become true, or people will resist it so strongly that it won't become true, then that means that basically anyone can put something on the agenda in a democratic society.

Last week I had to speak somewhere, and one of the discussants said that economists have long since calculated that a universal basic income is impossible. To which I said that our approach is different: if 80% of the population demands a universal basic income and politicians are busy with elections, then the chance increases that it will happen anyway. That person disagreed, right, but somewhere that question lingers of whether we live in a democracy and which things are democratically possible. And if you want a radically different way of working and living, for example, is that possible? Within what time frame? Can you do that in isolation, in a single country? Or does it have to be on a larger scale - European, worldwide? How will the world evolve? So the future of work is totally connected with the future of everything: the future of the climate, the future of migration, the future of, yes, war situations and which culture dominates the world... We'll see.

Nicky, thank you for being here with us. It was particularly interesting and I'm sure - no, I'm certain - that will be the case for the listener as well. Thank you.

Thank you.

[outro]

>>> You were listening to an episode of 'Let's Talk About Work', the podcast by the WEB-Blenders group. Our conversations are about work, the path to work, well-being at work, and everything that comes with it. You can find us on your favorite podcast platform and at www.blenders.be/podcast. On social media, you can follow us on LinkedIn – where you can find us under Podcast Let's Talk About Work; and on Instagram as blenders.podcast.letstalk. You can also stay up to date through the Blenders newsletter. Were you intrigued? Did this conversation make you think? Would you like to be one of our next guests? Let us know via info@blenders.be and who knows, maybe you'll join us at the table soon.