The Sunday Magazine·Q&A

How to 'futureproof' your job, life and all you hold dear

New York Times tech columnist Kevin Roose’s new book, Futureproof: 9 Rules for Humans in the Age of Automation, explores how AI has changed our lives, our jobs, even our tastes — and why we need to embrace our humanity if we want to live in harmony with the robots.

Tech writer Kevin Roose says we need to remember our humanity to keep up with technological change

Futureproof is the new book by New York Times tech columnist Kevin Roose. (Brian DeSimone, Penguin Random House)

Nowadays it's not uncommon to look up from your phone and realize you have been standing there for way too long. For many, zoning out while scrolling through images, news feeds and social media is a near-daily occurrence. 

For New York Times tech columnist Kevin Roose, that moment hit him where it has hit so many: in the bathroom.  

But rather than drop his phone in the toilet and flush it away forever, Roose set out on a journey to redefine his relationship with technology.

In a world increasingly shaped by the rapid development of A.I. and machine learning, Roose wants to help us build a better relationship with our robot overlords.

The Sunday Magazine host Piya Chattopadhyay spoke to Roose about his new book, Futureproof: 9 Rules for Humans in the Age of Automation.

Here is part of their conversation. 


Is it time to head for the hills and throw our devices in the creek on the way up there?

No, I don't think it is. Obviously, that's a choice some people will make. But, I think for most of us, the thing that we need to be thinking about is how to coexist with machine intelligence.

It's not just that we have phones in our pockets. It's that with these phones, tens of billions of dollars have been spent on making them addictive through social media apps, through personalized recommendations and through push notifications. They are setting our schedules. They are telling us what to think about and when to think about it. They are doing more and more of our cognitive work and they are becoming, in some ways, our bosses.

What's wrong with that? Why is it important for the future of humanity for us to be the boss?

Because the phones are not working for us.... They can steer us in directions that we don't organically want to go in. There's been a lot of research showing that we actually trust algorithmic recommendations more than we trust our own tastes.

I've felt this in my own life. I used to subscribe to a wardrobe algorithm service: you'd input your dimensions [and] the things that you liked, and it would send you a box of clothes every month to wear.

I had this moment one day where I was looking in the mirror at my algorithm-chosen wardrobe. I just thought, "I don't even like this stuff. I just do this because the algorithm picked it for me." And I realized that I was not in the driver's seat of my own life anymore. 

As you write in your book, the difference between using our devices in a way that amplifies our humanity and a way that diminishes it usually comes down to who's doing the driving.

Exactly. There's been a lot of research on phone use. Not all phone use is created equal. There are ways in which our phones do work for us and make our lives easier, and there are ways in which they corrupt our lives [and] our choices, and make us less human.

One example from my own life is in Gmail, where you can use those auto-generated replies that say, "Yes, I can make that time" or "No, I can't make that time." You just press the button and the A.I. sort of writes the email for you.

That's an example of something that seems like it's saving us work, and in some cases it is, but it's also turning us into a kind of extension of the robot. I think that's really dangerous: not just for our future as workers, but our future as humans with independence, creativity and agency.

If we are outsourcing all of our decisions to these algorithms, who are we? What are we doing that is unique and human? 

You write in your book that you once thought that the fears [that] A.I. would make humans obsolete were overblown, and I'm wondering if you still hold that position.

Yeah, I call myself a "sub-optimist" now because I really am still optimistic about this technology. I think that A.I. and automation, if we do it right, could radically improve lots of people's lives. It could help us discover cures for diseases or help us address climate change. It could do all kinds of things and make work less central to our lives, which is a good goal to have.

The Canadian flag flies in front of the Research In Motion company logo at one of its buildings in Waterloo, Ont. The city created a job centre in 2012 for people laid off by RIM. (Dave Chidley/Canadian Press)

You came to Waterloo, Ont., for a story about how a place can recover when a technological change leads to massive job losses, which we saw at RIM [Research in Motion]. Tell me what you found in Waterloo.

I was just fascinated because the story of Waterloo is very different from the story that you hear about cities like Detroit or Rochester or other post-industrial cities in the U.S., where they lost their major employers and sort of went into a kind of death spiral. But Waterloo recovered very quickly after RIM started shedding jobs, and I was curious why that was.

I found that there were basically two reasons for that. One is that I think Waterloo, and Canada in general, just have better social safety nets for people. People aren't losing their health care when they lose their jobs. People aren't rushing — there's not as much need to get a new job immediately and take the first thing that comes along. 

But there are also these kinds of local efforts, what I call "small webs." In Waterloo in particular, there was a community-wide attempt to get people who were laid off by RIM and BlackBerry new jobs quickly.

You say coding, for example, isn't necessarily the only thing we need to be focused on. Talk about some of these other things that you see as beneficial.

What's left for us is the distinctly human stuff, the stuff that requires skills like empathy, communication, leadership and courage. That's the stuff that machines can't do. We need to be educating people in those areas.

So if you're a coder, that's fine — there will be plenty of work for you. But it's going to be more about your skills at talking with people, understanding their needs, communicating what you're coding to non-technical audiences.

For doctors, it's going to be less about reading scans in a lab, because A.I. is already very good at that. It's going to be more about relating to patients, making them feel heard, understood and cared for. Every job is going to need to become more human and less technical.

You call these kinds of skills "machine-age humanities."

I think all of them are rooted in humanity, and there are various skills that I think we need to start teaching not just kids, but ourselves.

These are skills like active listening. These are skills like being a good person, like the basic skills that we teach little kids, but they sort of drop off our radar for the rest of our education. 

There's an investor I talked to who invests in a lot of A.I. companies and he requires them all to read the book All I Really Need to Know I Learned in Kindergarten [by Robert Fulghum], which is all about those basic skills we teach little kids: sharing and playing nice with others.

I think a lot of us could use a refresher course on those.

This Q&A was edited for length and clarity.