New post at Medium: The Future of Work vs. the Future of Jobs

A recurring theme throughout my research, writing, and speaking has been the “future of work.” Er, or maybe the “future of jobs.” One reason they’re so hard to talk about is they’re not the same thing.

The future of work has to do with the way companies will achieve productivity in an increasingly automated ecosystem. The future of jobs, meanwhile, has to do with the way human beings will make their living, or, in a theoretical system where resources are provided, how human beings will carve out their identity, which they have traditionally done at least in part through their chosen occupations.

Read the rest of my latest piece at Medium:

https://medium.com/@kateo/the-future-of-work-vs-the-future-of-jobs-88d75698b2a4

Goodbye to 2019, hello to our uncertain future

Our emerging tech panel at UN COP25 in Madrid

This time of year is my absolute favorite because for me it’s so much about relaxed reflection and setting intentions for the year — or even the decade! — ahead. And this year, with Christmas and New Year’s Day falling mid-week, all normal work schedules seem disrupted, creating extra space throughout these final weeks and over the weekend between them to reflect and plan.

It’s also a good time to think about the future in general.

One characteristic of the way we tend to think about the future now, though, is that we do so with more uncertainty than ever.

Yet as I wrote in Tech Humanist:

Here’s what I want to offer you: To me, the idea that the future is never fixed or certain is actually encouraging. Truly, it fills me with hope. I think of the future largely as something we continuously alter, shape, or at least influence with what we do today.

That thought also fills me with a sense of duty because it means there are always many possible futures that depend on me and you and everyone else doing our parts in the whole. It means our everyday actions have more power to shape outcomes than we are often comfortable admitting.

from Tech Humanist: How You Can Make Technology Better for Business and Better for Humans

Planning your own future

My friend and one of the organizers of House of Beautiful Business, Tim Leberecht, has written a lovely guide to help us all do just that. His process will help you have a productive and insightful “time between the years,” as Tim calls it, and a brilliantly successful 2020:

>> How to Make the Most of the Time Between the Years
(written by Tim Leberecht for Psychology Today)

Some of the questions I like to ask myself and encourage my clients and audiences to ask are:

  • What kind of future do you personally want to have?
  • What kind of future do you want for everyone on the planet?
  • What are you working on building?
  • What are you trying to achieve at scale?

By the way, all of this reflection and planning pairs well with another piece about getting better at training your brain what to retain and what to let go of. Hint: it comes down to the discipline of spending time thinking about what you most want to be thinking about.

>> Your Brain Has A “Delete” Button–Here’s How To Use It
(by Judah Pollack and Olivia Fox Cabane for Fast Company)

What are some other questions that help you clarify your purpose? What are some other exercises you engage in to help you reflect and plan?

Goodbye to my wild 2019

For me, 2019 was a whirlwind of unprecedented life opportunities, but also a time for increasing clarity and commitment to what I see as my mission.

To recap: In January, just a few months after my book Tech Humanist came out, it was featured on the CES stage. The following week, I had a tweet go viral and a follow-up in WIRED that also went viral, and I appeared on just about every major news outlet from BBC to NPR to Marketplace to talk about facial recognition (and to pivot the conversation to the larger issue of how technology is changing our human experiences). The next week, I spoke at the United Nations about innovation and humanity.

Then in June, a few days after delivering a keynote on Tech Humanism at a conference in Mumbai, India, I guest lectured at the University of Cambridge. Yes, the same one Charles Darwin, Sir Isaac Newton, and Stephen Hawking are all associated with. That University of Cambridge. I know, I couldn’t believe it either.

In the second half of the year I keynoted Etsy’s Engineering Day in Brooklyn, a Google team offsite in Lake Tahoe, the P2P Transformation Summit in London, DevLearn in Las Vegas, UX Australia in Sydney, the Boston CIO Summit, and presented versions of my Tech Humanist talk at INBOUND, Content Marketing World, the Inc. CEO Summit, Mind the Product in London, House of Beautiful Business in Lisbon, and more.

Our emerging tech panel at UN COP25 in Madrid

Finally in December, after speaking once again at the United Nations headquarters, this time on AI and youth skills, I closed out my work year at the UN COP25 climate change conference in Madrid where I led a panel on the final day about the challenges and opportunities of leveraging emerging technologies to fight climate change.

Oh, and over the course of the year I added representation from Washington Speakers Bureau and Leading Authorities speakers bureau. That’s exciting personally and professionally, but it should also help make bookings easier for many large company clients, which means there may be even more of those audiences in 2020 and beyond.

I’m telling you this to say: I think all of this activity proves there’s hope. I think my year has been wild because a lot of people see the potential for technology to diminish the humanity in the world, and a lot of people want to see to it that that doesn’t happen. If my experience this year indicates anything, I think it’s that people are determined to make the best of our tech-driven future.

So what’s in store for all of us for 2020?

You’ll see many articles with predictions for 2020, and some will be more outlandish than others. I’m including just a few here that are likely to affect you and your business the most:

Expect to see more facial recognition in use everywhere and to hear more debate about it. Governments, law enforcement agencies, and high-traffic facilities like airports see tremendous opportunities and conveniences in deploying this technology, while civil liberties advocates see many privacy risks and challenges. Personally, I’m on Team Ban Facial Recognition Until We Have Better Protections In Place, but I’ll continue to follow all the developments and report on them (as I did in WIRED earlier this year).

Expect to have to grapple with privacy debates inside and outside your organization. The major push for companies to meet GDPR compliance in time for the May 2018 enforcement deadline is only the beginning of such regulatory efforts; the CCPA is due to be fully enforced as of January 1, 2020, and you can bet more regulations will be coming as time goes on. Your best bet for dealing with these is to get ahead of them: enact human-friendly data collection and usage practices, such as not collecting more data than you need or than is relevant to the context of the interaction. (I spoke about this topic extensively at House of Beautiful Business in Lisbon, as well as at many other events throughout the year.)

The push for digital transformation isn’t over yet (no matter how tired of hearing about it you may be). Most companies, organizations, and cities are very much just catching up, still sorting out how, for example, the data from their front-end services can inform back-end operations and vice versa. Meanwhile, upstart data-rich apps and services are still disrupting industry after industry, so we’ll still be talking about that for a while. (This was the focus of many of my keynotes to executive audiences, such as the Boston CIO Summit, and more.)

You may also be tired of hearing about AI, but we’ve only scratched the surface of that conversation. While some folks debate the semantics of whether simple machine learning processes really constitute “artificial intelligence,” the advancements within that space progress daily, with both challenges and opportunities aplenty. (Part of my focus throughout 2019 and into 2020 has been on how machine learning and automated intelligence can help with addressing climate change. Stay tuned for more on that.)

Speaking of which, perhaps the biggest and most urgent trend of all will be facing the scale and scope of climate change, and using whatever technologies and tools we can to remediate against its effects.

Looking into the future for me and for us all

Above all, what is ahead in our future is increasing interconnectedness of our experiences. It’s the perfect time to adopt the mindset that in many respects what I do does affect you just as what you do affects me, and that we’re in this together. We need to accept our futures as wholly connected: connected through data, connected to each other, connected to the planet, connected to our collective destinies.

That connectedness shows in the work I’m lined up to do. To prepare for the bookings I have for 2020 so far, for example, I will be examining more deeply the future of jobs and work, the future of privacy, the future of trust, the future of the climate, and more. All of these topics have a through-line: the future of human experiences will depend heavily on our wise use of technology, collectively and individually.

Speaking of my bookings in 2020, I have talks booked throughout the U.S. — and in Budapest for the first time! If you happen to be able to attend any of these events, be sure to come up and say hi — I’d love to see you. And of course you can always book me to speak at your company or event.

And! I’ve begun to work on my next book. More on that to come, but you can be sure it will follow along these themes.

But for now the big question is:

What will you do with the future for you and for us all?

Here’s hoping you find the quiet reflection you need in these last days of 2019 to set the kinds of intentions that will guide you to achieve what you most want to achieve, for your own good and for the good of humanity.


If this theme resonates with the conversations your company, organization, or city has been having and you’d like to hire me as a keynote speaker at an event in 2020, please do reach out. Here’s to a meaningful year for us all. 

Thinking today for tomorrow

Reading today both that Sumatran rhinos are extinct in Malaysia (only 80 of the species are left in Indonesia now), largely due to poaching for their horns, and that koalas are in ever-greater danger (even if they may not be “functionally extinct,” as a few now-viral articles have claimed) due to habitat loss from bushfires that are likely worsening due to human effects on climate. Human effects carry untold consequences for humans everywhere too, of course, but the suffering of animals tugs especially at my heart.

Today seems like a good day to do some deep thinking about both the big, bold actions and incremental choices we need to make to leave the world better off from here, not exponentially worse off.

Is there a small step you can make? Is there a big action you can take, and/or that we can take together? I invite you to join me in considering these questions today.

Preparing for the Next 10+ Years: Data After the #10YearChallenge Data Sharing Discussion

I’ve been fortunate enough to make my living writing, speaking, and advising about the impact of technology on humanity for quite a few years now. Most commonly, though, my audiences tend to be business leaders, and what I write and speak and advise about most often is how they can adopt a digital transformation strategy that helps the company succeed while keeping the human in focus and respecting human data.

So the massive mainstream media reaction to my viral #10YearChallenge tweet and subsequent piece in WIRED was in some ways a switch in perspective: from talking to businesses about human data, to talking to humans about business use of their data. And it gave me the chance to address a far more universal audience than usual — on BBC World News, Marketplace, and NPR Weekend Edition, among many other outlets — in a cultural moment so widely discussed, it was referenced in the top article on the Reddit homepage and mentioned on The Daily Show with Trevor Noah. My goal through it all was to spark greater awareness about how much data we share without realizing it, and how we can manage our data wisely. Just as the goal of my work is to help amplify the meaning in the experiences businesses create for the humans they do business with, my hope in connecting with a mainstream audience was to encourage people to participate in experiences meaningfully and mindfully. People all over the world gave me an overwhelming amount of feedback: some worried, some praising, some critical. I listened as openly as I could to everything I could.

With all that listening, I know that some common questions remain. I see many of the same recurring themes in comments on Twitter and elsewhere. So I’m using this opportunity here, at home on my own company’s site without the time limits and fleeting news cycles of a major news channel, to address a few of them, and I hope they will, in their own small way, be part of the conversation we carry forward.

Let’s get this one out of the way first, since it’s been the biggest misunderstanding throughout this whole deal:

“Facebook says they didn’t have any part in the meme. Didn’t you say they designed the whole #10YearChallenge meme to gather user data to train their facial recognition algorithm?”

It’s funny: I didn’t say Facebook did it, and quite frankly, it wouldn’t matter. I was musing on the fact that the meme was creating a rich data set, and pondering aloud what that data set could theoretically be used for. In any case, it was a thought experiment, not an accusation. In my WIRED article I expanded on the thought experiment and did not accuse Facebook of having engineered it. In fact, more importantly, as I wrote there:

The broader message, removed from the specifics of any one meme or even any one social platform, is that humans are the richest data sources for most of the technology emerging in the world. We should know this, and proceed with due diligence and sophistication.

— excerpt from my article in WIRED

That said, though, I wouldn’t have made any definitive statements from the beginning claiming that Facebook didn’t or wouldn’t have done something like this. I’m sure there are plenty of well-meaning people in the company’s leadership, but between psychological experiments, Cambridge Analytica, and various leaks and breaches, there have been too many missteps, lapses, and outright errors in judgment on Facebook’s part for them to be above suspicion when it comes to violations of data security and trust.

Nonetheless, although it was a very common misconception, I genuinely don’t suspect that the meme began with Facebook — and I don’t believe that matters. What matters is that we use these discussions to deepen our thinking about personal data, privacy, and trust.

“How can people who’ve taken your message to heart and now recognize the importance of this topic learn to manage their data more wisely?”

If you think of your data as money, you may have a better instinct for why you need to manage it well and take care not to spend it loosely or foolishly. I’m not a fan of the idea of data as currency (partly because I think the human experience is more dimensional than a monetary metaphor conveys), but just this once I think it may be a helpful comparison. And as long as you know you’re safe, not getting lied to or ripped off, this “data is money” comparison may help illustrate why it can be worth spending it on experiences that matter to you.

In terms of actionable steps, here are a few helpful resources:

Personally, one easy step I take is to use the On This Day feature on Facebook to go through my posting archive day by day. I may change the permissions on old content, or delete a post completely if it seems like it no longer serves me or anyone else to have it out there.

I also have recurring reminders on my calendar to do reviews and audits of my online presence. I do what I call a weekly glance, a quarterly review, and an annual audit. For the weekly session, you can assign yourself one platform each week, and review your security settings and old content to make sure there isn’t anything out there that you no longer want to share. The quarterly review and annual audit may entail different activities for you, but for me they also involve updating old bios and links in various places, so it becomes a strategic review as well as a security check.

“What about Apple Pay and unlocking your phone with your face, or accessing your bank account with your face? Or paying for your meal with your face? What about other biometric data like fingerprints?”

All of this is relevant, and I’ll unpack some of these issues more in future articles. The short answer, though, is that with some of these uses, such as Apple Pay, you take an educated guess that the company collecting your data will safeguard it, because the company bears some risk if they screw up. But not all data sharing carries proportional risk on both sides, so think critically before using these services.

At least for now, pay for your fried chicken with cash, not your face.

“What about 23andme and other DNA/genetic data issues?”

That’s a whole other article. (I will say I personally haven’t done a commercial DNA test because bad outcomes always seemed possible.) The topic does relate to the rest of this, and it does matter that we 1) are cautious about using commercial services like this, and 2) hold companies accountable for adhering to the uses we agreed to, not overstepping what we understood to be our contract.

“What about data tracking in smart home systems?”

The standards and precedents are not yet well defined for the use and protections on data collected by smart home devices like smart speakers listening passively for a command. The safest thing to do is hold off on using them, and the second-safest thing is to turn them off when not in use.

While I did address some of the issues and opportunities with smart home automation and devices in Tech Humanist, this is again a topic I’ll dig into more in future articles.

“What about regulations on data? What about regulations on facial recognition, or on AI in general?”

The vast amount of personal data transmitted and collected by business, government, and institutional entities is what powers algorithmic decision making, from ecommerce recommendations to law enforcement. And this vast data and broad algorithmic decision making are also where machine learning and artificial intelligence take root. Artificial intelligence, broadly, has the chance to improve human life in many ways. It could help address problems associated with world poverty and hunger; it could improve global transportation logistics in ways that reduce emissions and improve the environment; it could help detect disease and extend healthy human life.

But machines are only as good as the human values encoded into them. And where values aren’t clear or aren’t in alignment with the best and safest outcomes for humanity, regulations can be helpful.

The European Union’s General Data Protection Regulation, or GDPR, which went fully into effect in May 2018, is for now the most comprehensive set of regulatory guidelines protecting individuals’ data. And American tech companies have to play by these rules: just this week, Google was hit with a 50 million euro fine for violating the provision that requires companies to provide clear disclosure about the data they collect from consumers.

Meanwhile, for many Americans it’s tough to imagine what entity in the United States would be responsible for enforcing any set of regulations pertaining to data and AI.

In the meantime, just as with climate change, we need efforts on the macro and micro scale: the experts tell us that any real reduction in impact on the environment requires big movement from the commercial and industrial entities that produce the lion’s share of emissions, but that doesn’t mean that, say, you shouldn’t put your soda bottle in the recycling bin instead of the trash. We’re learning more and more how important it is to be mindful of our ecological footprint; we also need to learn how to be mindful of our digital footprint.

“Should I turn off facial recognition image tagging in Facebook?”

I would advise doing so, yes.

the Facebook settings screen where you can disable automatic face recognition

“Are you saying I can’t have any fun online?”

Oh, heck no. By all means, I am very pro-fun. Even when it comes to digital interactions.

It’s easier to have fun when you know you’re reasonably safe, though, right? The biggest takeaway from this discussion about the possible side effects of the #10YearChallenge should be to remember that when any meme or game is encouraging you — and large groups of other people — to share specific information about yourself, it’s worth pausing before you participate. It’s relevant to wonder who might be collecting the data, but it’s far more important to think about what the collected data can do.

But share the meaningful parts of your life online with friends and family, and enjoy being able to follow their updates about the meaningful parts of their lives. That has certainly been the most wonderful benefit of social media.

Not only am I pro-fun, I am also very pro-technology. I love tech, and I genuinely think emerging technologies like AI, automation, and the Internet of Things — all largely driven by human data — have the chance to make our lives better. (As I wrote in Tech Humanist, I believe we have the chance to create the best futures for the most people.) But to achieve that, we need to be very mindful about how they can make our lives worse, and put measures in place — in our government, in our businesses, and in our own behavior — to help ensure the best outcomes.

10 Fundamental Insights about the Tech-Driven Future for Humanity*

*and why women, POC, and other underrepresented people in tech should lead it

Today I spoke at the Irish Business Organization of New York’s women’s networking luncheon and addressed them on the tech-driven future for humanity, and why women should be leading it.

Tech Humanist front cover

Here are those insights in brief; if you’d like to hear more of this, of course, I elaborate on all of these points within my keynote presentations and my books.

  1. The tech-driven future will be neither dystopia nor utopia. It will be what we make it.
    We tend to tell a story about technology that pits the worst case scenario against the best case scenario — and conveniently leaves our actions and responsibilities out of the equation. But the truth is we are very much responsible for shaping the future of technology.
    Is it possible that tech can even help us be better humans? As I repeatedly asserted in Tech Humanist, with the emergence of automation, artificial intelligence, and other capacity-expanding tech, we will have the opportunity to create the best futures for the most people.
  2. Humans crave meaning.
    We just do. We seek meaning, we’re compelled by meaning; when you offer meaning to us, we can’t resist it. To bridge the gap between what makes tech better for business and better for humans, business needs to create more meaningful human experiences at scale.
    Moreover, the shape meaning takes in business is purpose, and the amazing thing about purpose is that when you can be clear about what you are trying to do at scale, it helps both humans and machines function more effectively. Humans thrive on a sense of meaning, common goals, and a sense of fulfilling something bigger. Machines thrive on succinct instructions. A clearly articulated sense of strategic purpose helps achieve both of these.
  3. Robots aren’t “coming.” They’re here.
    Everyone talks about robots coming like they’re some far-off future, as if millions of homes don’t already have a Roomba and an Alexa.
  4. What tech does well vs. what humans do well will continuously evolve.
    What does tech do well, for now? Productivity: speed up laborious tasks, improve reliability of variable tasks, automate repetitive tasks, archive, index. Certain types of predictive insights: track data, expose patterns. Security: impose rules and limits, regulate access.
    What doesn’t tech do as well? Tech isn’t so hot at: managing people, making judgment calls, fostering relationships, discerning contextual nuance. (Yet.)
    Also, humans can’t leave meaning up to machines. That’s value humans add to the equation.
  5. Machines are what we encode of ourselves.
    And since that’s true, why not encode our best selves? Our most enlightened selves?
  6. Data-rich experiences tend to be better experiences. Just remember that analytics are people.
    Everyone loves the oft-quoted statistics about data: every 2 days we create as much information as we did from the beginning of time until 2003, and over 90% of all the data in the world was created in the past 2 years.
    And there are huge opportunities to use this data to make amazing, delightful, fulfilling, enriching human experiences possible.
    But what’s important in all of this is remembering that most of this data comes from humans, and represents human identity, preferences, motivations, desires, and so on. Most business data is about people. Analytics, in other words, are people. And while relevance is a form of respect, discretion is, too. So we need to treat human data with respect and protect it excessively, even as we use it to inform the design of more meaningful experiences.
  7. If you don’t align human experiences with meaning, you risk building absurdity at scale.
    There’s a story I tell (and it’s in the book) about a big retailer encoding a behavior change that, at some point, could put a cultural norm in jeopardy. And the upshot is: experience at scale changes culture. Because experience at scale is culture.
  8. “Online” and “offline” are blurrier than you may think.
    This is basically the whole premise of my previous book Pixels and Place, but the short version of this insight is: just about everywhere the physical world and the digital world converge, the connective layer is the data captured through human experience.
    And to create more meaningful human experiences, we need to design more integrated human experiences.
  9. Everything is in flux. Embrace change.
    70-80% of CEOs say the next three years are more critical than the past 50 years. The coming years, for example, are likely to see massive shifts in the scope and types of jobs humans do. Some companies will gain tremendous efficiencies from the use of automation; I propose that companies reinvest some of those gains into humanity in various ways: better customer experiences, job training, basic income experiments, etc. And that where possible, companies look to repurpose human skills and qualities toward higher-value roles.
  10. Diversity in tech is a strategic asset. Scratch that: it’s an absolute imperative.
    We need women — and diversity of all kinds — in tech, leadership, and entrepreneurship for myriad reasons: because algorithms contain our biases, because it makes the space better for everyone, because we need diverse representations of the problems tech can solve, and on and on.

If these ideas and insights resonate with you, check out my book Tech Humanist: How You Can Make Technology Better for Business and Better for Humans. Or inquire about booking me to speak at your company or organization.

Here’s to a more meaningful future for all of us.

Experience Timeline by Technology Era

To understand what constitutes experience and what has constituted experience throughout different eras of technology, I offer this timeline of what characterized and will characterize experiences throughout the major eras of recent and forthcoming technology. We are somewhere around the social-enabled and “smart” era, with elements of the “intelligent” era beginning to show up and legacy remnants of the previous eras still left behind.



platform? context? (not eras, because many overlap)

analog (industrial/pre-industrial?)
  • characterized by: solid state, tangible
  • dominant era: ??–?? (ongoing)
  • automation: mechanical
  • dominant interface: tactile
  • sensory interactions: buttons, dials, levers, etc.
  • visual: yes
  • tactile: yes
  • audio: indicators
  • kinesthetic: motion-powered

digital
  • characterized by: electronic, power-operated
  • dominant era: 19th century–?? (ongoing)
  • automation: electronic
  • dominant interface: tactile, impulse?, text
  • sensory interactions: typing, mouse, visual cues
  • visual: yes
  • tactile: yes
  • audio: indicators

web-enabled
  • characterized by: interlinked, global knowledge, global village
  • dominant era: 1990s–?? (ongoing)
  • automation: interlinked
  • dominant interface: desktop screen, text, images
  • sensory interactions: typing, mouse, visual cues
  • visual: yes
  • tactile: yes
  • audio: content

social-enabled
  • characterized by: social sharing, FOMO, FONS, selfie culture
  • dominant era: 2000s–?? (ongoing)
  • automation: social triggers
  • dominant interface: mobile screen, text, videos
  • sensory interactions: typing, touch, visual interactions
  • visual: yes
  • audio: content

“smart”/connected data sources
  • characterized by: data tracking, anticipatory based on past behavior, algorithmic
  • dominant era: 2010s–?? (ongoing)
  • automation: algorithmic
  • dominant interface: voice
  • sensory interactions: buttons, keypads, visual displays, voice
  • visual: yes
  • audio: interactions
  • kinesthetic: gestures to trigger sensors
  • olfactory: detect gas leaks, detect coffee smell

“intelligent”/AI
  • characterized by: anticipatory based on externalities, secondary behaviors, cognitive cues, emotional indicators
  • dominant era: 2010/20s–?? (ongoing)
  • automation: anticipatory
  • dominant interface: voice, gesture, ambient
  • visual: yes
  • audio: interactions, triggers
  • kinesthetic: gestures to interact

fully virtual / ambient virtual
  • audio: ambient cues
  • olfactory: simulate aromas?
  • taste: simulate taste?

What does placemaking look like in each context?

What does business need to do to innovate in each?

What do meaningful human experiences look like in each context?

What is the future of meaningful human experience?

The future of meaningful human experience is multi-sensory, contextual, dimensional, integrated, intelligent, responsive, anticipatory, adaptive, and inclusive.