Goodbye to 2019, hello to our uncertain future

Our emerging tech panel at UN COP25 in Madrid

This time of year is my absolute favorite because for me it’s so much about relaxed reflection and setting intentions for the year — or even the decade! — ahead. And this year, with Christmas and New Year’s Day falling mid-week, all normal work schedules seem disrupted, creating extra space throughout these final weeks and over the weekend between them to reflect and plan.

It’s also a good time to think about the future in general.

One characteristic of the way we tend to think about the future now, though, is that we do so with more uncertainty than ever.

Yet as I wrote in Tech Humanist:

Here’s what I want to offer you: To me, the idea that the future is never fixed or certain is actually encouraging. Truly, it fills me with hope. I think of the future largely as something we continuously alter, shape, or at least influence with what we do today.

That thought also fills me with a sense of duty because it means there are always many possible futures that depend on me and you and everyone else doing our parts in the whole. It means our everyday actions have more power to shape outcomes than we are often comfortable admitting.

— from Tech Humanist: How You Can Make Technology Better for Business and Better for Humans

Planning your own future

My friend and one of the organizers of House of Beautiful Business, Tim Leberecht, has written a lovely guide to help us all do just that. His process will help you have a productive and insightful “time between the years,” as Tim calls it, and a brilliantly successful 2020:

>> How to Make the Most of the Time Between the Years
(written by Tim Leberecht for Psychology Today)

Some of the questions I like to ask myself and encourage my clients and audiences to ask are:

  • What kind of future do you personally want to have?
  • What kind of future do you want for everyone on the planet?
  • What are you working on building?
  • What are you trying to achieve at scale?

By the way, all of this reflection and planning pairs well with another piece about training your brain on what to retain and what to let go of. Hint: it comes down to the discipline of spending time thinking about what you most want to be thinking about.

>> Your Brain Has A “Delete” Button–Here’s How To Use It
(by Judah Pollack and Olivia Fox Cabane for Fast Company)

What are some other questions that help you clarify your purpose? What are some other exercises you engage in to help you reflect and plan?

Goodbye to my wild 2019

For me, 2019 was a whirlwind of unprecedented life opportunities, but also a time for increasing clarity and commitment to what I see as my mission.

To recap: In January, just a few months after my book Tech Humanist came out, it was featured on the CES stage. The following week, I had a tweet go viral and a follow-up in WIRED that also went viral, and I appeared on just about every major news outlet from BBC to NPR to Marketplace to talk about facial recognition (and to pivot the conversation to the larger issue of how technology is changing our human experiences). The next week, I spoke at the United Nations about innovation and humanity.

Then in June, a few days after delivering a keynote on Tech Humanism at a conference in Mumbai, India, I guest lectured at the University of Cambridge. Yes, the same one Charles Darwin, Sir Isaac Newton, and Stephen Hawking are all associated with. That University of Cambridge. I know, I couldn’t believe it either.

In the second half of the year I keynoted Etsy’s Engineering Day in Brooklyn, a Google team offsite in Lake Tahoe, the P2P Transformation Summit in London, DevLearn in Las Vegas, UX Australia in Sydney, and the Boston CIO Summit, and I presented versions of my Tech Humanist talk at INBOUND, Content Marketing World, the Inc. CEO Summit, Mind the Product in London, House of Beautiful Business in Lisbon, and more.

Our emerging tech panel at UN COP25 in Madrid

Finally, in December, after speaking once again at the United Nations headquarters, this time on AI and youth skills, I closed out my work year at the UN COP25 climate change conference in Madrid, where I led a panel on the final day about the challenges and opportunities of leveraging emerging technologies to fight climate change.

Oh, and over the course of the year I added representation from Washington Speakers Bureau and the Leading Authorities speakers bureau. That’s exciting personally and professionally, and it should also make bookings easier for many large-company clients, which means there may be even more of those audiences in 2020 and beyond.

I’m telling you all this to say: I think this activity proves there’s hope. I think my year has been wild because a lot of people see the potential for technology to diminish the humanity in the world, and a lot of people want to see to it that that doesn’t happen. If my experience this year indicates anything, I think it’s that people are determined to make the best of our tech-driven future.

So what’s in store for all of us for 2020?

You’ll see many articles with predictions for 2020, and some will be more outlandish than others. I’m including just a few here that will likely affect you and your business more than others:

Expect to see more facial recognition in use everywhere and to hear more debate about it. Governments, law enforcement agencies, and high-traffic facilities like airports see tremendous opportunities and conveniences in deploying this technology, while civil liberties advocates see many privacy risks and challenges. Personally, I’m on Team Ban Facial Recognition Until We Have Better Protections In Place, but I’ll continue to follow all the developments and report on them (as I did in WIRED earlier this year).

Expect to have to grapple with privacy debates inside and outside your organization. The major push for companies to meet GDPR compliance in time for the May 2018 enforcement deadline was only the beginning of such regulatory efforts; the CCPA takes effect on January 1, 2020, and you can bet more regulations will be coming as time goes on. Your best bet for dealing with these is to get ahead of them: enact human-friendly data collection and usage practices, such as not collecting more data than you need or than is relevant to the context of the interaction. (I spoke about this topic extensively at House of Beautiful Business in Lisbon, as well as at many other events throughout the year.)
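As a concrete illustration of that principle, here’s a minimal sketch of context-based data minimization. The interaction contexts and field names are hypothetical; this shows the idea, not a compliance tool:

```python
# Hypothetical sketch: keep only the fields relevant to each
# interaction context, and decline to store anything else.

ALLOWED_FIELDS = {
    "newsletter_signup": {"email"},
    "shipping": {"name", "street", "city", "postal_code", "country"},
    "support_ticket": {"email", "order_id", "message"},
}

def minimize(context, submitted):
    """Return only the fields relevant to this interaction context."""
    allowed = ALLOWED_FIELDS.get(context, set())
    dropped = sorted(set(submitted) - allowed)
    if dropped:
        # Note (don't store) what we declined to collect, for auditability.
        print(f"Declining to store for {context!r}: {dropped}")
    return {k: v for k, v in submitted.items() if k in allowed}

# A signup form that happens to submit more than it needs to:
record = minimize("newsletter_signup",
                  {"email": "a@example.com", "birthday": "1990-01-01"})
print(record)  # {'email': 'a@example.com'}
```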

The push for digital transformation isn’t over yet (no matter how tired of hearing about it you may be). Most companies, organizations, and cities are very much just catching up, still sorting out how, for example, the data from their front-end services can inform back-end operations and vice versa. Meanwhile, upstart data-rich apps and services are still disrupting industry after industry, so we’ll still be talking about that for a while. (This was the focus of many of my keynotes to executive audiences, such as the Boston CIO Summit, and more.)

You may also be tired of hearing about AI, but we’ve only scratched the surface of that conversation. While some folks debate the semantics of whether simple machine learning processes really constitute “artificial intelligence,” advances in that space arrive daily, with both challenges and opportunities aplenty. (Part of my focus throughout 2019 and into 2020 has been on how machine learning and automated intelligence can help address climate change. Stay tuned for more on that.)

Speaking of which, perhaps the biggest and most urgent trend of all will be facing the scale and scope of climate change, and using whatever technologies and tools we can to mitigate its effects.

Looking into the future for me and for us all

Above all, what lies ahead for us is the increasing interconnectedness of our experiences. It’s the perfect time to adopt the mindset that in many respects what I do does affect you, just as what you do affects me, and that we’re in this together. We need to accept our futures as wholly connected: connected through data, connected to each other, connected to the planet, connected to our collective destinies.

That connectedness shows in the work I’m lined up to do. To prepare for the bookings I have for 2020 so far, for example, I will be examining more deeply the future of jobs and work, the future of privacy, the future of trust, the future of the climate, and more. All of these topics have a through-line: the future of human experiences will depend heavily on our wise use of technology, collectively and individually.

Speaking of my bookings in 2020, I have talks booked throughout the U.S. — and in Budapest for the first time! If you happen to be able to attend any of these events, be sure to come up and say hi — I’d love to see you. And of course you can always book me to speak at your company or event.

And! I’ve begun to work on my next book. More on that to come, but you can be sure it will follow along these themes.

But for now the big question is:

What will you do with the future for you and for us all?

Here’s hoping you find the quiet reflection you need in these last days of 2019 to set the kinds of intentions that will guide you to achieve what you most want to achieve, for your own good and for the good of humanity.


If this theme resonates with the conversations your company, organization, or city has been having and you’d like to hire me as a keynote speaker at an event in 2020, please do reach out. Here’s to a meaningful year for us all. 

Preparing for the Next 10+ Years: Data After the #10YearChallenge

I’ve been fortunate enough to make my living writing, speaking, and advising about the impact of technology on humanity for quite a few years now. Most commonly, though, my audiences tend to be business leaders, and what I write and speak and advise about most often is how they can adopt a digital transformation strategy that helps the company succeed while keeping the human in focus and respecting human data.

So the massive mainstream media reaction to my viral #10YearChallenge tweet and subsequent piece in WIRED was in some ways a switch in perspective: from talking to businesses about human data, to talking to humans about business use of their data. And it gave me the chance to address a far more universal audience than usual — on BBC World News, Marketplace, and NPR Weekend Edition, among many other outlets — in a cultural moment so widely discussed, it was referenced in the top article on the Reddit homepage and mentioned on The Daily Show with Trevor Noah.

My goal through it all was to spark greater awareness about how much data we share without realizing it, and how we can manage our data wisely. Just as the goal of my work is to help amplify the meaning in the experiences businesses create for the humans they do business with, my hope in connecting with a mainstream audience was to encourage people to participate in experiences meaningfully and mindfully.

People all over the world gave me an overwhelming amount of feedback: some worried, some praising, some critical. I listened as openly as I could to everything I could.

With all that listening, I know that some common questions remain. I see many of the same recurring themes in comments on Twitter and elsewhere. So I’m using this opportunity here, at home on my own company’s site without the time limits and fleeting news cycles of a major news channel, to address a few of them, and I hope they will, in their own small way, be part of the conversation we carry forward.

Let’s get this one out of the way first, since it’s been the biggest misunderstanding throughout this whole deal:

“Facebook says they didn’t have any part in the meme. Didn’t you say they designed the whole #10YearChallenge meme to gather user data to train their facial recognition algorithm?”

It’s funny: I didn’t say Facebook did it, and quite frankly, it wouldn’t matter. I was musing on the fact that the meme was creating a rich data set, and pondering aloud what that data set could theoretically be used for. In any case, it was a thought experiment, not an accusation. In my WIRED article I expanded on the thought experiment and did not accuse Facebook of having engineered it. In fact, more importantly, as I wrote there:

The broader message, removed from the specifics of any one meme or even any one social platform, is that humans are the richest data sources for most of the technology emerging in the world. We should know this, and proceed with due diligence and sophistication.

— excerpt from my article in WIRED

That said, I wouldn’t have made any definitive statements claiming that Facebook didn’t or wouldn’t do something like this. I’m sure there are plenty of well-meaning people in the company’s leadership, but between psychological experiments, Cambridge Analytica, and various leaks and breaches, there have been too many missteps, lapses, and outright errors in judgment on Facebook’s part for them to be above suspicion when it comes to violations of data security and trust.

Nonetheless, although it was a very common misconception, I genuinely don’t suspect that the meme began with Facebook — and I don’t believe that matters. What matters is that we use these discussions to deepen our thinking about personal data, privacy, and trust.
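To see why the thought experiment was plausible in the first place, here’s a purely illustrative sketch of how self-labeled meme posts could, in principle, be mined into then-and-now training pairs. The post structure and caption pattern here are invented for this example, and no particular platform is implied:

```python
import re

# Invented example posts; in the thought experiment, the meme itself
# supplies the label: two photos a known number of years apart.
posts = [
    {"caption": "2009 vs 2019 #10YearChallenge",
     "images": ["me_then.jpg", "me_now.jpg"]},
    {"caption": "beach day!", "images": ["beach.jpg"]},
]

pair_pattern = re.compile(r"(\d{4})\s*(?:vs\.?|to)\s*(\d{4})")

training_pairs = []
for post in posts:
    match = pair_pattern.search(post["caption"])
    if match and len(post["images"]) == 2:
        y1, y2 = (int(y) for y in match.groups())
        training_pairs.append({
            "before": post["images"][0],
            "after": post["images"][1],
            "years_apart": y2 - y1,  # a clean label for age progression
        })

print(training_pairs)
# [{'before': 'me_then.jpg', 'after': 'me_now.jpg', 'years_apart': 10}]
```

The point isn’t that anyone actually did this; it’s how little work a clean, self-labeled dataset requires once people volunteer it.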

“How can people who’ve taken your message to heart and now recognize the importance of this topic learn to manage their data more wisely?”

If you think of your data as money, you may have a better instinct for why you need to manage it well, and take care not to spend it loosely or foolishly. I’m not a fan of the idea of data as currency (partly because I think the human experience is more dimensional than a monetary metaphor conveys), but just this once I think it may be a helpful comparison. And as long as you know you’re safe, not getting lied to or ripped off, this “data is money” comparison may help illustrate why it can be worth spending your data on experiences that matter to you.

In terms of actionable steps, here are a few practices that help me:

Personally, one easy step I take is to use the On This Day feature on Facebook to go through my posting archive day by day. I may change the permissions on old content, or delete a post completely if it seems like it no longer serves me or anyone else to have it out there.

I also have recurring reminders on my calendar to do reviews and audits of my online presence. I do what I call a weekly glance, a quarterly review, and an annual audit. For the weekly session, you can assign yourself one platform each week, and review your security settings and old content to make sure there isn’t anything out there that you no longer want to share. The quarterly review and annual audit may entail different activities for you, but for me they also involve updating old bios and links in various places, so it becomes a strategic review as well as a security check.
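If it helps to make the rotation automatic, here’s a minimal sketch of the weekly assignment, assuming a simple list of platforms (the list itself is just an example):

```python
from datetime import date

# Example platforms to rotate through, one per week.
PLATFORMS = ["Facebook", "Twitter", "LinkedIn", "Instagram"]

def weekly_review_target(today=None):
    """Pick this week's platform for the weekly glance."""
    today = today or date.today()
    week_number = today.isocalendar()[1]
    return PLATFORMS[week_number % len(PLATFORMS)]

print(f"This week, review your settings and old posts on: "
      f"{weekly_review_target()}")
```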

“What about Apple Pay and unlocking your phone with your face, or accessing your bank account with your face? Or paying for your meal with your face? What about other biometric data like fingerprints?”

All of this is relevant, and I’ll unpack some of these issues more in future articles. The short answer, though, is that with some of these uses, such as Apple Pay, you take an educated guess that the company collecting your data will safeguard it, because the company bears some risk if they screw up. But not all data sharing carries proportional risk on both sides, so think critically before using these services.

At least for now, pay for your fried chicken with cash, not your face.

“What about 23andme and other DNA/genetic data issues?”

That’s a whole other article. (I will say I personally haven’t done a commercial DNA test, because bad outcomes always seemed possible.) The topic does relate to the rest of this, and it does matter that we 1) are cautious about using commercial services like this, and 2) hold companies accountable for adhering to the uses we agreed to and not overstepping what we understood to be our contract.

“What about data tracking in smart home systems?”

The standards and precedents are not yet well defined for the use and protections on data collected by smart home devices like smart speakers listening passively for a command. The safest thing to do is hold off on using them, and the second-safest thing is to turn them off when not in use.

While I did address some of the issues and opportunities with smart home automation and devices in Tech Humanist, this is again a topic I’ll dig into more in future articles.

“What about regulations on data? What about regulations on facial recognition, or on AI in general?”

The vast amount of personal data transmitted and collected by business, government, and institutional entities is what powers algorithmic decision-making, from ecommerce recommendations to law enforcement. And this vast data and broad algorithmic decision-making is also where machine learning and artificial intelligence take root. Artificial intelligence, broadly, has the chance to improve human life in many ways. It could help address problems associated with world poverty and hunger; it could improve global transportation logistics in ways that reduce emissions and improve the environment; it could help detect disease and extend healthy human life.

But machines are only as good as the human values encoded into them. And where values aren’t clear or aren’t in alignment with the best and safest outcomes for humanity, regulations can be helpful.

The European Union’s General Data Protection Regulation, or GDPR, which took full effect in May 2018, is for now the most comprehensive set of regulatory guidelines protecting individuals’ data. And American tech companies have to play by these rules: just this week, Google was hit with a 50 million euro fine for violating the provision that requires companies to clearly disclose the data they collect from consumers.

Meanwhile, for many Americans it’s tough to imagine what entity in the United States would be responsible for enforcing any set of regulations pertaining to data and AI.

Just as with climate change, we need efforts at both the macro and micro scale: the experts tell us that any real reduction in environmental impact requires big movement from the commercial and industrial entities that produce the lion’s share of emissions, but that doesn’t mean you shouldn’t still put your soda bottle in the recycling bin rather than the trash. We’re learning more and more how important it is to be mindful of our ecological footprint; we also need to learn to be mindful of our digital footprint.

“Should I turn off facial recognition image tagging in Facebook?”

I would advise doing so, yes.

The Facebook settings screen where you can disable automatic face recognition

“Are you saying I can’t have any fun online?”

Oh, heck no. By all means, I am very pro-fun. Even when it comes to digital interactions.

It’s easier to have fun when you know you’re reasonably safe, though, right? The biggest takeaway from this discussion about the possible side effects of the #10YearChallenge should be to remember that when any meme or game is encouraging you — and large groups of other people — to share specific information about yourself, it’s worth pausing before you participate. It’s relevant to wonder who might be collecting the data, but it’s far more important to think about what the collected data can do.

But share the meaningful parts of your life online with friends and family, and enjoy being able to follow their updates about the meaningful parts of their lives. That has certainly been the most wonderful benefit of social media.

Not only am I pro-fun, I am also very pro-technology. I love tech, and I genuinely think emerging technologies like AI, automation, and the Internet of Things — all largely driven by human data — have the chance to make our lives better. (As I wrote in Tech Humanist, I believe we have the chance to create the best futures for the most people.) But to achieve that, we need to be very mindful about how they can make our lives worse, and put measures in place — in our government, in our businesses, and in our own behavior — to help ensure the best outcomes.

The Rental Car Traffic Violation Scam Hypothesis and Personal Privacy

Vintage photo of a driver being ticketed by a police officer (source: Wikimedia Commons)

I got a ticket in the mail yesterday for running a red light. Well, it wasn’t a ticket, exactly. It was a “notice of an administrative fee” for a red light violation that allegedly happened while I was driving a rented car in my mom’s town the day before her birthday. The ticket itself apparently hasn’t even arrived in the mail yet, but the rental car company has a whole operation to process the administrative fees from traffic violations incurred by their renters, and they’re not wasting any time collecting theirs.

I bring this up here because it’s actually happened to me quite a lot. Nearly every time I rent a car, I end up getting a traffic ticket in the mail a few months later (and as a consultant and speaker who often travels to clients and events, I rent a lot of cars). You may be tempted to joke that I’m a terrible driver, but these traffic violations by mail never showed up when I was driving my own car, and the discrepancy has become enough of a pattern that my mind, not usually given to conspiracy theories, started to formulate a hypothesis about how this could be part of a program to make money off of car renters.

Anatomy of a Scam Hypothesis

How could this be happening? Well, the rental car agencies could be selling their driver rental information to the companies that operate the traffic cameras. The traffic cameras could be scanning license plates and matching them against a list from the rental agencies. They could be issuing tickets on violations or close-enough-to-be-violations only when there’s a match.

I also notice that I never get more than one traffic violation per rental. The system could be set to throttle the tickets to one per rental period. Casual renters wouldn’t think much about it. “Oh well,” they’d think, “I got a ticket. I’ll just pay.”

But frequent renters, like me, start to notice a pattern. Why is it that I owned a car all my adult life until late last year, drove all over the country, frequently took my car on road trips, and never once was issued a traffic-camera ticket for any of those trips, yet when I drive in some of these same towns in a rental car, I get tickets mailed to me?
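To be clear about what I’m imagining, and strictly as a thought experiment, here’s a sketch of the matching-and-throttling logic the hypothesis describes. Every name and number is invented; I have no evidence any such system exists:

```python
# Purely hypothetical: ticket borderline violations only for plates on
# a rental list, and throttle to one ticket per rental period.

rental_plates = {"ABC1234": "R-1001", "XYZ9876": "R-1002"}  # plate -> rental
already_ticketed = set()  # rentals that have received their one ticket

def maybe_issue_ticket(scanned_plate):
    """Return True if this camera scan would generate a mailed ticket."""
    rental_id = rental_plates.get(scanned_plate)
    if rental_id is None:
        return False  # not a rental: no match, no ticket
    if rental_id in already_ticketed:
        return False  # throttled: casual renters see only one ticket
    already_ticketed.add(rental_id)
    return True

print(maybe_issue_ticket("ABC1234"))  # True: first match for this rental
print(maybe_issue_ticket("ABC1234"))  # False: one per rental period
print(maybe_issue_ticket("LMN5555"))  # False: owner-driven car, ignored
```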

“Maybe It’s Just You”

Perhaps you’re skeptical, as we discuss this over drinks, and you offer that maybe instead of this elaborate scam, there’s instead a behavioral science element to all this: suppose we all drive a little more recklessly when we’re in a rented car. That seems a reasonable counter-hypothesis, I’d concede with a tip of my beer mug, but without supporting data or a compelling argument to convince me that there might be truth to this, I maintain that, whether I own the car or borrow it, I drive as I drive. And pass the peanuts.

Anyway, What About Due Process?

Most of all, whether the intent to conspire is there or not, surely it brings up questions of due process. If a police officer had simply pulled me over in each of these places, there’d be far less question of legitimacy. You say I ran a red light? Pull me over right then. The action on which the claim is based will be fresh in my head and I can either challenge the officer (calmly and politely, of course) about the veracity of the claim or accept the ticket. (Or cry, and maybe get off with a warning. Oh, relax; I’m kidding.) But you say I ran a red light three months after the fact? I barely recall being at the intersection in question, let alone what the conditions of the intersection were, or the timing of the light, or the layout of the traffic around me. Even if you were to furnish me with photographic evidence of my rented car with me in it clearly violating a red light, I still don’t have the consideration of context, and I get no due process at all.

The Bigger Issue: Privacy, Personal Rights, and Public Data

I don’t necessarily believe my conspiracy hypothesis about the rental car traffic violation scam; I just think it’s possible, and at the rate I get these tickets, I admit I’m a tad suspicious. But I’m less concerned with that and more conscious of the bigger issue: how vulnerable people are and increasingly will be to schemes that take advantage of ever-present tracking data, surveillance, and systems with default authority, such as rental car companies and traffic enforcement bureaus. Even if these entities aren’t trying to be exploitative, the more access they have to integrated data about our movements and behaviors, the greater the potential will be for them to overstep the authority we think we’ve granted them.

So why do I share this half-baked conspiracy idea anyway? Because the premise is not mere science fiction; it’s certainly not impossible, and it’s important that we remind ourselves regularly of the powerful data about people that can be used by companies and government. That power is growing, and to a great degree, it’s already out of our hands as citizens, consumers, patients, and the public. So where and when we can, it’s important that we think critically about what the implications are, and it’s important for those of us who work in and around data systems that track human actions to be mindful of what that means.

Meanwhile, to finish on a lighter note, here’s how comedian Joe Lycett handled a mailed-in notice of a parking ticket. Enjoy.

Column: “How to get customer data without being creepy” at The Tennessean

An excerpt:

It’s vital for companies to use the resources at their disposal, including data and data-driven experiments, to make themselves better, faster, and smarter at making decisions and serving their customers. That includes testing to make products better, make customer experiences smoother, make messaging more relevant, make operations more efficient, and so on. It covers a lot.

But what businesses should not do is take extraordinary license with the permissions customers give them through their interactions with them.

Read the rest at the link:
http://www.tennessean.com/story/money/tech/2014/07/03/customer-feedback-data-experiments-online/12189743/