Have you ever had to delete parts or all of your online presence because you feared for your life? This story has been on my mind since I read this:
“USAID, the United States’s humanitarian arm, purportedly sent an email over the weekend to partners asking them to go through their social media accounts and websites with a fine-toothed comb to ‘remove photos and information that could make individuals or groups vulnerable’. USAID also advised partners still operating in Afghanistan to delete and wipe any personal identifying information of those they’d worked with on the ground, in case it fell into the wrong hands.”
Are there lessons we can take from this story about data privacy architecture and such? Probably, and out of fairness to these and potentially the next humans who will go through this we should absolutely work through that discussion and create better solutions. For reasons far less grave but still important, we have long needed to re-think the opportunities we have to control where our data goes, who has access to it, and how we can pull it back or lock it down when we need to.
But I don’t want the vastness of that conversation to overshadow the very real experiences people are living through right now. So in the meantime this is just a placeholder of compassion for human beings dealing with an imminent existential threat that is complicated even further by the latticework of digital experiences and data most of us take for granted.
Here’s wishing safety and peace to those who desperately need it.
Scratch the surface of any debate about the future of work and you’ll find there an argument for Universal Basic Income.
And certainly from a purely survivalist standpoint that’s an important consideration.
We need to know what it is going to look like for people not to have the financial resources from working. We also need to understand how this model might concentrate power and opportunity into fewer and fewer hands.
But we also need to think beyond this consideration of the future of work. Humans rely on work for more than income; we also rely on work for meaning.
Humans have historically associated work with what we do; we have historically associated work with who we are.
Our work is in so many cases our identities, as the long tradition of names, last names and family names, derived from professions demonstrates. Carpenter, Baker, Butcher, and so many others — and this happens across languages, not just English. Throughout the world and throughout human history, we have taken so much of who we are and what we are about from what we do for a living, and what our ancestors have done for a living.
As I have previously written:
We derive a tremendous amount of meaning from our work—the sense of accomplishment, of problems solved, of having provided for ourselves and for our families, of having made a contribution, of having value and self-worth.
We have to recognize the possibility of a post-human-work world, or at least a world where human work has fundamentally changed—so that as we look at automation, we see the impact on both the experiences automation creates and the experiences automation displaces. Because in the future scenario where all the human work has vanished, where do humans get the same sense of meaning? That meaning we have historically derived from work will have to come from something other than work. We need a better answer.
My radical idea is that there needs to be some kind of replacement for, or reinforcement of, the meaning we derive from work: a kind of “Universal Basic Meaning” supplied through the experiences around us.
Not to take the place of work; not to replace jobs. But to enhance jobs and everything else we do, every experience we have. What matters in all of this is that humans have the opportunity for meaningful experiences in the future, whether they derive from work or not.
Because while I do think about the financial implications of job displacement and replacement from automation, I’m nearly as concerned about people not having the resources of meaning and identity.
I wonder about what it’s going to do to us, as human jobs shift away from work we can develop identity around. What I think is going to be needed, even more than ever, are meaningful experiences in the world around us. Meaningful experiences at scale.
One concern I have is that experiences are often selected for automation by how mundane and repetitive — and hence, how meaningless — they are, which means we may become increasingly surrounded by meaningless experiences. It makes rational sense to automate the tedious tasks in our workflow and throughout our lives, but it’s easy to imagine this at scale, where more and more of our everyday experiences and interactions are automated, and they’re all meaningless.
The interconnectedness of data, algorithms, and emerging technologies is more and more a part of our everyday environments, and it can create experiences that have outsized impact on who we are and how we live our lives. And it’s important that we appreciate the ways these systems change us.
This is why I always say we should “automate the meaningful too.” It is important that we now, in the early stages of automating human experiences, encode them with all the enlightenment, all the equity, all the evolved thinking we can.
In the weeks and months to come, I’ll write more about Universal Basic Meaning, how this idea can inform our understanding of ethical and practical data-based experiences, and how we can build the most meaningful experiences at scale.
This time of year is my absolute favorite because for me it’s so much about relaxed reflection and setting intentions for the year — or even the decade! — ahead. And this year, with Christmas and New Year’s Day falling mid-week, all normal work schedules seem disrupted, creating extra space throughout these final weeks and over the weekend between them to reflect and plan.
It’s also a good time to think about the future in general.
One characteristic of the way we tend to think about the future now, though, is that we do so with more uncertainty than ever.
Here’s what I want to offer you: To me, the idea that the future is never fixed or certain is actually encouraging. Truly, it fills me with hope. I think of the future largely as something we continuously alter, shape, or at least influence with what we do today.
That thought also fills me with a sense of duty because it means there are always many possible futures that depend on me and you and everyone else doing our parts in the whole. It means our everyday actions have more power to shape outcomes than we are often comfortable admitting.
My friend and one of the organizers of House of Beautiful Business, Tim Leberecht, has written a lovely guide to help us all do just that. His process will help you have a productive and insightful “time between the years,” as Tim calls it, and a brilliantly successful 2020:
Some of the questions I like to ask myself and encourage my clients and audiences to ask are:
What kind of future do you personally want to have?
What kind of future do you want for everyone on the planet?
What are you working on building?
What are you trying to achieve at scale?
By the way, all of this reflection and planning pairs well with another piece about getting better at training your brain what to retain and what to let go of. Hint: it comes down to the discipline of spending time thinking about what you most want to be thinking about.
Then in June, a few days after delivering a keynote on Tech Humanism at a conference in Mumbai, India, I guest lectured at the University of Cambridge. Yes, the same one Charles Darwin, Sir Isaac Newton, and Stephen Hawking are all associated with. That University of Cambridge. I know, I couldn’t believe it either.
Oh, and over the course of the year I added representation from Washington Speakers Bureau and Leading Authorities speakers bureau. That’s exciting personally and professionally but in addition it should help make bookings easier for many large company clients, which means there may be even more of those audiences in 2020 and beyond.
I’m telling you this to say: I think all of this activity proves there’s hope. I think my year has been wild because a lot of people see the potential for technology to diminish the humanity in the world, and a lot of people want to see to it that that doesn’t happen. If my experience this year indicates anything, I think it’s that people are determined to make the best of our tech-driven future.
So what’s in store for all of us for 2020?
You’ll see many articles with predictions for 2020, and some will be more outlandish than others. I’m including just a few here that will likely affect you and your business more than others:
Expect to see more facial recognition in use everywhere and to hear more debate about it. Governments, law enforcement agencies, and high-traffic facilities like airports see tremendous opportunities and conveniences in deploying this technology, while civil liberties advocates see many privacy risks and challenges. Personally, I’m on Team Ban Facial Recognition Until We Have Better Protections In Place, but I’ll continue to follow all the developments and report on them (as I did in WIRED earlier this year).
Expect to have to grapple with privacy debates inside and outside your organization. The major push for companies to achieve GDPR compliance in time for the May 2018 enforcement deadline was only the beginning of such regulatory efforts; the CCPA is due to be fully enforced as of January 1, 2020, and you can bet more regulations will be coming as time goes on. Your best bet for dealing with these is to get ahead of them: enact human-friendly data collection and usage practices, such as not collecting more data than you need or than is relevant to the context of the interaction. (I spoke about this topic extensively at House of Beautiful Business in Lisbon, as well as at many other events throughout the year.)
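To make the data-minimization idea concrete, here is a minimal sketch in Python. All field names and the form payload are hypothetical, invented purely for illustration; the point is the pattern of keeping only an explicit allowlist of fields relevant to the interaction, rather than storing whatever a form happens to submit:

```python
# A minimal sketch of data minimization: store only an explicit
# allowlist of fields that the interaction actually requires.
# The field names and payload below are hypothetical examples.

# Fields genuinely needed to create an account in this example context.
ALLOWED_SIGNUP_FIELDS = {"email", "display_name"}

def minimize(payload: dict, allowed: set) -> dict:
    """Return only the fields we actually need; drop the rest."""
    return {k: v for k, v in payload.items() if k in allowed}

signup_form = {
    "email": "pat@example.com",
    "display_name": "Pat",
    "birthdate": "1985-03-14",   # not needed for signup, so dropped
    "location": "Austin, TX",    # not needed for signup, so dropped
}

stored = minimize(signup_form, ALLOWED_SIGNUP_FIELDS)
# stored == {"email": "pat@example.com", "display_name": "Pat"}
```

The value isn’t in the code itself but in the default it encodes: a field is dropped unless there is a stated reason to keep it.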
The push for digital transformation isn’t over yet (no matter how tired of hearing about it you may be). Most companies, organizations, and cities are very much just catching up, still sorting out how, for example, the data from their front-end services can inform back-end operations and vice versa. Meanwhile, upstart data-rich apps and services are still disrupting industry after industry, so we’ll still be talking about that for a while. (This was the focus of many of my keynotes to executive audiences, such as the Boston CIO Summit, and more.)
You may also be tired of hearing about AI, but we’ve only scratched the surface of that conversation. While some folks debate the semantics of whether simple machine learning processes really constitute “artificial intelligence,” the advancements within that space progress daily, with both challenges and opportunities aplenty. (Part of my focus throughout 2019 and into 2020 has been on how machine learning and automated intelligence can help with addressing climate change. Stay tuned for more on that.)
Speaking of which, perhaps the biggest and most urgent trend of all will be facing the scale and scope of climate change, and using whatever technologies and tools we can to mitigate its effects.
Looking into the future for me and for us all
Above all, what is ahead in our future is increasing interconnectedness of our experiences. It’s the perfect time to adopt the mindset that in many respects what I do does affect you just as what you do affects me, and that we’re in this together. We need to accept our futures as wholly connected: connected through data, connected to each other, connected to the planet, connected to our collective destinies.
That connectedness shows in the work I’m lined up to do. To prepare for the bookings I have for 2020 so far, for example, I will be examining more deeply the future of jobs and work, the future of privacy, the future of trust, the future of the climate, and more. All of these topics have a through-line: the future of human experiences will depend heavily on our wise use of technology, collectively and individually.
Speaking of my bookings in 2020, I have talks booked throughout the U.S. — and in Budapest for the first time! If you happen to be able to attend any of these events, be sure to come up and say hi — I’d love to see you. And of course you can always book me to speak at your company or event.
And! I’ve begun to work on my next book. More on that to come, but you can be sure it will follow along these themes.
But for now the big question is:
What will you do with the future for you and for us all?
Here’s hoping you find the quiet reflection you need in these last days of 2019 to set the kinds of intentions that will guide you to achieve what you most want to achieve, for your own good and for the good of humanity.
If this theme resonates with the conversations your company, organization, or city has been having and you’d like to hire me as a keynote speaker at an event in 2020, please do reach out. Here’s to a meaningful year for us all.
So the massive mainstream media reaction to my viral #10YearChallenge tweet and subsequent piece in WIRED was in some ways a switch in perspective: from talking to businesses about human data, to talking to humans about business use of their data. And it gave me the chance to address a far more universal audience than usual — on BBC World News, Marketplace, and NPR Weekend Edition, among many other outlets — in a cultural moment so widely discussed, it was referenced in the top article on the Reddit homepage and mentioned on The Daily Show with Trevor Noah.

My goal through it all was to spark greater awareness about how much data we share without realizing it, and how we can manage our data wisely. Just as the goal of my work is to help amplify the meaning in the experiences businesses create for the humans they do business with, my hope in connecting with a mainstream audience was to encourage people to participate in experiences meaningfully and mindfully.

People all over the world gave me an overwhelming amount of feedback: some worried, some praising, some critical. I listened as openly as I could to everything I could.
With all that listening, I know that some common questions remain. I see many of the same recurring themes in comments on Twitter and elsewhere. So I’m using this opportunity here, at home on my own company’s site without the time limits and fleeting news cycles of a major news channel, to address a few of them, and I hope they will, in their own small way, be part of the conversation we carry forward.
Let’s get this one out of the way first, since it’s been the biggest misunderstanding throughout this whole deal:
“Facebook says they didn’t have any part in the meme. Didn’t you say they designed the whole #10YearChallenge meme to gather user data to train their facial recognition algorithm?”
It’s funny: I didn’t say Facebook did it, and quite frankly, it wouldn’t matter. I was musing on the fact that the meme was creating a rich data set, and pondering aloud what that data set could theoretically be used for. In any case, it was a thought experiment, not an accusation. In my WIRED article I expanded on the thought experiment and did not accuse Facebook of having engineered it. In fact, more importantly, as I wrote there:
The broader message, removed from the specifics of any one meme or even any one social platform, is that humans are the richest data sources for most of the technology emerging in the world. We should know this, and proceed with due diligence and sophistication.
That said, though, I wouldn’t have made any definitive statements from the beginning claiming that Facebook didn’t or wouldn’t have done something like this. I’m sure there are plenty of well-meaning people in the company’s leadership, but between psychological experiments, Cambridge Analytica, and various leaks and breaches, there have been too many missteps, lapses, and outright errors in judgment on Facebook’s part for them to be above suspicion when it comes to violations of data security and trust.
Nonetheless, although it was a very common misconception, I genuinely don’t suspect that the meme began with Facebook — and I don’t believe that matters. What matters is that we use these discussions to deepen our thinking about personal data, privacy, and trust.
“How can people who’ve taken your message to heart and now recognize the importance of this topic learn to manage their data more wisely?”
If you think of your data as money, you may have a better instinct for why you need to manage it well, and take care not to spend it loosely or foolishly. I’m not a fan of the idea of data as currency (partly because I think the human experience is more dimensional than a monetary metaphor conveys), but just this once I think it may be a helpful comparison. And as long as you know you’re safe, not getting lied to or ripped off, this “data is money” comparison may help illustrate why it can be worth it to spend it on experiences that matter to you.
In terms of actionable steps, here are a few helpful resources:
Personally, one easy step I take is to use the On This Day feature on Facebook to go through my posting archive day by day. I may change the permissions on old content, or delete a post completely if it seems like it no longer serves me or anyone else to have it out there.
I also have recurring reminders on my calendar to do reviews and audits of my online presence. I do what I call a weekly glance, a quarterly review, and an annual audit. For the weekly session, you can assign yourself one platform each week, and review your security settings and old content to make sure there isn’t anything out there that you no longer want to share. The quarterly review and annual audit may entail different activities for you, but for me they also involve updating old bios and links in various places, so it becomes a strategic review as well as a security check.
“What about Apple Pay and unlocking your phone with your face, or accessing your bank account with your face? Or paying for your meal with your face? What about other biometric data like fingerprints?”
All of this is relevant, and I’ll unpack some of these issues more in future articles. The short answer, though, is that with some of these uses, such as Apple Pay, you take an educated guess that the company collecting your data will safeguard it, because the company bears some risk if they screw up. But not all data sharing carries proportional risk on both sides, so think critically before using these services.
At least for now, pay for your fried chicken with cash, not your face.
“What about 23andme and other DNA/genetic data issues?”
That’s a whole other article. (I will say I personally haven’t done a commercial DNA test because bad outcomes always seemed possible.) The topic does relate to the rest of this, and it does matter that we 1) are cautious about using commercial services like this, and 2) hold companies accountable for adhering to the uses we agreed to, and not overstepping what we understood to be our contract.
“What about data tracking in smart home systems?”
The standards and precedents are not yet well defined for the use and protections on data collected by smart home devices like smart speakers listening passively for a command. The safest thing to do is hold off on using them, and the second-safest thing is to turn them off when not in use.
While I did address some of the issues and opportunities with smart home automation and devices in Tech Humanist, this is again a topic I’ll dig into more in future articles.
“What about regulations on data? What about regulations on facial recognition, or on AI in general?”
The vast amount of personal data transmitted and collected by business, government, and institutional entities is what powers algorithmic decision making, from ecommerce recommendations to law enforcement. And this vast data and broad algorithmic decision-making are also where machine learning and artificial intelligence take root. Artificial intelligence, broadly, has the chance to improve human life in many ways. It could help address problems associated with world poverty and hunger; it could improve global transportation logistics in ways that reduce emissions and improve the environment; it could help detect disease and extend healthy human life.
But machines are only as good as the human values encoded into them. And where values aren’t clear or aren’t in alignment with the best and safest outcomes for humanity, regulations can be helpful.
The European Union’s General Data Protection Regulation, or GDPR, which went fully into effect in May 2018, is for now the most comprehensive set of regulatory guidelines protecting individuals’ data. And American tech companies have to play by these rules: just this week, Google was hit with a 50 million euro fine for violating the provision that requires companies to provide clear disclosure about the data they collect from consumers.
In the meantime, just as with climate change, we need efforts at both the macro and micro scale: the experts tell us that for any kind of real reduction in impact on the environment we need big movement from the commercial and industrial entities that produce the lion’s share of emissions, but that doesn’t mean you shouldn’t bother putting your soda bottle in the recycling bin instead of the trash. We’re learning more and more how important it is to be mindful of our ecological footprint; we also need to learn how to be mindful of our digital footprint.
“Should I turn off facial recognition image tagging in Facebook?”
I would advise doing so, yes.
“Are you saying I can’t have any fun online?”
Oh, heck no. By all means, I am very pro-fun. Even when it comes to digital interactions.
It’s easier to have fun when you know you’re reasonably safe, though, right? The biggest takeaway from this discussion about the possible side effects of the #10YearChallenge should be to remember that when any meme or game is encouraging you — and large groups of other people — to share specific information about yourself, it’s worth pausing before you participate. It’s relevant to wonder who might be collecting the data, but it’s far more important to think about what the collected data can do.
Not only am I pro-fun, I am also very pro-technology. I love tech, and I genuinely think emerging technologies like AI, automation, and the Internet of Things — all largely driven by human data — have the chance to make our lives better. (As I wrote in Tech Humanist, I believe we have the chance to create the best futures for the most people.) But to achieve that, we need to be very mindful about how they can make our lives worse, and put measures in place — in our government, in our businesses, and in our own behavior — to help ensure the best outcomes.
I’m back visiting my old home town of Nashville for a few days, and had a super-fun breakfast conversation with Mary Laura Philpott this morning, during which she mentioned that she sometimes gets automatically tagged in photos as Nashville mayor Megan Barry. The two bear a passing resemblance, but probably not enough that you’d think to comment on it. She also mentioned that she has occasionally heard that she looks like Reese Witherspoon (although she never hears that in Nashville — Witherspoon’s home town).
I also used to hear all the time that I looked like Sandra Bullock, and the joke used to go that I could be her security double — the person who goes out the front door to throw off the fans and paparazzi so the actual star can sneak out the back door. (I’ve always wished I could be mistaken for Connie Britton just for the hair, but I don’t have the patience for hot rollers.)
Anyway, it occurred to me as Mary Laura and I were chatting that there’s a new kind of double: the facial recognition algorithm double. Facial recognition algorithms have become a routine part of our social media and personal photo library management, but they’re going to show up more and more in varied aspects of our lives, from surveillance to shopping. And the idea that you can “pass for” someone else — and that someone else could pass for you — is a tad troubling, isn’t it?
After all, there’s not much we can do about it, unless we have reconstructive surgery and hot-roll our hair and even then we might start getting tagged as Jocelyn Wildenstein or something, so we should probably just accept whatever Doppelgänger fate hands us and get on with life. The machines don’t know whether we’re the mayor of Nashville or the star of “Nashville” or just visiting Nashville.
Rather, the most interesting things about Pokemon Go have to do with connected experiences and the sweeping changes they are bringing: new marketing models, opportunities with augmented reality, location-based marketing, and all the assorted issues of data privacy and security. In other words, the most interesting things about the Pokemon Go phenomenon have nothing to do with the game itself and everything to do with how different things are starting to be, and will continue to be.
But since you can engage with the game through a camera view of what’s ahead of you, it’s actually possible to walk and play and still be at least somewhat connected to your surroundings.
Connected Experiences… and Your Business Strategy?
This is only the beginning of what’s to come.
On social media, people have been laughing at the businesses that are developing Pokemon Go strategies (and, well, it does sound absurd), but honestly, if they’re starting now, even they are a little late to the biggest opportunity. The gold rush was the past two weeks, when everything was novel and players were entertained by the outreach. Even if the game’s popularity continues to grow, players will likely begin to be put off by overt attempts from late entrants to capitalize on the game. And if your business is still laughing, you’re missing out on time to think about how augmented reality and connected experiences stand to change the status quo.
Of course then there’s this:
PetSmart Exec: I saw a Digiday post that every company needs a Pokemon Go strategy. WHAT'S OURS? Marketing Manager: pic.twitter.com/8Zddyaqges
So I’m not saying to rush out and do something specific to Pokemon Go that has no alignment with your customers’ motivations or your brand. (Although if you have an idea for an experience that aligns and integrates your customers’ experience with the game in an organic, authentic, and/or memorable way, by all means do it, measure it, and publish a case study about it.) This is a call for strategic action about a macro trend, not mindless reaction to a micro trend. Trying to capitalize on the trend without strategy will probably come across to people like an attempt to manipulate the moment.
You need strategic planning (and do please note: I offer strategy workshops) that sets you up for success as the physical and digital worlds increasingly converge. There’s enough transformation taking place that there will be a relevant, meaningful way to make these opportunities align with your brand and your customers. Your job is to try to catch it.
The thing about the Internet of Things is it isn’t about the things; it’s about the people.
The “things,” for the most part, are designed to create more connected experiences for humans. And the layer that connects digital experiences to the physical world through our gestures and actions is our data.
The transactional data that connects the online and offline world happens largely through us, through our transactions and purchases, through our speech, through our attention, through everything we do.
In the course of analyzing, optimizing, and targeting, we can’t let ourselves forget about the humanity in the data.
I got a ticket in the mail yesterday for running a red light. Well, it wasn’t a ticket, exactly. It was a “notice of an administrative fee” for a red light violation that allegedly happened while I was driving a rented car in my mom’s town the day before her birthday. The ticket itself apparently hasn’t even arrived in the mail yet, but the rental car company has a whole operation to process the administrative fees from traffic violations incurred by their renters, and they’re not wasting any time collecting theirs.
I bring this up here because it’s actually happened to me quite a lot. Nearly every time I rent a car, I end up getting a traffic ticket in the mail a few months later (and as a consultant and speaker who often travels to clients and events, I rent a lot of cars). You may be tempted to joke that I’m a terrible driver, but these traffic violations by mail never showed up when I was driving my own car. The discrepancy has become enough of a pattern that my mind, not usually given to conspiracy theories, started to formulate a hypothesis about how this could be part of a program to make money off of car renters.
Anatomy of a Scam Hypothesis
How could this be happening? Well, the rental car agencies could be selling their driver rental information to the companies that operate the traffic cameras. The traffic cameras could be scanning license plates and matching them against a list from the rental agencies. They could be issuing tickets on violations or close-enough-to-be-violations only when there’s a match.
I also notice that I never get more than one traffic violation per rental. The system could be set to throttle the tickets to one per rental period. Casual renters wouldn’t think much about it. “Oh well,” they’d think, “I got a ticket. I’ll just pay.”
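Purely as a thought experiment, the hypothesized scheme above (match a camera-scanned plate against a rental list, then throttle to one ticket per rental period) could be sketched like this. Every name and data value here is invented for illustration; this describes no real system:

```python
# Hypothetical sketch of the scheme described above. A thought
# experiment only: the data and logic are invented, not a real system.

# Plate -> rental-period id, as if supplied by a rental agency (hypothetical).
rental_plates = {"ABC123": "rental-2019-07"}

# Rental periods that have already received their one ticket.
tickets_issued = set()

def maybe_issue_ticket(scanned_plate: str) -> bool:
    """Ticket only if the plate matches a rental and that rental period
    hasn't been ticketed yet (one ticket per rental, per the hypothesis)."""
    rental_id = rental_plates.get(scanned_plate)
    if rental_id is None or rental_id in tickets_issued:
        return False  # not a rental, or already throttled
    tickets_issued.add(rental_id)
    return True

first = maybe_issue_ticket("ABC123")   # first violation this rental: ticketed
second = maybe_issue_ticket("ABC123")  # second violation: throttled, no ticket
```

Under this (again, purely hypothesized) logic, a casual renter sees exactly one ticket and pays without a second thought; only a frequent renter would ever notice the one-per-rental pattern.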
But frequent renters, like me, start to notice a pattern. Why is it that I owned a car all my adult life until late last year, drove all over the country, frequently taking my car on road trips, and never once was issued a traffic-camera ticket for any of those trips, yet when I drive in some of these same towns in a rental car, I get tickets mailed to me?
“Maybe It’s Just You”
Perhaps you’re skeptical, as we discuss this over drinks, and you offer that maybe instead of this elaborate scam, there’s instead a behavioral science element to all this: suppose we all drive a little more recklessly when we’re in a rented car. That seems a reasonable counter-hypothesis, I’d concede with a tip of my beer mug, but without supporting data or a compelling argument to convince me that there might be truth to this, I maintain that, whether I own the car or borrow it, I drive as I drive. And pass the peanuts.
Anyway, What About Due Process?
Most of all, whether the intent to conspire is there or not, surely it brings up questions of due process. If a police officer had simply pulled me over in each of these places, there’d be far less question of legitimacy. You say I ran a red light? Pull me over right then. The action on which the claim is based will be fresh in my head and I can either challenge the officer (calmly and politely, of course) about the veracity of the claim or accept the ticket. (Or cry, and maybe get off with a warning. Oh, relax; I’m kidding.) But you say I ran a red light three months after the fact? I barely recall being at the intersection in question, let alone what the conditions of the intersection were, or the timing of the light, or the layout of the traffic around me. Even if you were to furnish me with photographic evidence of my rented car with me in it clearly violating a red light, I still don’t have the consideration of context, and I get no due process at all.
The Bigger Issue: Privacy, Personal Rights, and Public Data
I don’t necessarily believe my conspiracy hypothesis about the rental car traffic violation scam; I just think it’s possible, and at the rate I get these tickets, I admit I’m a tad suspicious. But I’m less concerned with that and more conscious of the bigger issue: how vulnerable people are, and increasingly will be, to schemes that take advantage of ever-present tracking data, surveillance, and systems with default authority, such as rental car companies and traffic enforcement bureaus. Even if these entities aren’t trying to be exploitative, the more access they have to integrated data about our movements and behaviors, the greater the potential will be for them to overstep the authority we think we’ve granted them.
So why do I share this half-baked conspiracy idea anyway? Because the premise is not mere science fiction; it’s certainly not impossible, and it’s important that we remind ourselves regularly of the powerful data about people that can be used by companies and government. That power is growing, and to a great degree, it’s already out of our hands as citizens, consumers, patients, and the public. So where and when we can, it’s important that we think critically about what the implications are, and it’s important for those of us who work in and around data systems that track human actions to be mindful of what that means.
Meanwhile, to finish on a lighter note, here’s how comedian Joe Lycett handled a mailed-in notice of a parking ticket. Enjoy.
By now it seems everyone and their dog has shared their predictions and observations about the trends of 2015, so it may seem I’m a little late to the party. But I was holding off because I knew I was scheduled to present on the topic 13 days into the year. That happened yesterday — I was the keynote speaker at the Franchise Business Network annual kickoff meeting — so I can break my silence, such as it is. Anyway, I spoke about the major trends affecting business that I see taking shape, particularly around data and technology, heading into 2015. And today, before we get any further into the year, I thought I’d share some of what I presented last night with readers here.
Bear in mind that this audience was primarily franchisers and franchisees, along with service providers to those businesses, and with a healthy sprinkling of high-potential startup founders in the mix. So I introduced the subject by talking about relevance and meaningfulness, and that I had tried to narrow the scope of the talk to those emerging topics that seemed like they could have the most meaningful impact on their businesses this year. I talked about six major trends:
Right-sizing big data
Ongoing channel shakeup
Rental crowding out the ownership model
Deeper and blurrier integrations of the ideas of “online” and “offline”
Disruption of payments: mobile payments, cryptocurrency
Evolving ideas of “work,” “team,” and “leader”
I went into more detail for each trend, of course, but more importantly, I tried to summarize each trend with a “minimum viable opportunity,” repurposing the idea from the “minimum viable product” in the Lean Startup methodology. In case you’re not familiar with the notion of an “MVP,” as it’s called, a minimum viable product is a scaled-down first-stage version of your offering that you can produce with minimal resources to validate the overall direction and gain initial customers. My repurposing of the idea is to suggest that for each of these trends, there could be a scaled-down first-stage approach smaller businesses can take to implement them so that they can determine the trend’s potential impact on their business.
For “right-sizing big data,” for example, I said that although big data is not a new concept, it’s something there’s a growing awareness of, and its ongoing and increasing impact on business can’t be overstated. But I suggested that small businesses and startups can sometimes get bigger impact from being strategic with smaller data. So the minimum viable opportunity, perhaps, is to work on building processes that use the customer and marketing data already present in a business effectively before trying to tackle large-scale data mining or analysis projects. As small and growing businesses become more sophisticated about making data-informed decisions, they can potentially tackle more complex data sets to inform those decisions with a greater likelihood of effectiveness.
For “ongoing channel shakeup,” after covering some of the changes in the digital marketing landscape brought on by new advertising opportunities, algorithm changes, and so on, I talked about the opportunity, as I often do, for marketing to start from empathy and an understanding of customers’ motivations in a segmented and meaningful way so that they can craft relevant messages and experiences and test them in relevant channels. It’s increasingly an experience-aware world.
I won’t rehash the entire talk here (although if you’d like to have me come present to your company or organization, please reach out) — I’ll just offer that when you go back and skim the lists and roundups of 2015 trends, you might want to borrow this idea of the “minimum viable opportunity” for your business. What small change could you experiment with that might help shine light on where your next investments need to be? Bring me in, or another strategic facilitator, if you need to; we can help guide the brainstorming and identification of opportunities. However you approach it, I hope you do it with an intention to learn. Good luck, and may 2015 be full of maximum opportunities for you. Cheers!
Ever wonder what you have in common with yourself? I didn’t really, either, but an app I was using for social analytics showed me my own account and presented me with a view of what I had in common with @kateo.
According to this metadata, here are some of the things I share an interest with myself about:
Big Data, Data Visualization And Infographics, Dataviz and Infographics. Well, OK, those were gimmes.
Parenting. I’m publicly on record (in TIME magazine, among other outlets) as being child-free by choice. So that’s actually an understandable semantic link; it’s just a misleading one.
Both Country Music and Classical Music. I live in Nashville, a.k.a. Music City, and yes, I have ties to country music and the industry, but this one serves more as proof that computer-led analysis can be imbued with the jumpy biases of its programmers, since “Nashville” = “country music” to many people who don’t know anything else about the city. And Classical Music, while I respect it, has less significance in my digital life than, say, bacon does — and that’s saying something, as you can probably infer from the Vegan, Vegetarian, and Raw Food tags above.
Pay Per Click Marketing, Ecommerce, Testing & Optimization Software, Advertising & Marketing, Email Marketing. Sort of, I guess. They’re all, like, fractional pieces. But I get that “digital behavioral strategy” is a pretty esoteric conceptual space. And I’ve certainly expressed interest in topics relating to each of these areas online. So those are forgivable oversimplifications.
Horror. I can’t even. Maybe we should interpret that as part of a set with QR Codes. Or US Politics.
If you were trying to use this metadata across a user base to build targeted messaging and experiences, based on how my own authentic interests align and misalign with this data, I can tell you you’d miss more often than you’d hit. Which would maybe be OK if you’d built learning cycles into your process, so you could continually refine your understanding of your audience and what resonated with them.
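To make that miss rate concrete, here’s a toy sketch of how you might score inferred interest tags against a person’s authentic interests. Everything here is an assumption for illustration — the `tag_hit_rate` helper and both tag lists are hypothetical, not the API or output of any real analytics tool:

```python
# Hypothetical sketch: what fraction of the tags an analytics tool
# inferred about me actually match interests I'd claim myself?

def tag_hit_rate(inferred, authentic):
    """Fraction of inferred tags that match an authentic interest
    (case-insensitive). Returns 0.0 if nothing was inferred."""
    inferred_set = {t.lower() for t in inferred}
    authentic_set = {t.lower() for t in authentic}
    if not inferred_set:
        return 0.0
    return len(inferred_set & authentic_set) / len(inferred_set)

# Tags like those the app surfaced (illustrative subset)
inferred = ["Big Data", "Data Visualization", "Parenting",
            "Country Music", "Classical Music", "Horror",
            "Email Marketing"]

# Interests I'd actually claim (hypothetical)
authentic = ["Big Data", "Data Visualization", "Email Marketing",
             "Bacon"]

rate = tag_hit_rate(inferred, authentic)
print(f"Hit rate: {rate:.0%}")  # well under 50% -- more misses than hits
```

A learning cycle, in this framing, is whatever process feeds observed hits and misses back into the next round of inference so that the hit rate climbs over time instead of staying frozen at the first guess.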
Data is just dots. Analysis is trying to draw lines between or around those dots, but there’s no guarantee you’ll produce anything truly meaningful. It usually takes some understanding of context to make any sense, or meaning, out of data, and that’s more true the more abstract and open-ended the data is, such as social metadata.
A sound business data strategy involves both framing up data collection so that what you collect is most useful, and looking at the data collected in the context of business realities.
Now if you’ll excuse me, I have a virtual reality nature hike to plan gamification strategies for.