It’s a common theme in popular culture: advancing technology robbing societies of their freedoms and their power – from George Orwell’s 1984 to Philip K. Dick’s Minority Report, and even James Cameron’s Terminator film franchise. Each story frames futuristic technology as a system of oppression through which, stripped of privacy and subjected to constant surveillance, people are no longer in control of their own destinies.
Today, fiction has merged with reality. It has become common enough for artificial intelligence to take on roles in our workplaces, control our homes through home-hub devices and appliances, and even monitor our health via wearable fitness trackers and smartwatches. Just as in pop culture, all of these developments have been marketed to the public as making our lives richer, easier, healthier and, simply, better. But is the reality living up to this dream?
We may think that, until recently, such tech existed strictly within the realms of science fiction and fantasy, but in fact companies have been using forms of AI and smart tech to shape our lifestyles since long before Alexa and Fitbit became household names. For example, store loyalty cards have quietly been collecting and analysing data on customer spending habits for years. First introduced with the promise of regular discounts and other benefits, these cards encourage further spending by requiring customers to build up points before they can claim those discounts. Going further, they routinely nudge customers to purchase items outside their typical shopping lists, using customers’ purchasing data to make recommendations and offer discounts on new items.
And whilst the offers provided by loyalty cards are seen by some customers as helpful – if in a less dystopian manner than Orwell envisioned – it is all too easy for the tech to cross the invisible line between helpfulness and privacy invasion, turning satisfied customers into distrustful and unhappy ones.
In 2012, an anecdote told by a statistician working for US retail giant Target, concerning the store’s loyalty card scheme, was widely circulated following its publication in the New York Times. During an interview with the journalist, the statistician described a disgruntled customer who marched into a store in Minneapolis demanding to see the manager. He had been enraged to discover that the company was sending vouchers to his teenage daughter encouraging the purchase of baby clothes, cribs and other baby-care items, and he demanded to know why the retailer was being so irresponsible as to advertise pregnancy to a young girl.
However, unknown to him and much to his embarrassment, his daughter was actually expecting. It turned out that the retailer’s loyalty card algorithm had spotted a trend in the items she was typically purchasing week on week (and which items she had stopped buying), and had used these data points both to correctly predict her pregnancy and to encourage her future spending by suggesting related items via vouchers and discounts.
And aside from the upset the news of the impending arrival no doubt caused her parents, the fact that the retailer had identified this deeply personal situation, and used it to its advantage, before their daughter had even had the opportunity to share the news herself, did not sit well with them.
Though today’s smart tech is no doubt far more advanced than it was in 2012, eight years on the issues of trust and the lack of a “human touch” remain common concerns with the AI-enabled devices we choose to put in control of our homes and our lives. Together with my colleagues – Prof Rebecca Walker Reczek of Ohio State University, Prof Markus Giesler of York University in Canada and Prof Simona Botti of London Business School – I recently published an article focusing on the growing influence that lifestyle-enriching advanced tech can have on our daily lives, and on the lived realities of the customer experience.
In it, we argue that, whilst developers are continually required to find new ways to make monitoring and surveillance palatable to customers by linking them to convenience, productivity, safety, or health and well-being, they must also constantly push the boundaries of what private information consumers will share, through a complex landscape of notifications, reminders and nudges intended to initiate behavioural change. Thus, AI can transform consumers into subjects who are complicit in the commercial exploitation of their own private experience.
And this is where the problems occur.
As developers have continued to prioritise technological development over the user experience, a gulf has emerged between a product’s capabilities and the expectations and experiences of its users. The biggest reason is a significant lack of human influence at the development level. To bridge the gulf, companies must develop a customer-centric view of AI that places value both on advancing technological capabilities and on how those capabilities are experienced by customers, for better or for worse. To help companies approach this challenge, my co-researchers and I developed a framework that separates out the four core experiences consumers have with AI:
Data capture – the AI collects users’ data in order to provide a customised service, for example a local weather report
Classification – the AI makes recommendations based on your previous use and on the common characteristics of other users who fit your demographic
Delegation – the AI performs tasks on behalf of users, such as Siri searching for a phone number or making a call
Social – the AI facilitates communication through humanised services such as a chatbot
For each of these experiences, the framework identifies where the sociological and psychological tensions occur.
For example, in a modern-day spin on the Target loyalty card scenario, social media platforms use AI to analyse users’ personal data and tailor the advertising they are exposed to – but this crosses an ethical line when those recommendations infiltrate what users believe to be a private experience. Another example is a chatbot that is incapable of judging the sensitivity or urgency of the information a customer shares with it, and answers questions in a tone-deaf or detached manner, causing upset or aggravation.
Such scenarios can be hard for a company to recover from, but our study, published in the Journal of Marketing, suggests how to prevent them from happening in the first place: marketing professionals should be included in AI product development from the outset, to help organisations provide a better user experience. Encouraging software designers to combine their technical expertise with the human-focused values of marketers would create the opportunity to question the design and deployment of such tech more rigorously, and to introduce evaluative criteria beyond its practical capabilities.
Some organisations are already making strides in this area, crafting ethical guidelines around AI’s use. However, these efforts do not specifically carve out a role for marketers, and neither do the guidelines put in place by international bodies such as the European Commission. To continue excluding marketers from the discussion is to continue crafting and delivering products which might be smarter than the best and brightest of Mensa, but which lack the personal considerations vital for maintaining ethical standards and avoiding the dystopian future of our sci-fi nightmares.
Stefano Puntoni is a Professor of Marketing at Rotterdam School of Management, Erasmus University (RSM)