If ‘transparency is the new black’ for brands, is AI the new opaque?

It’s raining on a Tuesday afternoon at Cambridge University’s Judge Business School, where a room of business leaders is discussing the future of consumer behaviour and marketing.

Marcia Kilgore, founder of a string of successful brands, captivates her audience with a description of the devious world of cosmetics. Presenting her successful new flagship brand, Beauty Pie, she declares: “Transparency is the new black”.

This rallying cry to be open, authentic, and honest is mirrored by many of the successful disruptive founders and influencers around her. Alex Letts, the founder of Unbank, describes how he has launched a fintech company dedicated to unwinding unethical bank charges. Catherine Summers, a top fashion blogger, describes how she has forged trust with her thousands of followers by presenting her authentic self.

The theory: if we are honest and authentic, our customers will fall in love with us and our businesses will flourish. But is it true?

So are customers always authentic and transparent?

Diana Fleischman, a lecturer in evolutionary psychology, disagrees. Her theory of ‘Adaptive Trainers’ insists that “successful social communication relies on unacknowledged motivations of being deeply inauthentic”.

In fact, it seems entire areas of our brain are dedicated to remembering others’ preferences so we can buy them a great birthday present, in an attempt to manage their future behaviour. But this apparent act of generosity is undermined when you realise that the effort is simply ‘benevolent self-interest’.

In short, we quickly become Machiavellian when we engage in any relationship we expect to recur.

Richard Summers, Chief Scientist at CrowdCat, speaks about empirical evidence from studying millions of people and their buying decisions around hundreds of products. The data show clearly that people are unaware of why they do the things they do. “When asked to describe why they make a decision, it is not that they intentionally lie; it’s simply that their beliefs about their behaviour are at odds with their actual behaviour”.

Although not as inflammatory as Diana’s position, it is another indicator that we are not so authentic. What we believe to be true is frequently at odds with what is true.

If our customers are not authentic, then how can a brand be?

The answer might be that they are pulling the same trick as the human brain: creating an authentic shell of beliefs about what they are doing, separate from what their organisation is actually doing. For example, a separation between management intent and organisational process.

One mechanism creating this separation is Artificial Intelligence (AI) and machine learning. These AI systems are computer programs that learn patterns from data.

By learning patterns, they are able to predict things, whether that’s Netflix suggesting the movie you would like to watch next, or the analytics that allow Amazon packages to leave the warehouse before the delivery address is confirmed.
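As a minimal sketch of what ‘learning patterns in order to predict’ means in practice (the data and feature names here are invented for illustration, not drawn from any real recommender):

```python
# A toy "learn the pattern, then predict" model: given a customer's past
# behaviour, predict whether they will buy.
from sklearn.linear_model import LogisticRegression

# Each row is one past customer: [visits_last_month, items_viewed, past_purchases]
X = [[1, 3, 0], [8, 40, 5], [2, 7, 1], [12, 60, 9], [0, 1, 0], [6, 25, 3]]
y = [0, 1, 0, 1, 0, 1]  # 1 = that customer went on to buy

model = LogisticRegression()
model.fit(X, y)  # learn the pattern from historical examples

new_customer = [[7, 30, 4]]
print(model.predict(new_customer))        # the predicted outcome
print(model.predict_proba(new_customer))  # the confidence behind it
```

Real systems operate at vastly greater scale, but the shape is the same: historical behaviour in, predictions out.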

AI opens up enormous possibilities through scale, speed, and low cost. At a pace comparable to the internet, AI is touching every business process and radically affecting the products available to consumers. The Gmail app offers up appropriate responses to an email, but it’s not a human that’s reading the mail. Harvard Business Review cites a virtual assistant named Angie that sends about 30,000 emails a month and interprets the responses to determine who is a hot lead. Only its recommendations are followed up; the rest are ignored.

The most interesting thing about AI, however, is not what it can do but what it can’t. AI is not self-aware; it can’t tell you why it does the things it does. It very effectively learns patterns that make things work better, but nobody knows what patterns it’s learning.
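To make that opacity concrete, swap the transparent toy model above for a small neural network (same invented data) and ask it for its ‘reasons’; all it can return is weights:

```python
# The learned "policy" of a neural network is a pile of numbers with no
# human-readable rule attached.
from sklearn.neural_network import MLPClassifier

net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
net.fit(X, y)  # same toy data as the earlier sketch

for layer, weights in enumerate(net.coefs_):
    print(f"layer {layer}: {weights.shape[0]} x {weights.shape[1]} weights")
# Nothing in those matrices says *why* a given customer was predicted
# to buy; the pattern is real, but it is not inspectable.
```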

These same forward-thinking brands that embrace transparency are also embracing the brave new world of AI, which leaves an interesting contradiction. These companies often appear to advocate transparency, authentically showing us what they do. However, the AI processes at their core are utterly opaque.

It’s not that the customer won’t be told what’s going on, it’s that the brand can’t tell them.

Is AI capable of being ethical?

As the discussion moves on to ethics and data, this theme develops into a cautionary point of view. Ray Anderson, one of the icons of technology innovation in Cambridge, tells us “we are often avoiding our ethical dilemmas by placing those decisions in the hands of AI”.

From Uber’s bidding system to Facebook’s edge-ranking algorithms, AI systems with seemingly benign or even benevolent purpose are delivering less-than-ethical behaviour.

Remember, these systems are often the only option for companies where a human-involved process simply cannot cope. In one minute, 18,000 minutes of video are uploaded to YouTube, making human-only moderation implausible. It is not the systems’ design that is biased, but the human behaviour they learn from and react to.

Increasingly, decisions are being made by machines that not only have no morals but are also inscrutable. When we intentionally create a policy that discriminates against disadvantaged people, we create moral outrage; so what do we do with an AI policy that cannot be inspected?

Could we audit AI to show transparency?

To better understand humans, we use cognitive psychology, exploring our behaviour and thought processes. DeepMind, Alphabet’s AI research lab, considers cognitive psychology one possible solution to a problem that needs one. A recent paper proposed a new approach: employ methods from cognitive psychology to understand deep neural networks, treating the trained network as an experimental subject and probing its behaviour with controlled stimuli.

Considering cognitive psychology as a tool to probe this opaqueness begins to open up a route to audit an AI system, and perhaps to reveal its biases.
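A behavioural audit in that spirit might look like the sketch below: hold a profile constant, vary a single attribute, and compare the model’s responses, much as a psychologist varies one stimulus at a time. The scoring function here is a deliberately biased stand-in, not any real product:

```python
# A psychology-style probe: vary one attribute, hold everything else
# constant, and measure how the opaque model's output shifts.
def audit_attribute(model_score, base_profile, attribute, values):
    """Return the model's score for each value of a single attribute."""
    results = {}
    for value in values:
        profile = dict(base_profile, **{attribute: value})
        results[value] = round(model_score(profile), 3)
    return results

# A stand-in model with a hidden bias the probe should surface:
def score_lead(profile):
    score = 0.5 + 0.05 * profile["emails_opened"]
    if profile["gender"] == "female":
        score -= 0.2  # the buried, unintended policy
    return score

base = {"emails_opened": 4, "gender": "male"}
print(audit_attribute(score_lead, base, "gender", ["male", "female"]))
# {'male': 0.7, 'female': 0.5} -> a systematic gap that no amount of
# staring at the model's internals would have shown us directly.
```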

It should be recognised, however, that unlike the systems we have become used to, AI is continually learning and adapting. Where a typical IT system gives repeatable, consistent results, an AI system will need repeated audits.
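Because the model keeps moving, a probe like the one above cannot be a one-off; it has to become a recurring check against a recorded baseline. A minimal sketch, reusing the hypothetical audit_attribute and score_lead from the previous example:

```python
# Re-run the same behavioural probe after each retraining and flag drift.
def gender_gap(model_score, profile):
    scores = audit_attribute(model_score, profile, "gender", ["male", "female"])
    return scores["male"] - scores["female"]

baseline_gap = gender_gap(score_lead, base)

def recheck(current_model, tolerance=0.05):
    gap = gender_gap(current_model, base)
    if abs(gap - baseline_gap) > tolerance:
        print(f"audit alert: bias gap moved from {baseline_gap:.2f} to {gap:.2f}")

recheck(score_lead)  # silent today; run again after the model has learned more
```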

And then what? If an AI system has derived that female leads shouldn’t be followed up, or that less-educated people should be subject to greater fraud-checking, what do we do?

Are these systems deliberately opaque or purely pragmatic?

If AI becomes the new way to avoid scrutiny, this would rightly concern people. But is it the brand or the consumer that is to blame?

Our demand for authenticity as consumers may represent an impossible contradiction for businesses that rely on machines to meet our needs.

There is perhaps an irony if, by allowing AI to do things they wouldn’t intentionally do elsewhere in their business, brands mimic the human propensity for self-deception: saying one thing while doing another.

This article was authored by Richard Summers and Tim Sutton following the 2017 Cambridge University Judge Business School conference, The Science of Disruption.