If Brand Transparency is the new Black, then is AI the new Opaque?


It’s raining on a Tuesday afternoon at Cambridge University’s Judge Business School, where a room of business leaders is discussing the future of consumer behaviour and marketing.

Marcia Kilgore, founder of a string of successful brands, captivates her audience with a description of the devious world of cosmetics. Presenting her successful new flagship brand, Beauty Pie, she declares: “Transparency is the new Black”.

This rallying cry to be open, authentic and honest is mirrored by many of the successful disruptive founders and influencers around her. Alex Letts, the founder of unbank, describes how he has launched a fintech company dedicated to unwinding the unethical nature of bank charges. Catherine Summers, the top fashion blogger, describes how she has forged trust with her thousands of followers by authentically presenting who she is.

The thinking? If we are honest and authentic, then our customers will fall in love with us and our businesses will flourish. But are we sure this is really true?

So Are Customers Always Authentic and Transparent?

Diana Fleishmann, a lecturer in evolutionary psychology, would strongly disagree. Her theory of ‘Adaptive Trainers’ insists that “successful social communication relies on unacknowledged motivations, on being deeply inauthentic”.

In fact, it seems entire areas of our brains are dedicated to remembering others’ preferences so we can buy them a great birthday present – in an attempt to manage their future behaviour. But this apparent act of generosity is undermined when you realise why you make that effort – with what might be described as ‘benevolent self-interest’ at its heart.

In short, we quickly become Machiavellian when we engage in any relationship we expect to recur.

Richard Summers, Chief Scientist at CrowdCat, speaks about empirical evidence from studying millions of people and their buying decisions around hundreds of products. What is absolutely clear from the data is that people are unaware of why they do the things they do. “When asked to describe why they make a decision, it is not that they intentionally lie; it’s simply that their beliefs about their behaviour are totally at odds with their actual behaviour,” he confirmed.

Although not as inflammatory as Diana’s position, it is another indicator that we are not, at all, authentic. What we believe to be true is frequently at odds with what is true.

Whether through self-deception or the desire to make people like us, authenticity is not, then, a straightforward human position.

If our customers are not authentic, then how can a brand be?

The answer might be that they become capable of pulling the same trick the human brain does: creating an authentic shell of beliefs about what they are doing that is entirely separate from what their organisation is actually doing at the coalface.

In short, a separation between Management Intent and Organisational Process.

One mechanism creating this separation might be AI – Artificial Intelligence, or machine learning.

AI systems are, at their core, computer programs that learn patterns.

By learning patterns they are able to predict things – from Netflix predicting which film you would like to watch next, to Amazon’s predictive analytics that allow packages to leave the warehouse before the delivery address is known.
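The idea can be illustrated with a deliberately tiny sketch (the titles and viewing history are invented for illustration, and real recommenders are vastly more sophisticated): learn from past behaviour which title most often follows another, then use that pattern to predict.

```python
from collections import Counter, defaultdict

# Hypothetical viewing history: pairs of (title watched, title watched next).
history = [
    ("Stranger Things", "Dark"),
    ("Stranger Things", "Dark"),
    ("Stranger Things", "Black Mirror"),
    ("Dark", "Black Mirror"),
]

# "Learning" here is just counting which title follows which.
follows = defaultdict(Counter)
for current, nxt in history:
    follows[current][nxt] += 1

def predict_next(title):
    """Return the most frequently observed follow-up title, or None if unseen."""
    counts = follows.get(title)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("Stranger Things"))  # "Dark" – the most common successor
```

Even this toy version shows the shape of the trick: the program never stores a rule anyone wrote down; the “policy” emerges from the data it was fed.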

AI opens up enormous possibilities, through scale, speed and low cost. As the internet did, AI is rapidly touching every business process and radically affecting the products available to consumers. The Gmail app on my phone offers up appropriate responses to an email I’ve just received – but it’s not a human that’s reading my mail. Harvard Business Review cites a virtual assistant named Angie, which sends about 30,000 emails a month and interprets the responses to determine who is a hot lead. Only her recommendations are followed up; the rest are ignored.

The most interesting thing about AI, however, is not what it can do but what it can’t. AI is not self-aware; it can’t tell you why it does the things it does. It learns patterns, and it does so very effectively; it can make things work better, but nobody knows what patterns it is learning.

These same forward-thinking brands that embrace transparency also embrace the brave new world of AI, which leaves an interesting contradiction. Often these companies are transparent on the surface, authentically showing us exactly what they do. At their heart, however, is a core of processes run by AI, and these processes are utterly opaque. It’s not just that the customer doesn’t know what they are doing; what is interesting is that the brand doesn’t know either.

Is AI capable of being ethical?

As the discussion moves on into ethics and data, this theme is developed into a cautionary point of view.

Ray Anderson, one of the icons of technology innovation in Cambridge, warns: “We are often avoiding our ethical dilemmas by placing those decisions in the hands of AI”.

From Uber’s bidding system to Facebook’s edge ranking algorithms, AI systems with seemingly benign or even benevolent purpose are delivering less-than-ethical behaviour.

Remember that these systems are often the only option for companies where a human-involved process simply cannot cope. Every single minute, some 18,000 minutes of video are posted on YouTube, meaning human-only moderation has long been impossible. The bias lies not in their design but in the human behaviour they learn from and react to.

Increasingly, decisions are being made by machines that not only have no morals but are also incapable of being scrutinised. If we intentionally create a policy that discriminates against women, or lower-income people, or disabled people, we create moral outrage. The truth, however, is that AI is already creating its own policy; it’s just that nobody knows, or can introspect it to discover, that it is doing so.

Could We Audit AI to Show Transparency?

To better understand humans, cognitive psychology explores behaviour and thought processes. DeepMind – the Google/Alphabet think tank – considers cognitive psychology one possible solution, if one is needed: a recent paper proposed a new approach that employs methods from cognitive psychology to understand deep neural networks.

Treating cognitive psychology as a way to probe exactly this opaqueness begins to open up a route to audit, and then perhaps highlight transparently, any bias found in such an AI system.
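One behavioural-audit technique borrowed from this spirit can be sketched very simply: treat the AI as a black box, present it with matched inputs that differ only in one sensitive attribute, and measure the gap in its output. The scoring function, profiles and attribute names below are entirely hypothetical stand-ins, not any real deployed system:

```python
def score_lead(profile):
    # Stand-in for an opaque learned model. In a real audit this would be
    # the deployed AI system, queried as a black box; here we plant a bias
    # deliberately so the audit has something to find.
    base = 0.5 + 0.1 * profile["emails_opened"]
    return base - (0.2 if profile["gender"] == "female" else 0.0)

def audit_attribute(model, profiles, attribute, value_a, value_b):
    """Average output gap when only `attribute` is flipped between two values."""
    gaps = []
    for p in profiles:
        a = {**p, attribute: value_a}
        b = {**p, attribute: value_b}
        gaps.append(model(a) - model(b))
    return sum(gaps) / len(gaps)

profiles = [{"emails_opened": n, "gender": "male"} for n in range(5)]
gap = audit_attribute(score_lead, profiles, "gender", "male", "female")
print(f"average male-female score gap: {gap:.2f}")  # exposes the hidden bias
```

The point of the sketch is that nothing inside the model needs to be understood; like a psychology experiment, the audit works purely by observing behaviour under controlled stimuli – which is also why it would need repeating as the model changes.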

It should be recognised, however, that unlike the systems we have become used to, AI is continually learning and adapting. Rather than a typical IT system that gives repeatable, consistent results, AI will need multiple return audits.

And then what? If an AI system believes it has learned that women are not great leads, or less-educated people need more fraud-checking, what do we do?

Are These Systems Deliberately Opaque, or Purely Pragmatic?

If AI should become the new way to be opaque and avoid scrutiny, this would rightly concern people. But is it the brand or the consumer that is to blame?

We, the consumers, in our belief in being authentic, are demanding total transparency from our brands – and so brands are responding. This represents a perhaps impossible contradiction for businesses asked to survive by appearing likeable and transparent while selling us things.

There is perhaps a delicious irony if, in allowing AI to do things we could not intentionally do elsewhere in our business, brands mimic the human capacity for self-deception, and for saying one thing while doing another.


This article was authored by Richard Summers and Tim Sutton, following the 2017 Cambridge University Judge Institute conference ‘The Science of Disruption’.