In this blog, Insight shares expert opinion and advice from world-renowned psychologist Paul Marsden on how brands can use AI to predict behaviour and provide tailored experiences. Marsden explains:
- How AI can provide better customer experiences by understanding humans' five key personality traits
- The dangers of trying to manipulate people’s behaviour and thoughts
- Why digital literacy is more critical than ever.
Brands are increasingly relying on AI to provide personalised recommendations for customers by serving content and products that closely match an individual’s specific personality.
By knowing the kind of content that individuals have previously responded to, it's possible to improve future recommendations for products and content. An example is the "You might like" section on Netflix, which makes recommendations based on the content you've previously watched.
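As a rough illustration of the idea (this is not Netflix's actual algorithm; the titles, genres, and scoring here are invented for the example), a simple content-based recommender can rank unseen items by how much they overlap with what a viewer has already watched:

```python
# Toy content-based recommender: rank unseen titles by how many genres
# they share with the viewer's watch history. All data is made up.

catalogue = {
    "Title A": {"drama", "crime"},
    "Title B": {"comedy", "romance"},
    "Title C": {"crime", "thriller"},
    "Title D": {"documentary"},
}

def recommend(watched, catalogue):
    """Rank unseen titles by genres shared with the watch history."""
    seen_genres = set().union(*(catalogue[t] for t in watched))
    unseen = [t for t in catalogue if t not in watched]
    # Higher score = more genres in common with past viewing.
    return sorted(unseen, key=lambda t: -len(catalogue[t] & seen_genres))

print(recommend({"Title A"}, catalogue))  # "Title C" ranks first (shares "crime")
```

Real systems add many more signals (ratings, viewing time, what similar users watched), but the underlying principle is the same: past responses inform future suggestions.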
There’s little doubt among psychologists that data can be used to accurately predict behaviour.
“While human behaviour is complex, simple models can still have some predictive power, based on the knowledge that people have five different personality traits,” says Marsden.
These personality traits, often called the "Big Five", measure whether an individual is:
- Open to new experiences / conventional
- Conscientious / spontaneous
- Extraverted / introverted
- Agreeable / competitive
- Emotionally stable / neurotic
These five traits can predict all kinds of behaviour, from relationship compatibility to consumer choices. Technology also makes it possible to identify a person's traits and tailor content to fit them, which is a powerful way to drive responses to certain kinds of content.
Research has demonstrated that we can make someone more likely to click on an advert, respond to a message or buy a product if we first identify the dominant traits of their personality and then present information in a way that matches them. For example, if a person fits the extravert personality type, presenting content specifically tailored for extraverts is more likely to generate a response.
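That matching step can be sketched in a few lines (a minimal illustration only: the trait scores and ad copy are invented, and real systems infer traits from behavioural data rather than receiving them directly):

```python
# Toy trait-matched targeting: choose the message variant written for a
# person's dominant Big Five trait. Profiles and copy are invented.

AD_VARIANTS = {
    "extraversion": "Join thousands of others at our launch event!",
    "neuroticism": "Peace of mind, guaranteed, with a full refund policy.",
    "openness": "Discover something completely new.",
}

def pick_variant(trait_scores, variants):
    """Pick the copy tailored to the person's highest-scoring trait."""
    dominant = max(trait_scores, key=trait_scores.get)
    # Fall back to a generic message if no variant exists for the trait.
    return variants.get(dominant, "Check out our new product.")

profile = {"extraversion": 0.9, "neuroticism": 0.2, "openness": 0.5}
print(pick_variant(profile, AD_VARIANTS))  # extraversion variant wins
```

The design choice that matters here is the fallback: a targeting system should degrade to a neutral message when it has no confident match, rather than guess.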
Marsden suggests this process is surprisingly acceptable to people across the Western world, but only if they are kept aware that it is being used.
“From my research, across Germany, the UK and the US, people are generally ok with this form of micro data targeting, as long as they are aware of it and can opt into it. It’s a matter of consent. Where they get really upset is when they are manipulated without knowing it,” he says.
However, Marsden explains that alongside its many benefits, misuse of data-driven personalisation risks negative consequences, both for brands and for wider society.
Marsden suggests that a dangerous side to combining technology with insight into personality traits is the strong potential to manipulate beliefs, exploit prejudices and shape opinions.
At that stage, this same technology would no longer be used merely to display certain types of content, but instead to frame crucial debates and opinions in a specific way, deliberately provoking a certain response.
It may already be happening: there is evidence that public opinion during recent political events has been deliberately shaped by third parties, including through the use of AI technology.
Social media in particular creates echo chambers where a person’s views can be reinforced by reading and consuming content that validates those beliefs, making it more likely that they will then act and vote in accordance with what they perceive to be a majority view.
He believes that consent is key to the use of this technology, and that in future, the prevalence of AI in targeted online content will require a trust logo to alert readers to its use and reassure them it can be trusted not to manipulate their views.
Despite this potential for negative consequences, Marsden remains positive about AI and is keen to highlight that the same technology can equally be used to improve people's lives, nudging them towards more positive decisions, such as healthier lifestyles.
But he believes greater digital literacy is key to educating the public about the numerous issues surrounding AI and its increasing prevalence in all walks of life, and to ensuring people remain informed.
Rather than learning history about ancient kings and queens, he suggests, people may benefit more from an education that teaches how AI technology works, its relevance to the modern world, and how it could mislead them unless it is well understood.
To hear our interview with Paul, click here, or to read more expert opinion and insights, see our blogs here.