The Brain Science Behind Storytelling and Details from an Ad Success Case Study

Our friends at Unilever Portugal invited us for a session on the brain science behind storytelling, along with a showcase of part of the pilot we ran with them. Although the link was not immediately evident, it turned out to be a great opportunity to present research on why we, as humans, build stories, and to connect it with what we do at MindProber. Since it was a session on storytelling, we shared some stories from cognitive neuroscience and social psychology, with the overall message that building narratives results from a multifaceted set of skills that have conferred adaptive fitness on humans at many levels throughout evolution.

Some of the reasons why this is so may be both surprising and humbling. These were the stories, and the lessons, we shared with them:

Brain lesions: split-brain patients

The first story was about brain lesions. I shared the cases of split-brain patients, famously described by Michael Gazzaniga. These are patients whose corpus callosum (the main bundle of white-matter fibers connecting the two brain hemispheres) has been severed as a therapy for particularly severe cases of epilepsy, back in the day. The curious thing about these patients is that, due to the particular neuroanatomy of the visual pathways, it is possible to show visual stimuli to the right or left hemisphere in isolation. Since the corpus callosum is severed, the processing of the stimulus is confined to that hemisphere.

Following in the footsteps of Nobel laureate Roger Sperry, Gazzaniga showed several interesting things in these patients. With some exceptions, the left hemisphere (in right-handed people) can consciously understand written words and talk about whatever stimulus is being viewed, while the right hemisphere cannot. However, the right hemisphere can follow procedures when instructed, even if the patient is not aware of why.

For instance, if we show the word “face” to the left hemisphere and ask the patient to read it, they will answer “face”. If we show the same word to the right hemisphere, however, the patient will report seeing nothing. Yet if we instruct them to draw what they see with the left hand (which is controlled by the right hemisphere), they will draw a face; and if we ask them to press a button with the left hand every time “face” appears, they will do so, without being consciously aware of it.

Gazzaniga’s interpreter theory 

The insightful element for storytelling comes from the observation that even when the individual is not aware of why they are performing a given behavior (because the instruction reached only their right hemisphere), their left hemisphere confabulates, building a narrative that justifies their actions (Gazzaniga calls this the interpreter theory). In a widely known case, Gazzaniga and his team presented a split-brain patient with a different word in each visual field and instructed the patient to choose, from a group of pictures, an image that fit what they saw. For example, when “Music” was flashed to the left hemisphere and “Bell” to the right hemisphere, the patient would be aware of the word “Music” but not of “Bell”.

However, when asked to choose from a group of pictures related to music, the patient would choose a bell tower. The cool thing is that, when asked to justify the choice, the patient answered “Music – last time I heard any music was from the bells outside here, banging away” (see all the beautiful experiments here).

So, this was the first lesson: Our left brain builds narratives to make sense of the world and our actions (even if the narratives are confabulated).

Primary visual cortex lesions 

The second group of enlightening cases concerns patients with lesions in the primary visual cortex. These patients have a condition known as cortical blindness: they are blind because of a brain lesion, even though their eyes are intact. A great series of studies from Beatrice de Gelder's lab suggests that these patients process visual affective stimuli anyway. Although they are not aware of the stimulus being shown to them, they are able to “guess” its emotional tone above chance level and show physiological emotional responses to it.

These results have been interpreted in light of the dual-route model of emotional vision, popularized by Joseph LeDoux, which suggests that there are at least two visual pathways: a cortical one, slower, that usually results in visual awareness, and a subcortical one, faster, that does not. (This brought me back to my academic work, where we looked at emotional processing through each of these pathways in psychopathy.)

Therefore, lesson number two: Affective information is processed at a subconscious level and exerts an impact on our responses.

Power of affective processing 

I followed up on this lesson with a story about the power of affective processing, highlighting the research tradition on both implicit and explicit evaluative conditioning. These effects were first popularized by Robert Zajonc's studies on suboptimal affective priming. He showed that pairing positive and negative stimuli with neutral images contaminates the judgment of the neutral pictures (in other words, participants report liking the pictures paired with positive stimuli more and the pictures paired with negative stimuli less). These effects occur whether the affective pictures are presented consciously or unconsciously.

Emotional information and moral judgment 

Then we showed that emotional information also affects moral judgment, highlighting Joshua Greene's work on the neural bases of moral reasoning using an ethical thought experiment known as the Trolley Problem. Back in the 1960s, Philippa Foot introduced the modern version of a classical thought experiment in ethics that goes along these lines:

“There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side-track. You have two options: Do nothing, and the trolley kills the five people on the main track. Pull the lever, diverting the trolley onto the sidetrack where it will kill one person.  Which is the most ethical choice?” 

Judith Thomson, in turn, produced multiple versions of this problem. One of them, “The fat man”, reads like this:

“As before, a trolley is hurtling down a track towards five people. You are on a bridge under which it will pass, and you can stop it by putting something very heavy in front of it. As it happens, there is a very fat man next to you – your only way to stop the trolley is to push him over the bridge and onto the track, killing him to save five. Should you proceed?” 

Participants typically say they would change the train track in scenario 1, but wouldn’t push the fat man in scenario 2. Why is this, given that the utilitarian outcome of the situation is the same? 

Greene showed that scenario 2 (and similar scenarios in which directly harming others is imagined) activates brain areas usually involved in emotional processing, and that this biases moral judgments. (We could also have mentioned Jonathan Haidt's amazing work on moral dumbfounding, which, among other things, showed that people produce strong intuition-driven moral responses they sometimes cannot justify when challenged, or Jorge Moll's great work on the dependence of moral reasoning on brain regions typically related to emotional and social cognition, but we had no time.)

Lesson number three: Affective information shapes our attitudes and moral judgments, even if we're not aware of it.

Readiness potential 

I guess the most shocking lesson came from Benjamin Libet's controversial work. In a seminal experiment, he asked subjects wearing an EEG cap to move a finger whenever they wished while watching a clock, and then to report the clock position at which they had decided to move. The striking result was that a brain response (termed the readiness potential) could be detected hundreds of milliseconds before subjects reported having formed the intention.
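
For the technically curious, here is a minimal sketch of the general logic behind detecting a movement-locked signal such as the readiness potential: cut EEG epochs around each self-initiated movement and average them, so that slow pre-movement ramps stand out from the noise. The sampling rate, epoch window, and synthetic data below are illustrative assumptions, not Libet's actual setup.

```python
# Sketch: averaging EEG epochs time-locked to movement onsets.
# All numbers and the fake data are placeholders for illustration only.
import numpy as np

FS = 250                      # assumed EEG sampling rate (Hz)
PRE_S, POST_S = 2.0, 0.5      # epoch window: 2 s before to 0.5 s after movement

def movement_locked_average(eeg, onsets_s):
    """Average single-channel EEG epochs time-locked to movement onsets."""
    pre, post = int(PRE_S * FS), int(POST_S * FS)
    baseline = int(0.2 * FS)                  # earliest 200 ms of the epoch as baseline
    epochs = []
    for t in onsets_s:
        idx = int(t * FS)
        if idx - pre < 0 or idx + post > len(eeg):
            continue                          # skip epochs falling off the recording
        epoch = eeg[idx - pre: idx + post].astype(float)
        epochs.append(epoch - epoch[:baseline].mean())
    return np.mean(epochs, axis=0)            # noise averages out, slow ramps remain

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    eeg = rng.normal(0.0, 5.0, size=FS * 60)       # one minute of noisy fake "EEG"
    onsets = [5.0, 15.0, 25.0, 35.0, 45.0]         # self-paced finger movements (s)
    print(movement_locked_average(eeg, onsets).shape)   # (pre + post,) samples
```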

The fourth and most brutal lesson was: We build narratives to justify our actions, to build a story that places our intentions as the causes of our actions, when in fact “we” don’t cause them (whatever “we” is). 

Why do we need to tell stories? 

After these stories, the question is: why? Why do we need to tell stories, and what do we, as a species, gain from doing so? This is clearly not the full story, but I mentioned that building narratives can be a highly adaptive by-product of a great trick we humans perform: describing physical motion using what Daniel Dennett has termed the intentional stance. We showed Heider and Simmel's animation, from an experiment back in the 1940s which showed that people automatically attribute intentional states to a bunch of moving triangles, squares, and circles (and mentioned that autistic individuals tend not to do this). Then we highlighted the advantages such a trick confers: if we treat the other as an agent with intentional states, we can make sense of their behavior and simulate their intentional states, which is a powerful weapon if you happen to be trying to cooperate with them, cheat them, or avoid being cheated. That is a neat trick when you are a highly social species like ours. And if you do that to others, why not do it to yourself, by building narratives that explain your own actions in intentional terms?

Of course, as I mentioned, this is not the full story, and storytelling likely confers other adaptive advantages, such as the ability to transmit adaptive memes (as defined by Richard Dawkins, not Facebook or Reddit) through the creation and transmission of legends.

The final lesson: We like narratives that involve others. We like narratives because building stories allows us to make sense of the world and ourselves. This is an evolved ability that gives humans a specific evolutionary advantage. 

And then it was time for the second part of the presentation, one that included MindProber. 

Unilever Portugal Case Study

Applying storytelling principles to ad success

I went on to apply these principles to ad success, illustrating the point with a study by Quesenberry and Coolsen, which showed that Super Bowl ad success is related to the number of dramatic acts in the ad. Notably, dramatically richer Super Bowl ads are better rated by viewers.

Paul Zak further showed that good stories, with engaging and clear narrative acts, are intimately related to emotional activation processes (with physiological correlates such as ACTH and oxytocin release). And, in fact, a beautiful study by Reagan et al., using text sentiment analysis, showed that books from Project Gutenberg mostly follow six types of emotional arcs intimately connected to their narrative acts.
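
To give a flavor of that kind of analysis, here is a minimal sketch of a sliding-window sentiment arc. The tiny valence lexicon, window count, and sample text are placeholders for illustration; Reagan et al.'s actual pipeline is considerably more sophisticated.

```python
# Sketch: score sliding windows of a text with a toy valence lexicon and
# return the resulting "emotional arc". Lexicon, window count, and sample
# text are illustrative placeholders only.
def sentiment_arc(text, n_windows=20):
    """Split the text into roughly n_windows chunks and score each one."""
    valence = {"love": 3, "happy": 2, "win": 2, "hope": 1,
               "fear": -2, "lose": -2, "death": -3, "cry": -2}
    words = text.lower().split()
    size = max(1, len(words) // n_windows)
    arc = []
    for i in range(0, len(words), size):
        window = words[i:i + size]
        score = sum(valence.get(w, 0) for w in window)
        arc.append(score / len(window))      # mean valence of the window
    return arc

if __name__ == "__main__":
    sample = ("they meet and fall in love , then face fear , loss and death , "
              "but hold on to hope and in the end they win")
    print(sentiment_arc(sample, n_windows=5))
```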

This tells us that the drama in a story is linked to an emotional journey we want the viewer to embark on, and that successful ads are those that can take viewers on that ride.

Now, the issue is: how do we measure this emotional arc effectively? We could, of course, ask the viewer, but that would interrupt the process and introduce an undesirable bias into the emotional process itself. We would also lose temporal granularity, as it is hard to collect declarative measures for each second of the ad.

We could also use facial coding, but we are finding that producing facial expressions while watching ads is not the norm, especially when viewers are alone (do you do that?). In fact, a generally accepted view is that facial expressions evolved as a set of tools for effective social communication among humans, signaling emotionally charged situations and themes that were recurrent in our evolutionary past. How emotionally charged are you when watching ads? And who are you communicating with?

So we resorted to our preferred methods: peripheral physiological indexes of emotional activation, namely electrodermal activity (a.k.a. galvanic skin response) and heart rate. We showed them that this was the right way to measure the viewer's emotional journey and link it to emotional events.
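
To make the idea concrete, here is a minimal toy sketch of how a raw skin-conductance signal could be turned into a second-by-second activation trace and averaged across viewers. The sampling rate, the simple moving-average tonic/phasic split, and the simulated panel are assumptions; this is not MindProber's actual processing pipeline.

```python
# Sketch: raw skin conductance -> per-second phasic activation -> group-level arc.
# All parameters and the simulated data are illustrative assumptions.
import numpy as np

FS = 4  # assumed skin-conductance sampling rate (Hz)

def phasic_activation(sc, win_s=4.0):
    """Subtract a moving-average 'tonic' level and keep the faster phasic rises."""
    win = int(win_s * FS)
    tonic = np.convolve(sc, np.ones(win) / win, mode="same")
    return np.clip(sc - tonic, 0.0, None)

def per_second_index(sc):
    """Collapse the phasic signal into one activation value per second of the ad."""
    phasic = phasic_activation(np.asarray(sc, dtype=float))
    n_sec = len(phasic) // FS
    return phasic[: n_sec * FS].reshape(n_sec, FS).mean(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # 55 simulated viewers watching a 30-second ad (slowly drifting signals)
    panel = rng.gamma(2.0, 1.0, size=(55, 30 * FS)).cumsum(axis=1) / 100.0
    arcs = np.array([per_second_index(v) for v in panel])
    print(arcs.mean(axis=0).round(3))      # group-level, second-by-second arc
```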

Live Study 

The reason why we appreciate the people from Unilever PT so much is that they were the first to trust MindProber, and in fact, these results are from the first live study we ever did.

What we did back then was to record the physiological activation of a relatively small group of participants (55) in a room (still not in their homes at the time) while they watched ads from Unilever brands and from direct competitors, and to ask them afterwards about ad likability and memory impact. We also asked participants to signal, through our mobile app, the moments they especially liked or disliked.

In addition to getting a useful benchmark of emotional impact, Unilever PT could see the emotional arc of their content, understand whether it matched what they had intended on the drawing board, and relate it to participants' memory associations.

Our favorite way of looking at the data is as follows (a rough code sketch of steps 1-4 appears after the list):

  1. Understand how impactful the initial seconds are by looking at activation rise time (this is especially important on digital media, where you want to avoid the skip button),
  2. Look at the moments where you intended to produce impact and analyze whether that happened across segments,
  3. Search for other points of interest (e.g., moments with unexpected peaks, or where participants express significant liking or disliking) and try to figure out their drivers,
  4. Compare overall activation with benchmarks, and
  5. Analyze post-visualization surveys and relate them to the activation arc.
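
As a rough illustration of steps 1-4 on a group-level activation arc (one value per second of the ad), here is a minimal sketch. The thresholds, the benchmark value, and the function names are hypothetical and simplified, not our production analysis.

```python
# Sketch: rise time, planned-impact check, unexpected peaks, benchmark comparison.
# Thresholds, the benchmark value, and the fake arc are illustrative assumptions.
import numpy as np

def rise_time(arc, frac=0.8):
    """Step 1: seconds until activation first reaches `frac` of its peak."""
    return int(np.argmax(arc >= frac * arc.max()))

def impact_at(arc, planned_s, min_z=1.0):
    """Step 2: did the planned key moments stand out from the rest of the ad?"""
    z = (arc - arc.mean()) / arc.std()
    return {s: bool(z[s] >= min_z) for s in planned_s}

def unexpected_peaks(arc, planned_s, min_z=1.5):
    """Step 3: high-arousal seconds that were not planned and deserve a closer look."""
    z = (arc - arc.mean()) / arc.std()
    return [int(s) for s in np.where(z >= min_z)[0] if s not in planned_s]

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    arc = rng.normal(0.20, 0.05, size=30)       # 30-second group-level arc
    arc[[4, 12, 24]] += 0.30                    # three arousal peaks
    planned = [12, 24]                          # moments the creative intended to hit
    print("rise time (s):", rise_time(arc))
    print("planned impact:", impact_at(arc, planned))
    print("unexpected peaks (s):", unexpected_peaks(arc, planned))
    print("mean activation vs. benchmark 0.20:", round(float(arc.mean()), 3))   # step 4
```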

Doing this provided some important insights, which unfortunately I can't tell you much about. One thing we can disclose is that, although the ads were producing impact where intended (especially via visual load), a couple produced unforeseen memory associations at high-arousal moments that warranted some caution (things we didn't want people to associate with the ad). We were also able to show that long-format ads were much more effective at involving customers, especially when the narrative is good and impactful.
