FAKE NEWS OR TRUE LIES? REFLECTIONS ON PROBLEMATIC CONTENT

Scholars in different scientific fields, as well as practitioners, are analyzing the growing production and diffusion of fake news and problematic information that is rapidly contaminating the digital world.

Fake news, defined as “news articles that are intentionally and verifiably false, and could mislead readers”, has only recently gained scholarly attention, predominantly in journalism, psychology and political science. Far less empirical work exists in the marketing and consumer behaviour literature, with a few recent exceptions. Fake news represents only one facet of the ongoing crisis of problematic information, that is, information that is inaccurate, misleading, inappropriately attributed, or altogether fabricated. Problematic information also includes hoaxes, conspiracy theories, propaganda, and true specialist information presented in distorted ways to support one’s viewpoint (our “true lies”). Conspiracy theories about vaccines, coronavirus and palm oil are only the most recent examples of true lies, and of how such content can cause harm and have a strongly negative impact on consumers, companies and democracy at large.

All these concepts describe the inaccuracy of media content while taking on different shades of meaning. Such differences may seem small, but they matter for a thorough understanding of the issue. The different shades of disinformation appear to lie along a continuum of truthfulness and intent. At one extreme is disinformation created entirely without a factual basis (i.e. fake news), yet able to amplify and reinforce pre-existing beliefs. At the other extreme is disinformation rooted in a truthful reality but distorted to the point that the core facts are no longer factual (i.e. conspiracy theories). The scientific literature still lacks a convincing explanation, from a marketing point of view, of the determinants and consequences of creating and sharing problematic content on social media. Relatedly, extensive interaction with practice might shed light on the issue.

To understand the relevance of the problem for brands, we sketch three real illustrative cases. First, “Pepsi Co. stock fell around 4% just prior to the 2016 US presidential election when a fake news story about Pepsi’s CEO, Indra Nooyi, telling Trump supporters to ‘take their business elsewhere’ spread in social media”. This is a case in which fake news directly affects a brand. Second, in the case of New Balance, a “fake news spreader misquoted the New Balance spokesman and repackaged the message with the headline ‘New Balance offers a wholesale endorsement of the Trump revolution’”, causing anti-Trump groups to burn New Balance shoes and share the videos online. This is a case in which fake news has an indirect impact, negatively affecting the brand image. Third, Cova and D’Antone (2016) illustrate consumers’ contrasting reactions to a hoax, rooted in a real point raised by Greenpeace Italy in 2008, about the negative effects of palm oil, an ingredient of the iconic Nutella brand. Drawing on consumers’ strong attachment to the brand, some of them “co-created and spread discourses that give any Nutella lover the possibility to relinquishing the new tension and support the idea that the brand should be kept as it is. As such, they ultimately reinforce the overall devotion to the brand”. In this case, a negative hoax generated additional brand content for Nutella, showing an unexpected positive branding effect by boosting the brand’s mythology.

These examples clearly show how disinformation can greatly undermine brand equity, especially when consumers collectively exhibit brand-dissociative behaviours after being exposed to fake news. However, they also suggest that the topic deserves attention, as companies can turn a possible threat into an advantage by keeping primary control of their marketing agenda and avoiding ceding it to outsiders.

Two important and unexpected political outcomes encouraged the proliferation of academic interest in the possible impact of misinformation after 2016: the US Presidential Election and the Brexit Referendum. One of the main drivers of problematic information sharing on social media is confirmation bias, i.e. individuals’ tendency to select only information consistent with their vision of the world. Some people choose their personal truth or rely on their own authorities, preferring to hold on to inaccurate beliefs. Consequently, people may keep sharing problematic information even when it is known to be false, as they “care more about the point of view being advocated than the legitimacy of the content”. Information technology and computer science research subsequently proposed new techniques to automatically detect misinformation on social media. Since fake news, as we define it today, initially targeted politicians and political organizations, marketing interest in the phenomenon came later, when some multinational companies faced a boycott wave after falling victim to fake news.

To date, few marketing studies focus on the negative effects of social media misinformation on brands. Recent research has evaluated the possible consequences of fake news for brands, proposing different response strategies for companies. Some authors suggest the need to provide tools that improve fact-checking, assuming that individuals might change their minds when confronted with the evidence of facts. Other scholars focus on possible cues that support sharing behaviour, such as media trust, self-efficacy, and the amount and convergence of the information available. In an attempt to determine the effect of coupling fake news with a brand ad, scholars found that self-perceived efficacy in detecting deception does not affect the formation of attitudes toward the brand.

New forms of misleading content, potentially more dangerous than other forms of problematic information, have started spreading on social media: so-called “cheap fakes” and “deep fakes”. Cheap fakes employ a simple “selective editing” technique to alter videos so that they no longer show what really happened. Deep fakes, instead, use artificial intelligence to create entirely fictional videos, images or audio. To date, these new techniques are used predominantly in politics, to discredit politicians or political organizations. What is the level of individuals’ susceptibility to these new techniques? And what are the effects in terms of attitudes and behaviour?

Finally, it is impossible not to mention the opportunity for research in the wake of the Coronavirus emergency. Unfortunately, COVID-19 has triggered a massive spread of disinformation. For example, fake news linking the spread of the virus to the development of 5G technology caused the vandalization of many cell phone masts in the UK and physical attacks on telecoms engineers, and threatened the reputation of specific mobile communications (e.g. Vodafone) and technology (e.g. Huawei) companies. Many conspiracy theories – centred on the claim that the virus is a bioweapon created in Wuhan – are creating a climate of distrust in which the public treats official sources of information with growing scepticism. For this reason, traditional media outlets are now facing severe brand reputation problems, especially along the dimensions of trustworthiness and credibility. Unlike in previous outbreaks, the spread of disinformation about COVID-19 has been dramatically amplified by social media, to the extent that “We’re not just fighting an epidemic; we’re fighting an infodemic”, said World Health Organization (WHO) Director-General Dr Tedros Ghebreyesus. Social media platforms, as well as Google and the WHO, have taken action to fight the infodemic, intensifying collaborations with fact-checking organizations and promoting the sharing of reliable health information from acknowledged experts, in an attempt to alleviate the risk of a strongly negative impact on people’s trust in scientific data. However, given the overwhelming amount of information that flows through digital environments, and the fast “rise and decline” rates of trending topics on social media timelines, empowering fact-checking organizations might not be sufficient.
Accordingly, social media platforms, traditional media and institutions should adopt a more “human-focussed” approach, instilling in people the need to spend more time and cognitive effort comparing various legitimate sources before accepting information as true.

Copyright © OBSERVERTIMES GLOBAL NEWSNETWORK PRIVATE LIMITED reserves the rights to all content contained within its official website www.observertimes.in / Online Magazine / Publications.
