
Budweiser: A measure of success



Every year, the Super Bowl kicks off a side bet that captivates consumers and the press. Tapping into the spectacle of the National Football League's biggest game of the year -- and a nationwide audience approaching 100 million people -- marketers put millions of dollars on the table, vying for the title of best ad aired during the game.

Super Bowl LV in February 2021 was no exception. While the Tampa Bay Buccaneers soundly defeated the Kansas City Chiefs 31-9, the game drew the lowest broadcast audience since 2006 as fans shifted to streaming platforms. Still, 56 brands dropped a cool $5.5 million each on slick 30-second productions designed to garner attention for their products, services and causes.


Cheetos came ready to play, with a cheeky spot about celebrities sneaking snack foods from their loved ones. M&M's offered consumers a sneak peek at its ad featuring former NFL head coach and racing team leader Joe Gibbs. Bud Light Seltzer sought to turn lemons into lemonade, while Super Bowl ad rookie Chipotle asked the existential question, "Can a Burrito change the world?"


But for the first time ever, the winner of the Super Bowl ad contest wasn’t an ad at all.


It was publicity.


If the measure of advertising success involves capturing consumer attention, Budweiser won by a landslide. Instead of paying millions of dollars for air time and ad production, though, Budweiser brand managers called an audible. Executing a pure publicity play, the company issued a press release announcing that its annual Super Bowl marketing budget would be diverted instead to ads supporting COVID vaccine awareness.


It worked. Even after leaving the Clydesdales in the stable for the first time in 37 years, attention to the Budweiser brand rose by nearly 35% during the Super Bowl news cycle, according to YouGov BrandIndex tracking data. That increase in attention was significantly larger than the lift for any other brand considered to have aired a top-rated ad during the game.


Companies have long known the value of integrated marketing communication. National ad campaigns are routinely launched with press releases by marketers hoping to catch that added boost of attention generated from news coverage. Intuitively, that tactic makes sense. News is consumed actively. Ads are consumed passively. Quite simply, news catches people's attention.


But how does PR measure the reach and resonance of news? And by extension, how does PR tease out the contribution of news to brand perceptions and organizational outcomes, including purchase consideration and sales for companies, membership and donations for non-profits, or support and votes for candidates?


In this blog post, we will tackle the first question, examining traditional approaches to measuring news. In the next blog, A model of success, we will take on the second question, considering news as one of a number of factors influencing brand perceptions and impacting organizational outcomes.


What's the agenda?

Let's start, then, with traditional media measurement.


In the late 1960s, two Chapel Hill, N.C., professors -- Maxwell McCombs and Donald Shaw -- took the budding field of public relations research to another level. They found a virtually perfect correlation (r = 0.967, for those of us keeping score) between the issues 100 residents of that small college town considered most important in the 1968 presidential campaign and the issues covered in the press.


If the mass media sets the agenda for the public, so the thinking goes, then published news is a proxy for public opinion. Agenda-setting theory was born, and by extension a framework for measuring the importance of public relations and the success of PR campaigns. Agenda-setting theory created a foundation for PR research that has been in place since the days of three television networks and two metro newspapers per city.


Setting aside for the moment any concerns about mistaking correlation for causation -- or about relying on a survey sample of 100 people -- the assumptions behind agenda-setting theory demonstrated the power of the press. But the theory also provided PR with a compelling argument for demonstrating success. Simply by measuring the volume and tonality of published news, companies began to use content data to gauge media reach, brand perceptions, and business impacts. In the years since, clip books have given way to sizzle reels and automated online dashboards, but the underlying assumptions remain the same.


Let's think about that for a minute. Based solely on the volume of news published about a company, PR could claim success in generating content that reached consumers. Taken further, the tonality of the news coverage became a proxy for the influence of that news on brand perceptions. By comparing the volume of published news against the volume of competitor news, share of voice became a standard yardstick of PR effectiveness. And quantifying news coverage by the cost of buying an equivalent ad yielded ad value equivalence, a way to put a dollar value on publicity.
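To make those yardsticks concrete, here is a minimal sketch of how volume, share of voice and net tonality are typically computed from clip counts. The brand names and figures are made up for illustration; real programs draw these counts from monitoring tools or coded clip files.

```python
# A minimal sketch, with made-up clip counts, of the traditional
# yardsticks described above: volume, share of voice and net tonality.
mentions = {"Brand A": 1200, "Brand B": 800, "Brand C": 500}   # clips per brand
tonality = {"positive": 700, "neutral": 400, "negative": 100}  # Brand A clips by tone

total_volume = sum(mentions.values())
share_of_voice = mentions["Brand A"] / total_volume             # 1200 / 2500
net_tonality = ((tonality["positive"] - tonality["negative"])
                / sum(tonality.values()))                       # (700 - 100) / 1200

print(f"Share of voice: {share_of_voice:.0%}")  # 48%
print(f"Net tonality:   {net_tonality:.0%}")    # 50%
```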


While there is a longstanding debate over ad value equivalence, virtually every company, cause and campaign relies on volume and tonality as baseline metrics of PR effectiveness. Nearly all organizations also use share of voice as a comparative measure of success. And highly refined approaches have been developed to assess the prominence of coverage and surface notable themes in news and social media content.


That all seems fairly straightforward. What's the rub?


Well, for one thing, news volume has always tended to overstate actual exposure to news. Content analysis assumes that every story in every newspaper is read by every subscriber, and that every segment of the nightly newscast is seen by every viewer. Remember the Krispy Kreme vaccine jab campaign? The news generated six billion media impressions. Six billion impressions. But there are only roughly 250 million U.S. adults, so the content metrics would imply that each of us heard the Krispy Kreme news about two dozen times. Unlikely.


Measuring volume has become even more challenging in the digital age. In 2019, global PR firm MSL, a unit of Publicis, conducted a straightforward analysis. The firm's research team used five leading social media listening tools to determine the volume of online posts mentioning India Pale Ale, or IPA. How? By developing what's known as a Boolean search string, asking each listening tool to find any IPA mentions -- in news and social media chatter alike -- within seven words of "craft beer," in English, anywhere in global social media, posted between August 12 and 18, 2019.
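The exact search string isn't reproduced in this post, but the proximity logic it describes is easy to sketch. Below is a rough Python illustration (the function and sample posts are hypothetical) of what each tool was asked to do: flag any post where "India Pale Ale" or "IPA" appears within seven words of "craft beer." Commercial platforms express the same idea with proprietary proximity operators, and the syntax varies from tool to tool.

```python
import re

# A rough sketch of the proximity logic behind the MSL test query:
# flag a post if "India Pale Ale" (or "IPA") appears within seven
# words of "craft beer". Listening platforms express this with
# proximity operators whose syntax varies; this is illustrative only.

def mentions_ipa_near_craft_beer(post: str, window: int = 7) -> bool:
    words = re.findall(r"[a-z']+", post.lower())

    def positions(phrase: str) -> list:
        """Word indexes where the phrase starts."""
        target = phrase.split()
        return [i for i in range(len(words) - len(target) + 1)
                if words[i:i + len(target)] == target]

    ipa_hits = positions("india pale ale") + positions("ipa")
    beer_hits = positions("craft beer")
    return any(abs(i - j) <= window for i in ipa_hits for j in beer_hits)

posts = [
    "Trying a hazy India Pale Ale from my favorite craft beer bar",
    "Cold IPA season is officially here",  # no "craft beer" nearby
]
print([mentions_ipa_near_craft_beer(p) for p in posts])  # [True, False]
```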


The results? As you can see below, one of the leading listening tools returned about 1,700 online mentions during the test week. Another found fewer than 600. The rest fell somewhere in between. Why the difference? Mainly, each tool's reliance on Twitter posts. The volume metrics from all of the social listening platforms, with the exception of YouScan, were heavily dependent on their access to the Twitter firehose.



Keep in mind, volume is only one dimension of content analysis. Another dimension is the tonality or sentiment of the news. Basically, sentiment scoring is used to designate content as positive, negative or neutral in its portrayal of a company, cause or campaign. Like volume metrics, sentiment scoring also faces reliability challenges. "Best-in-class" natural language processing claims to generate data with 80% reliability. That's a problem for businesses with expectations of 95% confidence intervals.
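To see what that three-way labeling involves, here is a toy, lexicon-based sketch of sentiment scoring. The word lists and posts are invented for illustration; commercial tools use far more sophisticated language models, but the output -- a positive, negative or neutral label per post -- and the reliability questions are the same.

```python
# A toy lexicon-based sketch of sentiment scoring: label each post
# positive, negative or neutral by counting cue words. Word lists
# and posts are illustrative only.
POSITIVE = {"love", "great", "delicious", "best", "favorite"}
NEGATIVE = {"hate", "awful", "skunky", "worst", "flat"}

def score_sentiment(post: str) -> str:
    words = set(post.lower().split())
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

posts = [
    "Love this hazy IPA, best craft beer in town",
    "Skunky and warm, worst IPA I've had",
    "Picked up a six-pack of IPA today",
]
print([score_sentiment(p) for p in posts])
# ['positive', 'negative', 'neutral']
```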


Let's take a closer look at an example of tonality data. Using the same content captured by the Boolean search, the MSL research team had each of the five listening tools gauge the tonality using their proprietary systems. The results? As you can see below, two of the social media listening tools -- NetBase and YouScan -- assessed the posts as largely neutral. The other three returned results with a significant number of positive mentions. Only one, Sysomos, considered more than 10% of the posts to be negative.


While the social media listening platforms rely on machine learning and artificial intelligence to gauge the tonality of published content, human coders face similar challenges in generating reliable tonality scores. A few years back, PR researchers gathered in Barcelona to launch a push to standardize PR measurement. The resulting Barcelona Principles sought to resolve the data reliability issues in content metrics by standardizing measurement approaches. By getting everyone on the same page, the researchers reasoned, PR research could effectively transform unreliable data into reliable data. Voila.


Separately, researchers backed by the Institute for Public Relations twice tried to crack the code on tonality, with mixed results. In the first study, three professional coders were asked to record basic information about a sample of news stories, including whether the tone of the story was positive, negative, neutral or balanced.


"The research results yielded low to moderate inter-coder reliability," the researchers reported. In laymen's terms, it didn't work.


A few years later, they tried again, this time with more experienced coders, a refined code frame, and a renewed push to demonstrate the viability of the measurement standards. That study cleared the bar. But just barely. Why? Because tonality is simply not a discrete metric, mathematically speaking. It is qualitative, not quantitative. Tonality cannot be counted.
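For readers wondering what "inter-coder reliability" looks like in practice, here is a minimal sketch using Cohen's kappa, one commonly used agreement statistic (this post does not specify which statistic the IPR studies used). Two coders label the same ten stories, and kappa discounts the agreement they would reach by chance alone.

```python
# A minimal sketch of inter-coder reliability using Cohen's kappa,
# one common agreement statistic. The labels below are invented.
from sklearn.metrics import cohen_kappa_score

coder_a = ["positive", "neutral", "negative", "neutral", "positive",
           "neutral", "neutral", "positive", "negative", "neutral"]
coder_b = ["positive", "neutral", "neutral",  "neutral", "positive",
           "positive", "neutral", "positive", "negative", "negative"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")
# Agreement below roughly 0.8 is often treated as too unreliable
# for business decision-making.
```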


And the PR industry is well aware of the measurement challenges. According to a Muck Rack survey completed in May 2021, nearly half of PR professionals say existing measurement tools cannot gauge business impacts of their work. In that same survey, four in ten PR pros actually consider a lack of quantifiable measurement to be one of the biggest challenges facing the field. The survey also confirmed a success bias that filters into PR research. "PR pros see sourcing more coverage and tying PR activity to business outcomes as the top ways to increase value at their organizations," Muck Rack said.


Back to the game

Measurement challenges aside, there is no question that news moves the needle with consumers -- and that it is a critical variable organizations need to measure in order to understand media impacts on the business. This blog started out talking about the Super Bowl. So let's get back to the game.


With the growth in social media, Super Bowl ad campaigns now typically begin a full two weeks before kick-off. On the Monday following the conference championship games, marketers begin issuing press releases to tease their 30-second spots, often steering people toward the ad footage posted online.


The press releases, in turn, trigger a news cycle in the traditional media and an echo in online and social media. And the Super Bowl ad competition does not end with a post-game trophy presentation. For the next week, the press opines on which ad was best. Bragging rights -- and multi-million dollar marketing budgets -- are at stake.


Northwestern University's Kellogg School of Management is one of the most prominent arbiters of the Super Bowl ad contest. Every year, business school students gather to watch the game and rate the ads, using the school's ADPLAN framework to assess the strategic effectiveness of the spots. The acronym stands for Attention, Distinction, Positioning, Linkage, Amplification and Net Equity.


Based on that industry best-practice -- but largely qualitative -- framework, the top ad aired during the game came from Cheetos, according to a press release publicizing the results. Other ads receiving top marks included 30-second spots from Amazon, Bud Light Seltzer, Doritos, Indeed, Reddit, Tide, M&M's and Chipotle.


Judged on the only readily quantifiable variable in that marketing framework -- Attention -- the clear winner was Budweiser, by a statistically significant margin. In the weeks leading up to and following the game, attention to the Budweiser brand peaked 34% above pre-game levels.
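The lift calculation behind that figure is a straightforward percent change against a pre-game baseline. The index values in this sketch are hypothetical, not the actual YouGov BrandIndex numbers.

```python
# Percent lift in a brand-attention index: peak during the news cycle
# versus the pre-game baseline. Values are hypothetical, not the
# actual YouGov BrandIndex data.
pre_game_baseline = 20.0   # average attention score before the news cycle
peak_attention = 26.8      # peak attention score during the news cycle

lift = (peak_attention - pre_game_baseline) / pre_game_baseline
print(f"Attention lift: {lift:.0%}")  # 34%
```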


Issuing a press release is functional. Creating buzz from the publicity is tactical. But aligning buzz to shifts in reputation and behaviors? That's strategic. How can we measure that? Stay tuned.



Budweiser won the 2021 Super Bowl ad contest by issuing a press release.


From your perspective, how do publicity and advertising -- earned and paid media -- differ in terms of reaching and influencing people?


Respond in the blog below.




