
Users – And Advertisers – Want Social Media to Be Perfect

Social media has been a difficult environment for marketers since the first ad was served there. But as more people and more marketers use social media, the drawbacks of the platforms have become more apparent and more complex. Leading platforms have struggled with data privacy and the rise of fake news… none more notoriously than Facebook, although YouTube, Instagram and Twitter struggle as well. Perhaps the biggest issue with today’s social media marketing is that advertisers want an environment that is “safe”—i.e., “perfect” for their brands and campaigns. It’s becoming plainer every day that digital marketing can’t deliver on brand safety. Perfection is simply not available, for advertisers or for individual users… and not solely because individual users are difficult to control.

A More Perfect Facebook…

The backlash over the Cambridge Analytica scandal, involving third-party misuse of Facebook data, and over the volume of foreign advertising and influence operations served via the platform during the 2016 U.S. Presidential campaign, forced Facebook founder Mark Zuckerberg to defend the platform before the U.S. Congress. It also led the British government to levy the maximum allowable fine on the company for breaches of the Data Protection Act. Later revelations showed that Facebook elected to cover up the problems and even push back against critics with negative, fake-news-style efforts of its own.

The realization of how social media was manipulated for political purposes in 2016 prompted other platforms to launch initiatives to deter political manipulation. Platforms now regularly purge fraudulent accounts and fake followers used to amplify fake news. Facebook has tightened its internal guidelines for political ads and news on the platform. That rollout didn’t go smoothly; initially the company lumped promoted news content together with paid political ads. A reconfigured policy, rolled out in July 2018, separates news from paid political marketing.

Then Facebook came under fire for tagging many LGBT ads as “political” when they were simply business ads aimed at the LGBT Facebook community. Facebook’s human monitors failed to catch that commercial ads using common LGBT keywords had been flagged as “political topics” by the more stringent algorithms intended to reduce content not properly identified as political or activist.

Keeping Advertisers Uninformed

Social media companies frequently manage to irritate entire communities that rely on their platforms to communicate, share and socialize. It doesn’t help Facebook’s case that many “hate” groups are allowed to operate on the platform without sanction or restriction. Advertisers had been directly targeting ads to Neo-Nazis and white supremacist groups using Holocaust-related names, keywords and phrases, forcing Facebook to announce in March 2019 that it would take down posts supporting white nationalism and white separatism on Facebook and Instagram. It had already prohibited white supremacist content. It remains to be seen whether the action will have a positive impact on reducing content from hate groups.

The platform is also under fire for promoting vaccine misinformation, potentially contributing to new outbreaks of diseases believed to have been eradicated. (Facebook banned ads promoting misinformation, but anti-vax content is still being posted and shared.) Just last month, Facebook settled a lawsuit in which the Department of Housing and Urban Development (HUD) charged that Facebook was allowing ads to be targeted by ethnic and religious group, and by other potentially discriminatory tags, in violation of the Fair Housing Act. The platform says it removed 5,000 targeting options related to the settlement.

None of this helps advertisers coping with a politicized social media environment. Facebook tends to set rules and parameters meant to help advertisers steer around potential problems under its tightened oversight, but it fails to share those workarounds with advertisers.

“How are you supposed to stay on the right side of the policy if you don't know what constitutes a violation?” asked Jon Fingas at Engadget.

Congress has expressed interest in applying the same disclosure rules governing other political advertising to online ads. Facebook and Twitter are both said to support that legislation, but how (or when) it will be enacted remains to be seen.

Advertisers Want Better Accountability

Do advertisers and users expect too much from social media giants? And are marketers worrying about the right issues?

Beyond concerns about data privacy and fake news, Facebook is facing advertiser ire (including a lawsuit by digital marketers) over revelations that the platform reported inflated ad metrics through 2016 (some erred by as much as 900%)… and continued to do so for nearly a year after internal checks discovered the error. The inflated numbers led ad buyers to purchase more ads, at premium prices, than they would have had they seen the true metrics. Facebook’s legal team employed a “fake news” response, claiming that information from internal Facebook documents was taken out of context and “cherry-picked,” and anyway, the problem was corrected… with no comment on how the lack of transparency cost paying customers a lot of extra dollars.

Remember when traditional media would offer “make-goods” if ads failed to run as booked, or an ad placement was missed? Should social media campaigns receive the same courtesy? We can only dream…

Facebook is not alone, of course. Google’s YouTube platform was hit with a major brand boycott when it became known that innocuous videos featuring children had comment threads targeted by pedophiles. YouTube reported that it had removed offensive comments and deleted offending accounts and channels. But, said eMarketer video analyst Paul Verna, the platform’s scope makes monitoring and preventing these sorts of “unsafe” incursions “like a game of whack-a-mole.” The recent viral sharing of the Christchurch massacre video, and the slow response of social media platforms in shutting it down, illustrate both the difficulty of preventive monitoring and the lack of nuance in machine-learning algorithms, which remain ill-equipped to distinguish offensive from newsworthy content.

Funny… image and idea-sharing platform Pinterest seems to be handling fake news and misinformation with clarity and purpose. Leadership commitment, it seems, matters. But scale also has much to do with the problems facing the major social networks; purveyors of problematic content prefer “mass” channels, and the very reach of those channels makes it tricky to block that content effectively.

Google and Facebook together account for nearly 60 percent of all digital ad spending; digital ad spending is predicted to increase to 54 percent of total U.S. ad spending in 2019. That pretty much guarantees brands will keep using troublesome channels.

Little Will to Change

Facebook derives 98% of its annual revenue from ad dollars, Reuters reported in 2017, with 91% of that coming from mobile ads. Although users and regulators see the platform as having broken trust with its users, the simple truth is that Facebook has little motivation to be more transparent about ad effectiveness, or to give users more control over the ads they are willing to see. Increased regulatory oversight seems to put the company’s leadership in a defensive (and sometimes offensive) posture, rather than prompting it to alter damaging policies. And the company’s internal culture contributes to the slow revision of those policies.

Fast Company published an article about the “rah-rah” efforts of Facebook’s in-house poster makers, the Analog Research Lab. ARL is supposed to be employee-led, but it has often promoted Zuckerberg/Sandberg talking points, functioning more as internal propaganda than as employee-driven motivation. Employees have begun to push back against the almost cult-like work environment; one former Facebook employee told CNBC that the constant pressure to act happy in order to advance is soul-killing: “It is not OK to act like this is not the best place to work.”

Employee fears that speaking up could hurt career opportunities further suppress the impulse to criticize leadership decisions. Facebook uses a performance evaluation process called stack ranking that has fallen from favor elsewhere—most notably at Microsoft, which tried the system but ditched it after it became less objective over time and damaged employee trust and internal politics. Former Facebook managers reported pressure to meet stack-ranking quotas rather than to grade employees honestly, and employees began to stress over receiving a single lower ranking that could follow them for their entire term of employment. Final grades, given at year-end, create a panicked rush to deliver on short-term goals and push user engagement features—not a healthy environment for stopping to ask, “Should we really be doing this? What are the potential unintended consequences?” [Read “Do We Need to Revive Design Ethics?” in the Summer 2018 Second Wind Magazine.]

Discouraging employees from expressing dissent to leadership decisions doesn’t sound like a great way to bring about change or more ethical management of the platform. Echo chambers also are notoriously awful places to have difficult conversations about design ethics, moral choices and user experience. 

Meanwhile, Advertisers Are Paying More to Reach Facebook Users

As if the lack of transparency and the plodding response to problems surrounding advertising on the platform were not enough to deter marketers, “Facebook's share of digital ad revenues far exceeds its share of digital media time,” reported eMarketer. Their analysis also suggested that fraudulent users greatly reduce the “real” Facebook audience, from Facebook’s estimated 2.32 billion users to a far lower 1.64 billion. That’s still a lot of people, but given the lack of accountability and the rising cost of reaching an individual, how much longer will Facebook’s digital ad dominance continue? At some point (one hopes), digital marketers will tire of the swindle of Facebook ad serving and shift dollars elsewhere. Some of that ad spending may go to more trusted digital media, and some may even return to traditional media buys.

Getting Real About Social Media

While digital media advocates argue that the scale of the social media giants’ operations now makes rapid change difficult to impossible, that argument ignores the minimal will to make changes. As long as Facebook, Twitter, YouTube and other platforms rely on ad serving as their biggest revenue source, real change in how those platforms operate—and serve their advertisers—is unlikely. The major platforms now so dominate the Internet, and life in general for millions of people, that negative reactions to individual scandals seem too small to force changes. Those scandals could build over time to finally steer users away from social media… and ad dollars follow where users go.

Expecting perfection from channels so inherently imperfect is unrealistic. If marketers want to really “get real,” maybe they should look harder at the real data on social media marketing. Entrusting ever more ad dollars to digital companies intent on maintaining their own business models at any cost is a sucker’s bet. Media planners, start your planning engines…
