
Social Media Fuels Division and Angst – But Solving the Underlying Issues at Play is Hugely Complex

Published Jan. 9, 2022

By Andrew Hutchinson, Content and Social Media Manager

Despite various studies and counter-studies, many of them funded by the networks themselves, social media remains a hugely problematic vehicle for divisive messaging and harmful movements.

But its influence is often misunderstood, or elements are conflated to obscure the facts, for varying reasons. The real influence of social media is not primarily a matter of algorithms or amplification. The most significant harm comes from connection itself - the capacity to plug into the thoughts of people you know, something that wasn't possible in times past.

Here’s an example - let’s say you’re fully vaccinated against COVID, you fully trust the science, and you’re doing what health officials have advised, no problems, no concerns about the process. But then you see a post from your old friend - let’s call him ‘Dave’ - in which Dave expresses his concerns about the vaccine, and why he’s hesitant to get it.

You may not have spoken to Dave for years, but you like him, and you respect his opinion. Suddenly, this isn't a faceless, nameless activist that you can easily dismiss - this is somebody that you know, and it makes you question whether there may be more to the anti-vax push than you thought. Dave never seemed stupid or gullible - maybe you should look into it some more.

So you do - you read the links posted by Dave, you check out posts and articles, maybe you even browse a few groups to try to better understand. Maybe you start posting comments on anti-vax articles too. All of this tells Facebook's algorithms that you're interested in the topic, and that you're increasingly likely to engage with similar posts. The recommendations in your feed begin to change, you become more involved with the topic, and all of this drives you further to one side of the argument or the other, fueling division.
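To make that dynamic concrete, here's a toy sketch of how engagement signals could shift a feed. Every weight, action name and ranking function here is invented purely for illustration - Facebook's actual recommendation systems are proprietary and vastly more complex:

```python
from collections import defaultdict

# Hypothetical weights for different engagement actions (illustrative only).
ACTION_WEIGHTS = {"click": 1.0, "comment": 3.0, "join_group": 5.0}

def update_interests(interests, action, topic):
    """Accumulate a per-topic interest score from a user action."""
    interests[topic] += ACTION_WEIGHTS.get(action, 0.5)

def rank_feed(posts, interests):
    """Order candidate posts by the viewer's accumulated topic interest."""
    return sorted(posts, key=lambda p: interests.get(p["topic"], 0.0), reverse=True)

interests = defaultdict(float)

# The 'Dave' scenario: a couple of clicks, a comment, a group joined...
for action in ["click", "click", "comment", "join_group"]:
    update_interests(interests, action, "vaccine_skepticism")

posts = [
    {"id": 1, "topic": "sports"},
    {"id": 2, "topic": "vaccine_skepticism"},
    {"id": 3, "topic": "cooking"},
]

# ...and the engaged-with topic now ranks first in the feed.
print([p["id"] for p in rank_feed(posts, interests)])  # [2, 1, 3]
```

The point of the sketch is that no editorial decision is needed anywhere - a handful of ordinary engagement signals is enough to reshape what you see next.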

But it didn’t start with the algorithm, which is a core rebuttal in Meta’s counter-arguments. It started with Dave, somebody who you know, who posted an opinion that sparked your interest.

Which is why broader campaigns to manipulate public opinion are such a concern. The disruption campaigns orchestrated by Russia’s Internet Research Agency in the lead-up to the 2016 US election are the most public example, but similar pushes are happening all the time. Last week, reports surfaced that the Indian Government has been using bot-fueled, brute-force campaigns on social to ‘flood the zone’ and shift public debate on certain topics by getting alternative subjects to trend on Facebook and Twitter. Many NFT and crypto projects are now seeking to cash in on the broader hype by using Twitter bots to make their offerings seem more popular, and reputable, than they are.

(Image: scam bots)

Most people, of course, are now increasingly wary of such pushes, and will more readily question what they see online. But much like the classic Nigerian email scam, it only takes a very small number of people to take the bait for all that effort to be worth it. The labor costs are low, and the process can be largely automated. And just a few Daves can end up having a big impact on public discourse.
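A quick back-of-envelope calculation shows why the math works out for scammers. Every number below is invented purely to illustrate the shape of the economics:

```python
# Toy economics of an automated scam campaign - all figures are made up.
messages_sent = 1_000_000     # bot-distributed posts/DMs
cost_per_message = 0.0001     # near-zero marginal cost once automated ($)
conversion_rate = 0.00005     # only 1 in 20,000 recipients bites
revenue_per_victim = 500      # assumed average take per victim ($)

cost = messages_sent * cost_per_message
revenue = messages_sent * conversion_rate * revenue_per_victim
print(f"cost: ${cost:,.0f}, revenue: ${revenue:,.0f}, profit: ${revenue - cost:,.0f}")
# cost: $100, revenue: $25,000, profit: $24,900
```

Even with a conversion rate that rounds to zero, near-free distribution keeps the campaign profitable - which is why volume, not persuasiveness, is the strategy.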

The motivations for these campaigns are complex. In the case of the Indian Government, it’s about controlling public discourse, and quelling possible dissent, while for scammers it’s about money. There are many reasons why such pushes are enacted, but there’s no question that social media has provided a valuable, viable connector for these efforts.

But the counter-arguments are selective. Meta says that political content is only a small portion of the overall material shared on Facebook - which may be true, but that count only covers articles shared, not personal posts and group discussions. Meta also says that divisive content is actually bad for business because, as CEO Mark Zuckerberg explains:

"We make money from ads, and advertisers consistently tell us they don't want their ads next to harmful or angry content. And I don't know any tech company that sets out to build products that make people angry or depressed. The moral, business and product incentives all point in the opposite direction."

Yet, at the same time, Meta's own research has shown the power of Facebook in influencing public opinion, specifically in a political context.

Back in 2010, around 340,000 extra voters turned out for the US Congressional elections because of a single election-day message that Facebook placed at the top of users' feeds.

As per the study:

"About 611,000 users (1%) received an 'informational message' at the top of their news feeds, which encouraged them to vote, provided a link to information on local polling places and included a clickable 'I voted' button and a counter of Facebook users who had clicked it. About 60 million users (98%) received a 'social message', which included the same elements but also showed the profile pictures of up to six randomly selected Facebook friends who had clicked the 'I voted' button. The remaining 1% of users were assigned to a control group that received no message."

(Image: Facebook's election day message)

The results showed that those who saw the second message, with images of their connections included, were more likely to vote, with the peer nudge ultimately driving around 340,000 more people to the polls. And that's just on a small scale in Facebook terms - among 60 million users, with the platform now closing in on 3 billion monthly actives around the world.
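Taking the quoted figures at face value, the effect size is easy to sanity-check - and even a fraction-of-a-percent nudge becomes enormous at Facebook's current scale (a crude extrapolation, not a prediction):

```python
# Rough effect-size arithmetic from the figures quoted above.
social_message_users = 60_000_000   # saw the version with friends' faces
extra_voters = 340_000              # additional turnout attributed to the nudge

lift = extra_voters / social_message_users
print(f"Turnout lift: {lift:.2%}")  # ~0.57%

# Naively scaled to today's user base - a crude what-if, not a claim:
monthly_actives = 3_000_000_000
print(f"Same lift at current scale: {lift * monthly_actives:,.0f} people")  # ~17,000,000
```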

It’s clear, based on Facebook’s own evidence, that the platform does indeed hold significant influential power through peer insights and personal sharing.

So it's not Facebook specifically, nor the infamous News Feed algorithm, that's the key culprit in this process. It's people, and what people choose to share. Which is what Meta CEO Mark Zuckerberg has repeatedly pointed to:

"Yes, we have big disagreements, maybe more now than at any time in recent history. But part of that is because we're getting our issues out on the table - issues that for a long time weren't talked about. More people from more parts of our society have a voice than ever before, and it will take time to hear these voices and knit them together into a coherent narrative."

Contrary to the suggestion that it's causing more problems, Meta sees Facebook as a vehicle for real social change: through freedom of expression, we can reach a point of greater understanding, and providing a platform for all should, theoretically, ensure better representation and connection.

Which is true from an optimistic standpoint. But the capacity for bad actors to influence those shared opinions is equally significant, and theirs are just as often the thoughts being amplified among your network's connections.

So what can be done, beyond what Meta’s enforcement and moderation teams are already working on?

Well, probably not much. Detecting repeated text in posts would help, and platforms already do this in varying ways. Limiting sharing around certain topics might also have some impact. But really, the best way forward is what Meta is already doing: working to detect the originators of such campaigns, and removing the networks amplifying questionable content.
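For reference, here's one simple way repeated-text detection can work: comparing normalized word shingles with Jaccard similarity. This is a sketch only - production systems rely on scalable variants like MinHash and locality-sensitive hashing:

```python
import re

def shingles(text, n=3):
    """Break text into overlapping n-word sequences, ignoring case and punctuation."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Similarity of two shingle sets: 0.0 (disjoint) to 1.0 (identical)."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

post_a = "The election was stolen, share this before they delete it!"
post_b = "The election was STOLEN - share this before they delete it."
post_c = "Looking forward to the game this weekend with friends."

print(jaccard(shingles(post_a), shingles(post_b)))  # high -> likely coordinated copy
print(jaccard(shingles(post_a), shingles(post_c)))  # near zero -> unrelated
```

Trivial rewording survives this kind of check, which is part of why text matching alone can't stop coordinated campaigns - hence the focus on the networks behind them.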

Would removing the algorithm work?

Maybe. Whistleblower Frances Haugen has pointed to the News Feed algorithm, and its focus on fueling engagement above all else, as a key problem: the system is effectively designed to amplify content that incites argument.
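The concern is easy to illustrate with a toy engagement-weighted scorer. The weights below are invented - Meta's actual values aren't public - but the dynamic Haugen describes falls out naturally whenever comments and reactions count for more than likes:

```python
# Illustrative engagement-weighted scoring; weights are made up for this sketch.
WEIGHTS = {"like": 1, "reaction": 3, "comment": 10, "share": 15}

def engagement_score(post):
    """Sum weighted engagement counts - posts that provoke responses score highest."""
    return sum(WEIGHTS[k] * post.get(k, 0) for k in WEIGHTS)

calm_post = {"like": 200, "comment": 5, "share": 2}                       # widely liked, little debate
divisive_post = {"like": 40, "reaction": 60, "comment": 80, "share": 30}  # argument bait

print(engagement_score(calm_post))      # 280
print(engagement_score(divisive_post))  # 1470 - the argumentative post wins
```

Under any scheme like this, the post that starts a fight outranks the post that everyone quietly agrees with, even with far fewer likes.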

That's definitely problematic in some applications, but would it stop Dave from sharing his thoughts on an issue? No, it wouldn't. And at the same time, there's nothing to suggest that the Daves of the world are getting their information via questionable sources like those highlighted here. But social media platforms, and their algorithms, facilitate both - they enhance these processes, and provide whole new avenues for division.

There are different measures that could be enacted, but the effectiveness of each is highly questionable. Because much of this is not a social media problem, it’s a people problem, as Meta says. The problem is that we now have access to everyone else’s thoughts, and some of them we won’t agree with.

In the past, we could go on, blissfully unaware of our differences. But in the social media age, that’s no longer an option.

Will that, eventually, as Zuckerberg says, lead us to a more understanding, integrated and civil society? The results thus far suggest we have a way to go on this.

