Voting for electoral integrity in a gen AI age - tech platforms need to step up to the mark

By Stuart Lauchlan

April 17, 2024

(Image: Pixabay)

2024 is not a drill. Tech platforms must do better to protect our online civic spaces and democratic systems. 

This Friday, the world’s biggest election gets underway as India begins the lengthy process of going to the polls. The ballots don’t finally close until 1 June, so there’s plenty of room for electoral interference from ‘bad actors’. And there’s a lot more to come - 2024 has more elections around the world than ever before, with more than 40 countries holding major plebiscites, including (most likely) the UK and the US.

So that opening statement from the Global Coalition for Tech Justice (GCTJ) rings very true. In an open letter, the group warns:

Tech companies wield significant and direct influence over political affairs beyond their home countries, yet they operate with minimal political or legal accountability. They have a track record of under-investing in platform safety across global majority countries. This year, the stakes are higher than ever before.

They are indeed, and we’ve already seen enough to assume that worse is to come. Prior to the New Hampshire primary in the US, voters received robocalls that used AI to mimic the voice of Joe Biden in a bid to suppress turnout, while UK Prime Minister Rishi Sunak has been the subject of multiple deepfake videos. In addition, GCTJ notes:

In Indonesia, AI-generated content spread on tech platforms helped an alleged human rights abuser become president, while in Pakistan, female journalists and politicians were targeted in a hateful post-elections social media campaign.

Meanwhile the Microsoft Threat Analysis Center (MTAC) has released a report, Same targets, new playbooks: East Asia threat actors employ unique methods, which predicts that Chinese interference in the November US election is highly likely:

China will, at a minimum, create and amplify AI-generated content that benefits their positions in these high-profile elections. While the impact of such content in swaying audiences remains low, China’s increasing experimentation in augmenting memes, videos, and audio will continue—and may prove effective down the line. While Chinese cyber actors have long conducted reconnaissance of US political institutions, we are prepared to see influence actors interact with Americans for engagement and to potentially research perspectives on US politics.

Responsibilities 

The GCTJ says it has actively engaged with tech platforms, including Meta, Google, X, and TikTok, pressing them to develop and publicly disclose robust election plans for the 2024 elections megacycle and to put online safeguards in place. But:

Despite our and many others’ engagement, tech companies have failed to implement adequate measures to protect people and democratic processes from tech harms that include disinformation, hate speech, and influence operations that ruin lives and undermine democratic integrity. In fact, tech platforms have apparently reduced their investments in platform safety and have restricted data access, even as they continue to profit from hate-filled ads and disinformation.

There’s a similar reaction from Full Fact, a UK not-for-profit organization that in 2023 published 776 fact checks and secured 130 corrections from newspapers, broadcasters, MPs and the Prime Minister. It warns: 

The emergence of popular generative AI tools has created new challenges and opportunities for anyone fighting against misinformation and disinformation. As an election approaches, politicians need to restore public trust in our political system, and pledge to use generative AI responsibly. 

It goes on: 

AI can be an enormous force for good, and Full Fact has used it to build automated fact checking tools for fact checkers and journalists in more than 20 countries, helping them separate checkable fact from opinion in the torrent of information online.

But generative AI is also a serious threat, making it cheap, easy and quick to spread misinformation and disinformation, and creating content so plausible that it is impossible to judge quickly whether something is real or not. In advance of a UK General Election, politicians should pre-empt public disquiet about the legitimacy of an election influenced by AI, and promise publicly to use generative AI responsibly in all their campaigning and other political activity. Not to do so would further degrade already historically-low trust in UK politics and institutions.

Full Fact also argues that responsibility lies with online platforms and search engines, calling on them to become “far more transparent about the data they collect”:

Online platforms and search engines should voluntarily commit to establishing and improving their policies on AI-generated misinformation and disinformation before the end of the current parliamentary session, regardless of whether the UK government compels them to do so… Technology companies should participate in international standards for indirect disclosure techniques and be transparent about the accuracy and reliability of detection tools used to moderate content and enforce policies.

It concludes: 

With elections being held in many countries around the world in 2024, it is essential that online platforms and search engines outline how they intend to mitigate the risks of misleading AI-generated content in the context of election campaigns, while paying due attention to different global contexts and the need to protect freedom of expression, particularly during election periods.

Industry response

So what is the industry doing in practice? Back in February at the Munich Security Conference, 20 companies, including Meta, Microsoft, Google, X and OpenAI, signed up to the Tech Accord to Combat Deceptive Use of AI in 2024 Elections. This is based on seven principles: 

  1. Prevention: Researching, investing in, and/or deploying reasonable precautions to limit risks of deliberately Deceptive AI Election Content being generated. 
  2. Provenance: Attaching provenance signals to identify the origin of content where appropriate and technically feasible.
  3. Detection: Attempting to detect Deceptive AI Election Content or authenticated content, including with methods such as reading provenance signals across platforms. 
  4. Responsive Protection: Providing swift and proportionate responses to incidents involving the creation and dissemination of Deceptive AI Election Content. 
  5. Evaluation: Undertaking collective efforts to evaluate and learn from the experiences and outcomes of dealing with Deceptive AI Election Content. 
  6. Public Awareness: Engaging in shared efforts to educate the public about media literacy best practices, in particular regarding Deceptive AI Election Content, and ways citizens can protect themselves from being manipulated or deceived by this content. 
  7. Resilience: Supporting efforts to develop and make available defensive tools and resources, such as AI literacy and other public programs, AI-based solutions (including open-source tools where appropriate), or contextual features, to help protect public debate, defend the integrity of the democratic process, and build whole-of-society resilience against the use of Deceptive AI Election Content.

These resulted in eight specific commitments: 

  1. Developing and implementing technology to mitigate risks related to Deceptive AI Election content, including open-source tools where appropriate.
  2. Assessing models in scope of this accord to understand the risks they may present regarding Deceptive AI Election Content.
  3. Seeking to detect the distribution of this content on their platforms.
  4. Seeking to appropriately address this content detected on their platforms. 
  5. Fostering cross-industry resilience to Deceptive AI Election Content.
  6. Providing transparency to the public regarding how the company addresses such content.
  7. Continuing to engage with a diverse set of global civil society organizations and academics.
  8. Supporting efforts to foster public awareness, media literacy, and all-of-society resilience.

That’s all good, but it needs to play out in practice. Is every signatory really taking the issue seriously? For example, on the one hand, Meta says it has 40,000 people committed to keeping elections secure online worldwide and has spent more than $20 billion on this to date. On the other hand, Meta’s President of Global Affairs, Nick Clegg, said in London earlier this month that we shouldn’t be getting worked up:

It is right that we should be alert and we should be vigilant. But of the major elections which have taken place already this year, in Taiwan, Pakistan, Bangladesh and Indonesia, it is striking how little these tools have been used on a systematic basis to really try to subvert and disrupt the elections. I would urge everyone to think of AI as a sword, not just a shield, when it comes to bad content. The single biggest reason why we’re getting better and better and better in reducing the bad content that we don’t want on our walls, on Instagram and Facebook and so on, is for one reason: AI.

My take

As the opening statement puts it, this is not a drill. Given the current geopolitical climate, electoral interference is a more serious consideration than ever before. Tech companies around the world have been issuing statements about how seriously they’re taking the issue. We will all be watching to make sure that these are promises fulfilled, not platitudes for appearance’s sake.

