How robots are coming for your vote

26 November 2019

From music playlists to marriage prospects, we’ve grown comfortable with algorithms making decisions for us. So why should picking your candidate be any different?

You’re standing in the polling booth, crisp ballot paper in hand, a dozen or more names printed across it. Many of them you’ve never encountered before. The cross you put next to one of them, however, will shape the government for years to come. How do you choose?  

It is a conundrum faced by voters in democratic elections all over the world. Usually it will come down to a combination of party loyalty, one or two memorable policies, and how you think your friends and family will vote. To politicians’ dismay, the electorate can be incredibly fickle, basing their decision on a candidate’s haircut or how attractive they are, a recent news story, or the desire to “send a message” to those in power.

Making an informed choice in the voting booth requires a great deal of investment – reading up on every candidate and weighing their relative merits against one another. Most people don’t have that kind of time. Voters in the world’s largest democratic election, in India, for example, had to choose from more than 8,000 candidates nationwide from over 650 parties, in polling split into seven phases.

With the US Presidential elections looming in 2020 and a bitter general election likely in the UK in the coming months, the task facing voters is unlikely to get any easier. But can technology help?

After all, machines make all kinds of difficult decisions for us. Recommendation engines find us the cheapest flights, the best car insurance and the optimum mobile phone package; serve us advertisements for things we didn’t know we wanted; suggest books to read, movies to watch and gift ideas; and curate playlists of our favourite artists. We even let machines shortlist our romantic prospects. So can artificial intelligence find our perfect match when it comes to political candidates?

Making an informed decision in elections can be time consuming and often voters resort to more superficial ways to make their choice (Credit: Getty Images)

Enter Doru Frantescu, director of VoteWatch Europe. This year, hundreds of thousands of EU citizens used a tool the think tank produced to match voters with their most suitable candidates in the European Parliament elections. To do this, Frantescu’s team put together a suite of 25 questions drawn from real-life decisions made by the European Parliament. Visitors to the website voted as if they were politicians themselves, and the algorithm used their answers to match them with like-minded candidates. The team hopes the tool, called YourVoteMatters, will be adapted for other elections in the future.

“Most choices we make are based on anything else but logical reasons, mostly emotional ones – how a person speaks, or how a person dresses,” says Frantescu. “We built this so people can make informed choices based on logical reasons. We are trying to bridge the gap using technology to help citizens make informed choices.”

That’s easier said than done. To begin with, Frantescu’s team had to choose the most contentious issues – it doesn’t help to know where a voter stands on an issue which most parliamentarians agree upon. The issues also had to be popular ones, so users would have firm opinions on them. Finally, counterintuitive results had to be weeded out – for example, environmental legislation that was voted down by green parties, perhaps for not being stringent enough.
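Neither organisation has published its matching code, but the core idea – score each candidate by how often their recorded votes agree with the user’s answers, then rank them – can be sketched in a few lines. All names and positions below are invented for illustration:

```python
# Sketch of a questionnaire-based candidate matcher, in the spirit of the
# tools described above. Every candidate and issue here is hypothetical.

def match_candidates(user_answers, candidate_records):
    """Rank candidates by the fraction of shared issues on which they
    agree with the user.

    user_answers: dict mapping issue -> 'for' / 'against'
    candidate_records: dict mapping candidate name -> {issue: position}
    """
    scores = {}
    for name, record in candidate_records.items():
        shared = [issue for issue in user_answers if issue in record]
        if not shared:
            scores[name] = 0.0  # no overlapping voting record to compare
            continue
        agreements = sum(user_answers[i] == record[i] for i in shared)
        scores[name] = agreements / len(shared)
    # Best-aligned candidate first
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

user = {"carbon_tax": "for", "farm_subsidies": "against", "visa_caps": "against"}
candidates = {
    "Candidate A": {"carbon_tax": "for", "farm_subsidies": "for", "visa_caps": "against"},
    "Candidate B": {"carbon_tax": "against", "farm_subsidies": "against", "visa_caps": "for"},
}
ranking = match_candidates(user, candidates)  # Candidate A agrees on 2 of 3 issues
```

The real systems are far richer – they weight contentious issues more heavily and draw on thousands of recorded votes – but the principle of scoring agreement between a voter’s answers and a candidate’s record is the same.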

For US citizens, Taylor Peck, Nick Boutelier and colleagues have developed a similar tool, covering local and national races as well as the 2020 Presidential election. As with Frantescu’s system, users answer a battery of questions on immigration, the environment, foreign policy and more to discover their most closely aligned candidate.

The quiz was originally uploaded to Facebook in 2011; over a million people took it in the first six months, and as of September 2019 the tool, called iSideWith, has attracted 52 million users. A key draw, says Peck, was including third-party candidates, who are often overlooked by the media – and voters. “People often said ‘Wow, I didn’t know about these candidates, I have so much in common with them’,” says Peck. “There are options people didn’t know that they had.”

India held the world’s largest democratic elections earlier this year, but those heading to the ballot box faced a bewildering number of options (Credit: Getty Images)

But can we trust tools like these to serve us the politician most deserving of our vote? The selection of issues on which candidates are rated carries a great deal of influence, though both Frantescu and Peck are keen to point out that their representation of candidates’ views is evidence-based: either from votes they’ve cast in government, manifesto pledges, or comments made on the record.

Then there is the question of how the balance of politician versus party should be weighted. A candidate may lean toward right-wing economic principles, for instance, but these aren’t likely to be exercised in parliament if their party as a whole leans to the left. Can algorithms capture the complex world of politics – which is often defined by compromise and pragmatism?

One algorithm created by Zeliha Khashman, a researcher in international relations at the Near East University in Nicosia, Cyprus, claims to be able to predict how US Members of Congress will vote on national issues by combining their public opinions with their party affiliation.

But Beth Singler, an AI researcher at the University of Cambridge, warns against investing too heavily in their predictive power. “If you have someone running in a presidential election, the candidate may never have been president before, so you can’t say they’ll act in the way that they, or their data, predicts,” she says. “You can’t say that people would definitely act in a certain way – people are messy.”

Voting by proxy

Even if I fill out all of the questions posed on YourVoteMatters and iSideWith, I’m only able to give the highlighted issues a moment’s consideration. Deep down, few of us have a clear picture of how farming subsidies should work, or the complexities of trade agreements. The whole point of politics is electing someone else to give these matters serious thought. What I need is someone who aligns with my core values, rather than my shaky opinions.

The best recommendation engines, then, are those which can work without me having to bone up on the nitty-gritty of the Irish backstop, H-1B visa regulations, or World Trade Organization rules. Research at the University of Texas found that a handful of questions on non-political topics, like whether I prefer cats to dogs, can give a fairly good estimate of my political alignment. But I need something more precise.

There are no tools able to do this yet – although the developers of a piece of AI software called Nigel (no, not that one) claim that it will get to know you by observing your behaviour, and may even one day help you decide who to vote for.  So far, however, its achievements are modest: installed on a phone, the software learned to switch the mobile to silent in cinemas.

Artificial intelligence could let our smartphones use what they learn about us to tell us how to vote (Credit: Getty Images)

There’s another problem: when I look into the algorithmic mirror, I might not like what I see. Political identity is important to many of us, and discovering that our political alignments are different from what we believe them to be could be hard to swallow. For example, it would be disconcerting to find out that your views and online behaviour mark you out as a conservative, when you believed that, like your friends and family, you inhabited the left of the political spectrum.

Flipping the script

Artificial intelligence is also a double-edged sword. Just as these algorithms can be used to interrogate candidates, the information they collect can tell politicians what voters think on various issues. Legislators in Austin, Texas, for example, purchased anonymised data from iSideWith to see how the city’s residents felt about ride-sharing apps. “We helped the city council understand the issue, and how diverse voters were on it, rather than relying on the people who turn up to public meetings and shout about the issue,” says Peck.

The use of algorithmic insights in this way, however, can be controversial. During the UK Brexit referendum, Facebook users were targeted on their likely trigger points, such as immigration or over-zealous bureaucrats, and served political ads which engaged those fears. The ability to target adverts in this way was made possible by data that Facebook has collected on its users. (You can even see for yourself what Facebook believes is your political affiliation).

The practice of targeting adverts caused a scandal when it came to light, and AggregateIQ, the Canadian company that worked on the UK referendum campaign, is now under investigation.

There is another concern as AI creeps into the political sphere and influences voting behaviour: by accident or design, even the most data-stuffed and complex algorithms come pre-packaged with very human flaws. “AI is not purely neutral in decision making – it can perpetuate existing biases,” says Singler.

There have been growing concerns that algorithms used to curate what people see on social media sites may subvert the democratic process (Credit: Alamy)

This is important, as artificial intelligence increasingly plays a role not only in determining which political candidates the public are exposed to, but also what we don’t see. Social networks are increasingly reliant on AI to root out inaccurate and fraudulent information posted online in the run-up to elections. Already accused of allowing this sort of interference to mar the UK referendum and the 2016 US Presidential election, Facebook is working to make its networks more resilient to bad actors.

For India’s 2019 general election, the company launched several tools to combat misinformation. Suspicious articles were shown lower in newsfeeds and page administrators were alerted if they shared debunked articles. A “candidate connect” tool also gave users verified information on local candidates.

Candidates in the 21st century may find they will have to convince the algorithms that curate our newsfeeds that their campaigns are genuine

And in 2018, WhatsApp changed its platform so that a message could only be forwarded to five others, to limit the spread of false information. The change was quietly rolled out across the world earlier this year. New research suggests that this strategy is working.

The tricky work of deciding what constitutes fake news still rests with independent human fact checkers, but the scale of social networks means they are often overwhelmed. As a result, candidates hoping to break through to voters in the 21st century may find that as well as overcoming the usual hurdles, they will have to convince the algorithms that curate our newsfeeds that their campaigns are genuine.

An informed populace is key to a strong democracy, and that’s part of the reason why we have regulations that defend free speech and penalise misinformation. But as AI becomes more and more involved in deciding what we see online, our concern over whether we can trust the authors of the information we consume could be augmented with fears about the trustworthiness of the algorithms that compose our newsfeeds.

Cutting out the middleman

If we do decide to place our faith in these algorithms, we could give them a much bigger role in the democratic system. If our daily activities on Facebook supply enough data to understand our political will, do we even need politicians at all?

“It may be that we end up voting for algorithms that a politician should follow,” says Frantescu. He notes that a manifesto, after all, is a type of algorithm describing what sort of promises a politician would keep were they in power.

Matching voters to the candidates that most represent their views based on their past record and public statements could make elections more democratic (Credit: Alamy)

“But these are intentionally very broad, so that politicians keep a big margin for manoeuvre, and that they can spin things around after the elections,” says Frantescu. “An algorithm doesn’t do that, it instead delivers what we ask it to deliver, and it does not have personal interests that affect its decisions.”

Cesar Hidalgo, director of the Collective Learning group at MIT Media Lab, is also pushing the idea that we can boost citizens’ engagement with the democratic process by doing away with politicians altogether. While some governments already incorporate elements of direct democracy – asking citizens to vote on new legislation – Hidalgo’s “augmented democracy” project imagines making citizens responsible for all legislative decisions, with digital agents voting on their behalf.

Electronic voting makes counting votes easier, but fears about its security mean it has not been adopted in many countries (Credit: Getty Images)

Politicians may feel stung by the idea that their job could be automated: after all, what robot could replicate skills such as leadership, oratory, inspiration, compromise? But the idea is not new: Singler notes that science fiction author Isaac Asimov contemplated just such an outcome in Evidence, a 1946 short story which follows a politician who may – or may not – be a robot. Fears that he may be some kind of Manchurian candidate for roboticists are ultimately defused by the righteousness of his time in office: after all, a robot in Asimov’s universe must follow the Three Laws.

Real life, however, has some way to go. “I don’t know if I’d ever trust the neutrality of an AI system,” says Singler. “It’s a bit more complicated than that – I hope.”

Which brings us back to the polling booth, and our spool of ballot paper. Can a machine mind help you decide which candidate to vote for? Yes, and it’s likely that one or more has already influenced you in some way. But the final decision is yours alone to take – and take responsibility for.


BBC Future
