AI candidates are running for office



In the United Kingdom, an upstart political candidate going by the name “Steve” is calling for a four-day work week and economic incentives for commuters to switch to electric cars. But Steve isn’t quite like the other candidates in the race. “He” isn’t even human. Steve is actually an AI-powered chatbot that Brighton Pavilion voters can speak with over an online voice chat. The human creators behind Steve, and several other AI candidates jostling for political office, are betting that large language models like those built by OpenAI and Google may somehow be better equipped to accurately and authentically represent the views of voters than their morally fallible flesh-and-blood competitors.


In reality, these AI candidates will likely face an uphill battle. At the moment, it’s not at all clear whether running a computer for elected office is even legal. The practicality of relying on a piece of software to conduct day-to-day political duties among other humans is more baffling still. Even if these models can find a way to overcome those sizable hurdles, they will still have to prove they can avoid making up facts and repeating harmful biases, two issues endemic in generative AI systems. As of now, it’s far more likely these novel efforts will be remembered as yet another shiny trick propped up by immediate shock value and little else.

Here are a few AI candidates currently running for office around the world. 

‘AI Steve’

AI Steve—which is currently running for a seat in the U.K. Parliament as an Independent—is an AI avatar based on a British businessman named Steve Endacott. The AI candidate was created by Neural Voice, a company headed by Endacott that builds custom, voice-based AI models. Voters can interact with AI Steve, ask it about policy positions, and, more importantly, recommend their own. AI Steve’s initial policy frameworks mirror those of the political party “Smarter UK,” which AI Steve notes is aiming to “revolutionize democracy by involving constituents in policy creation.” Endacott envisions his AI candidate transcribing and summarizing the conversations it has with voters and then using information from those summaries to advocate for particular policies. AI Steve can reportedly hold up to 10,000 separate conversations at the same time. This model, the thinking goes, could emphasize the representative part of representative democracy.
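Neural Voice hasn’t published how its system actually aggregates those conversations, but the basic loop Endacott describes—transcribe, summarize, tally the recurring asks—is simple enough to sketch. The Python below is a minimal illustration only; the Conversation class, summarize placeholder, and aggregate_policies function are hypothetical stand-ins, not the company’s real code or API.

```python
# Hypothetical sketch of the pipeline Endacott describes: take transcribed
# voter conversations, pull out policy asks, then tally how often each comes up.
from collections import Counter
from dataclasses import dataclass


@dataclass
class Conversation:
    voter_id: str
    transcript: str  # text already transcribed from the voice chat


def summarize(transcript: str) -> list[str]:
    """Placeholder for the LLM call that would extract policy asks.
    Here we simply treat each non-empty line of the transcript as one ask."""
    return [line.strip().lower() for line in transcript.splitlines() if line.strip()]


def aggregate_policies(conversations: list[Conversation], top_n: int = 5) -> list[tuple[str, int]]:
    """Count how many voters raised each policy ask across all conversations."""
    tally: Counter[str] = Counter()
    for convo in conversations:
        tally.update(set(summarize(convo.transcript)))  # one vote per voter per ask
    return tally.most_common(top_n)


if __name__ == "__main__":
    sample = [
        Conversation("voter-1", "four-day work week\nEV incentives for commuters"),
        Conversation("voter-2", "four-day work week"),
    ]
    for policy, votes in aggregate_policies(sample):
        print(f"{policy}: raised by {votes} voter(s)")
```

In a real deployment the summarize step would be a model call and the tally would feed whatever process turns voter input into an actual agenda; the sketch only shows the aggregation idea, not the hard part.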

“We are actually, I think, reinventing politics using AI as a technology base, as a copilot, not to replace politicians but to really connect them into their audience, their constituency,” Endacott recently told Wired.

One thing AI Steve definitely can’t do, however, is show up for votes in person. Endacott says he will fill that role personally, acting as a type of human surrogate attending rallies and meetings in his AI’s place, though the legality of doing so is murky. The businessman says he intends to vote in line with AI Steve’s policy decisions even in instances where they diverge from his own personal views on a particular subject.

‘Virtual Integrated Citizen’

Voters in the Wyoming capital of Cheyenne may soon have the opportunity to elect an AI chatbot as mayor, but they will have to do so through a human proxy. The AI candidate, referred to as “Virtual Integrated Citizen” (VIC), was created by a local library employee named Victor Miller. VIC is built on top of OpenAI’s GPT-4 and, according to Miller, has an “IQ” of 155. (Scholars have questioned the usefulness of IQ as a measure of intelligence in humans.) Miller claims his chatbot generated the name Virtual Integrated Citizen itself. The abbreviation, VIC, is also shorthand for Victor, Miller’s first name.

Cheyenne voters won’t actually get to vote for VIC directly due to local election laws that prevent nonhumans from running for office. While Miller’s name will be the one listed on the ballot, the library employee recently told Cowboy State Daily he intends to let the chatbot do “100% of the voting” if he’s elected. VIC’s creator claims he would simply serve as a vessel, or a “meat avatar” in his words, to carry out the AI’s orders.

“Obviously, if you’re voting for VIC, you’re voting for Victor Miller,” he told the publication. “That’s the legal thing you’re doing and I’m the human heart of it and you need to trust me. But trust me in telling you I’m going to let that guy do all the work.”

‘Yas Gaspadar’

A third AI candidate, unassumingly named “Yas Gaspadar,” is reportedly running for a seat in the Belarusian parliament. Gaspadar, who is described as a “35-year-old from Minsk,” is really an AI chatbot built on top of OpenAI’s GPT-4. According to a blog post, Gaspadar claims to be running on a pro-democracy platform with policy positions that include banning the import of nuclear weapons, investing in education, and advocating for free and fair elections. A headshot of the candidate, also generated by AI, depicts Gaspadar as a youthful blond man wearing a dark brown suit and red tie.

Unlike AI Steve and VIC, Yas Gaspadar appears to have been made explicitly as a protest symbol. The chatbot was created by Sviatlana Tsikhanouskaya, who leads the country’s anti-authoritarian opposition.

“Frankly, he’s more real than any candidate the regime has to offer,” Tsikhanouskaya wrote on X. “And the best part? He cannot be arrested!”

What’s the point of AI political candidates? 

Hypothetically, AI-powered political candidates could use their vast compute capabilities to quickly absorb and summarize large swaths of tedious government documents and policy briefs that might otherwise get missed by a human. That in-depth parsing of material could then, in theory, lead the AI politician to make more rational policy prescriptions. If the lingering issue of AI hallucinations is ever mitigated (which remains a huge if), an AI’s summarized inventory of communications with voters could then function like training data used to form a policy agenda. That agenda, if followed dutifully, could reflect a clearer representation of voters’ aggregate interests than one created by a human susceptible to self-interest and political gamesmanship.

But that’s all still extremely hypothetical. In reality, AI politicians—for now at least—are at best a cheap parlor trick and at worst a harmful distraction. For starters, as the VIC example in Wyoming highlights, it’s unclear whether these nonhuman, algorithmic entrants are even legally allowed to run for office in most localities. Chuck Gray, Wyoming’s Secretary of State, reportedly sent a letter to the state’s county clerk saying the AI candidate violates “both the letter, and spirit, of Wyoming’s Election Code.” Even if voters are given the opportunity to cast ballots for AI candidates, they will have to trust that the tools’ creators will actually abide by the models’ policy prescriptions. That gets increasingly thorny when the AI candidate advocates for a policy at odds with its creator’s personal interests.

Generative AI models like ChatGPT and Google’s Gemini also aren’t nearly as unbiased or immune from error as the creators of these AI candidates seem willing to admit. All AI models suffer from so-called “hallucinations,” where they present statements as facts that simply aren’t grounded in reality. That means an AI candidate supposedly immune to human irrationality and dishonesty might just as easily make up facts in a policy proposal or during a debate. It’s also unlikely an elected AI candidate could actually participate in the person-to-person discussion and deal-making that make up a not insignificant portion of a politician’s daily duties.

And even with political dissatisfaction heating up around the world, it’s unclear whether voters would actually find AI candidates any more reassuring. Last year, activists and voters spoke out after a chatbot released by New York City Mayor Eric Adams’ administration erroneously advised small business owners to violate local laws. On the national level, voters generally appear more wary of AI than optimistic. More than half (58%) of US adults included in a recent poll conducted by the University of Chicago and The Associated Press said they were concerned AI would contribute to the spread of misinformation ahead of the 2024 presidential election. A similar share of US adults recently surveyed by Pew Research said they were more concerned than excited about the increased use of artificial intelligence in daily life.




