Election Lies, Damned Lies, and Chatbots

Chatbots — The Consummate Stochastic Parrot

With all due respect to the Associated Press (and apologies to Benjamin Disraeli for the irresistible title), I prefer a headline like “Chatbots Lie to Voters About Elections” to their headline from not too long ago, “Chatbots’ Inaccurate, Misleading Responses About U.S. Elections Threaten to Keep Voters From Polls,” which also appeared in POLITICO and was produced by AP’s Emmy-winning global investigative journalist Garance Burke. That article drove me to finally take a moment or three to comment on its substance.

Garance is spot on, of course, reporting the predictably dreadful results when ordinary voters try to use ordinary general-purpose Chatbots to get answers to questions about how their elections work, what they have to do to vote, and so on.

But “lie” is what it amounts to in practice, even though a Chatbot can’t lie per se — having no intention, of course, any more than any other software does — and even though the AI companies operating these Chatbots certainly do not want their Chatbots to lie about elections. Still, many (not all; more on that in the next installment) of those companies don’t seem to particularly mind if the Chatbot tells lies about many kinds of things.

For elections, the net effect is the same: a well-meaning person decides to trust some Internet thing on their screen to tell the truth about something important; gets false and/or confusing information; and in some cases, as a result, is unable to vote, or decides not to vote at all.

We also worry about AI and “lies” about elections in another sense: we have seen, and will continue to see, cases where users of AI tools (not the AI titan vendors, but their customers) use them to create persuasive, false content to intentionally lie to voters. Yet, as Garance points out, we don’t even need those bad actors to get bad results! Ordinary people can get badly deceived and confused just fine on their own, with a little “Do It Yourself” help from Chatbots powered by AI tech from the handful or so of companies making big bets on big profits from that technology.

Why?

  • Why is this happening? 

  • Why is it predictable? 

  • What do the AI titans say? 

  • Is anybody trying to do anything to reduce the harm?

  • What would be better? 

So, so many questions, and some answers… coming in future postings. Let me start here with the first two. I will rant generally, though it all applies to elections as much as anything else. This is happening because Chatbots are the next iteration of “Web 1.0” — more on that in a minute — and are new, powerful, darn convenient… and funny! Try asking (“prompting”) one to “Summarize the process for reviewing and counting absentee ballots, expressed in iambic pentameter quatrains.” The response is rather amusing to read, but you won’t get exact iambs, nor quatrains, nor a description that is accurate for your locality and state.

And it is predictable because these generalist Chatbots are doomed to be generally mediocre at a whole lot of things. The ability to create inaccurate, mendacious, misleading content is built in, especially generating “hallucinations” — the current clever word for inventing out of whole cloth a very complete and convincing story that is shaped like the real world but is, in fact, false. Chatbots are born fabulists. That’s because they’re based on large language models (LLMs) that are huge and fundamentally polluted.

Web 1.0 – All Over Again

I’ve written about that fundamental pollution recently, so I won't say more here, except to paint the analogy with Web 1.0. Back in that day, a lot of people were working out Tim Berners-Lee’s (and others’) vision of collaborating and sharing information of many kinds. And others developed this wonderful convenience called search engines, which enabled an ordinary person with a question or an information need to quickly find a bunch of information that already existed on a website somewhere (and had been “indexed” by the search engine), and dutifully sort through the search results to find what they were looking for. It was early days, and there was a sort of implicit trust among many people: if a search engine helped them find something, and the something was useful and looked legit (as opposed to a personal website expressing personal opinions), then it was likely true enough to run with.

Those days didn’t last long. Those good old interwebs got overrun with ads, spam, scams, malware, and then those profitable and powerful aggregators of info-ish content, social media: you give them your personal information for free, you create content for them for free, and your and others’ content becomes the mind candy for digital billboards, targeted using the personal information.

In other words, the Internet became the repository for a great deal of the good, bad, and ugly of human thought in words and pictures; a repository so vast, cluttered, and monetized that the good old search engines were not so great anymore: they could no longer serve up relevant and trustworthy content that we could use as easily as in early Web 1.0.

Now, the Next Great World Oracle – and Muse!

That’s been the case for years now, and a lot of people have been wanting something better. We thought of The Google as the Great and Powerful Oz; then we saw behind the curtain all the gadgets that helped make our current info-hell-scape. And, like humans throughout history wanting an Oracle to answer our questions, a lot of people have been wanting the next one. It’s arrived: the Great & Powerful Oz-GPT — actually an expanding crew of them.

Yet, unlike the early search days, this new Great & Powerful Oz-GPT comes pre-polluted, having been built on all the detritus that grew out of Web 1.0 and ended up choking search. And it is so much better, way better! It is not just an oracle, but also a muse, an inspiration for creativity; and also a tool that hugely amplifies human creative capability and output, a waldo as well as a muse. In short, as a colleague would say, “A floor wax and a dessert topping!”

And that’s part of what makes the new Great & Powerful Oz-GPT so seductive as a source of a quick-hit of information.

How bad is this? Not bad, if you use AI for the many tasks that it handles well. But it’s truly terrible when generative-AI-based Chatbots are used by people who trust what they say — trust for no really good reason other than being captivated by a machine’s (stochastic) response! — on a topic of real importance, where falsehoods have consequences.

And it is — from my constrained viewpoint in election-land — particularly terrible for U.S. election information, for several reasons: 

  • It varies by state and even county; 

  • It has no central repository or trusted reference point; and 

  • It changes over time, so that what once was true and helpful is still in the training data, can’t be removed, and is now false. 

And that’s all without even considering the hallucinations, much less the intentional election falsehoods that have become part of the training data of the ever bigger LLMs that are the basis of today’s general-purpose Chatbots.

I posed several other questions before the start of this rant, and more of them will come in subsequent postings. I hope you stay engaged! There is actually some good news coming.

John Sebes

Co-Founder and Chief Technology Officer
