Who Should Make a Voter AI Chatbot?

One of the side effects of the AI frenzy this past year is that lots of people are talking about the idea of having an AI-powered Chatbot for their favorite topic or interest. Denizens of election-land are no exception. Like people in nearly every other pond, the idea is more-or-less similar:

Wouldn’t it be great if we could wave a magic wand and have an Oracle appear that is safe and reliable to answer any question about my favorite topic?

That’s a tempting vision because we already have a magic-seeming Oracle (per Clarke’s 3rd Law) that is not safe, and not reliable for every topic — actually there are several of the ChatGPT ilk. It’s so tempting to think, “If we downscoped to just one topic, then could it be safe and reliable?” Well, as the old saying goes, “Not so fast!”

For now, let’s pass over the assessment of whether it would be useful to have a safe reliable Oracle for voters to “Ask Me Anything.” I think it would be valuable; others may not, but for now let’s stick to the question of who should even try to build such a thing.

Because the AI space is so buzzword and vague, I’ll take it in three steps.

1. Who Should Make an Elections AI Chatbot?

For the most common meaning of the vague buzzword “Chatbot” when used in the realm of AI, my answer is “Nobody!” to the question of who should build a voter helper Chatbot. In fact, that answer applies equally well for any Chatbot that’s focused on one particular area of information, and for which there is nearly zero tolerance for errors and hallucinations. 

OK, I suppose that is not the case for every possible kind of Chatbot, but I argue its true for every kind that could be important.

So, go ahead! Make a Baldur's Gate Chatbot that can give you advice on anything such as whether you should help Astarion inscribe Cazador … engaging, if not someone esoteric, and it no harm if it gave bad advice on your trendy game’s play strategy, but what if it happened seemingly on its own to offer a helpful explanation on how to make a pipe bomb?

I hope we can agree that electionshelping voters specifically — is where there must be zero tolerance for errors and hallucinations. So why should nobody try to build a voter AI Chatbot? To see why, consider the most common type of what’s called a “Chatbot.”

2. What exactly is a Chatbot, in the simplest sense?

Well, you’ve probably already used one, for example ChatGPT. You’re using a web browser, you type into a text box some question, say, “Who lives at the north pole?” The snippet of software in your browser then sends a request across the Internet, which means more or less “O great and powerful GPT-4, please provide a response toWho lives at the north pole?’” Then some large computers grind a bit, send the response back to your browser, which displays a response that might be: “Nobody” or “Santa Claus” depending on the context.

That’s a general purpose Chatbot. Now, you could make a special purpose Chatbot about, say, cacti (not this type, but of the plant variety), pretty easily. You make a simple web interface that’s not so different than that of ChatGPT, but has prompts about this or that cactus or cacti in general. GPT-4 already has the entire Internet’s text about cacti in its base model (or foundational model or large language model, aka LLM) so you don’t need to do too much. However, your cactus Chatbot could still answer questions about bomb-making, be tricked into producing hate speech, and so forth. Could you do better? Sure (or as my friend’s son would say, “Oh, hail yeah”).

With increasing levels of effort, your could use the API (the protocol for requests and responses) to try to filter out non-cactus questions, constrain the responses, and so forth. But the base model still has the entire Internet’s text of human invention, good, bad, ugly, true, false, fictitious, mendacious, and more. Experience with current base models show, you can’t keep out all that ugly, and you can’t stop lies and inventions.

Good enough for cacti? Maybe. Elections? Absolutely not. You just can’t build a Chatbot this way for anything important because the underlying base models are fatally flawed. There’s been huge efforts since the Jurassic era of modern natural-language AI (i.e., 2021) to work around the fundamental pollution of base models, but it has proven to be no more effective than an anti-virus or every other attempt to keep malware off your personal computing device, which was designed to run any software from anywhere, without question.

3. Who Should Build Something Better?

So, if simple Chatbots are a terrible idea for elections (and frankly, a bunch of other things), what’s better, and who should build that better thing for elections?

There’s actually several  things to be unpacked from there, and I’ll tackle them in the next couple installments; here are four:

  1. Who should have the responsibility to build and actually construct an unpolluted base model (LLM) for elections?

  2. Who should build a “domain specific”— specific to the domain of discourse of elections — “natural language agent?” (And how?)

  3. Who should be responsible for monitoring such an agent for safety violations, public harm, and more?

  4. Who has the resources to do any of these things? 

There are some serious buzzwords in those serious questions, but fear not, my next installments will unpack them.

John Sebes

Co-Founder and Chief Technology Officer

Previous
Previous

Who Should Make a Voter AI Chatbot? (Part 2)

Next
Next

Towards an AI Research Agenda for Elections and Beyond (Part 3)