What Judy Said. Seriously.

When Judy Estrin speaks or writes about technology, it’s worth paying attention. At the risk of reading like a fanboy: no one has more experience in the computer technology industry since pre-Internet days, more breadth of experience across technologies and industry roles, or more intelligence and wisdom to share about the good, the bad, and the ugly of how the technology business works and its impacts on all of us.

Now, I know what you’re thinking: “Whoa — one more salvo in the ‘OMG, AI blather’ that we’re drowning in?”

Well, in this case, yes; but stay with me here. Estrin has summed up the essentials on pretty much every important angle that so many people are blathering about. And not just for the moment — bookmark this recent short Time Magazine article of hers and come back to it in the next iteration of AI chatter:

The Case Against AI Everything, Everywhere, All at Once
https://time.com/6302761/ai-risks-autonomy/

Need I say more? Probably not, but in any case, please don’t tune out over the “Against” in the title until you at least consider her background and credentials, read her short article, and then read the balance of my comment below (it leads to our favorite topic: technology innovation in elections).

Let’s be honest: you could probably spend all day, every day, reading stuff that is pro-AI, anti-AI, or angst-AI. Estrin is not anti-AI, any more than any technologist could have been anti-smartphone or anti-social-media in prior epochs. The various technology innovations that fall under the “AI” umbrella are here, quite usable, seeing a lot of activity, and not going away. Yet, given the broad, negative consequences we’ve seen from the adoption of those and other hot technologies, Estrin points out (rightly, I think) that we can and should take the time now to consider how to avoid the many downsides of AI adoption that would occur if it were adopted as heedlessly as social media was.

My poster child for harm from heedless adoption is the supposedly topic-specific chatbot, particularly those intended to provide help with safety-critical topics. For my personal worst example, take the well-meaning people who stood up a chatbot about eating disorders that actually cranked out chatter that was precisely counterproductive, potentially harming real people seeking health and safety information. With this kind of quick-and-easy chatbot standup, such outcomes are unavoidable, because the underlying language models come pre-built, but also pre-polluted with the entire Internet’s worth of fact, fiction, fancy, ideals, malice, and every other aspect of human nature. And “bots” built on them can’t not lie, can’t not deceive, and are highly capable (without intent or malice) of providing actually harmful information.
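
To make that architectural point concrete, here is a minimal sketch of the “quick-and-easy standup” pattern. Everything in it is hypothetical (query_base_model is a placeholder, not any real API): the only thing that makes the bot “topic-specific” is a prompt prefix, so every bias and hallucination of the underlying model flows straight through to the user.

    # Hypothetical sketch only; query_base_model is a placeholder, not a real API.
    SYSTEM_PROMPT = (
        "You are a helpful assistant answering questions about eating-disorder "
        "recovery. Give only safe, supportive advice."
    )

    def query_base_model(prompt: str) -> str:
        # Stand-in for a call to a general-purpose language model, which arrives
        # pre-trained on Internet-scale text: fact, fiction, and malice alike.
        raise NotImplementedError("placeholder for a real model call")

    def topic_chatbot(user_question: str) -> str:
        # The entire "safety" layer is a prompt prefix. Nothing here verifies
        # the answer, filters harmful content, or constrains the model to
        # vetted sources, so the bot inherits every failure mode of the base model.
        return query_base_model(SYSTEM_PROMPT + "\n\nUser: " + user_question)

A prompt is an instruction, not a guardrail; nothing in this pattern can keep the pre-polluted model from surfacing exactly the content the prompt asks it to avoid.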

Looking out from my little pond of elections and technology, I shudder to think what will happen when some hapless people do the same with a “voter assistance” chatbot, and what kind of “hallucinations” (what an understatement of a word) will amplify the various election dysfunctions that are already growing every day.

And you should shudder when you think of the same thing done by people who are not hapless, but actually mendacious, seeking to deliberately inflame election angst.

But why does it have to be so easy to make dangerous systems, and why so hard to make safe systems? Estrin makes a telling point about AI research:

“What is happening is not just a battle for market control. A small number of tech titans are busy designing our collective future, presenting their societal vision, and specific beliefs about our humanity, as the only possible path. Hiding behind an illusion of natural market forces, they are harnessing their wealth and influence to shape not just productization and implementation of AI technology, but also the research.”

(The research point is the one I want to emphasize, and my thanks to OSET Institute colleague and longtime election expert Eddie Perez for highlighting it.)

I hate to sound so much like a public-benefit technologist, but today’s research is more about unicorn-chasing than about public benefit and public safety. There is truly important research to be done, specifically on how generative natural-language AI can be used safely for public benefit, without having to try (futilely) to beat back the biases and the propensity for “hallucinations” that are baked into the generative technology we currently have available. However, that research is, alas, not alluring and maybe not easily monetized.

Of course, that’s not a surprise. But thanks to Estrin’s thoughtful writing, I can pick up this concept and build on it to describe the kind of research we need in the world of election administration in particular, and democracy administration writ large. I’m referring to very practical applied research (not theoretical) to perform right now on responsible use of AI in elections and, indeed, in any part of government or public service.

And I have more — a bunch more — to say about that in coming posts. I hope you’ll join me in this conversation.

John Sebes

Co-Founder and Chief Technology Officer
