
Good AI Won’t Be Like Jar Jar Binks

Siri is like our own personal 2017 Jar Jar Binks.

Remember Jar Jar Binks? George Lucas got so excited about CGI that he made an entire character out of the damn stuff. Not only that, but he went back and re-edited all his great movies to add in extra tidbits and scenes using CGI. And then he made it impossible to buy the versions without the CGI extras. We remember these maneuvers. People were real mad about CGI.

But the new Star Wars movies probably have way more CGI than those featuring the Notorious JJB. I’m not sure, I could probably Google this, but I’d put a few bucks on it. The thing is that when CGI is not attempting to replace a human it usually does a pretty great job.

In the same way we only notice AI when AI is bad. And AI is most bad when it is attempting to replace a human. Like, you know, Siri.

To put it a different way, we only notice AI not being great when it’s claiming to be human-ish. The truth is that the great strides forward in AI are replacing code that was probably technically called “AI” 10 years ago. Google Translate has had “AI” translating text since Google Translate started existing. I actually have to think kind of hard to remember “oh yeah, I guess that is AI.” Google Maps, Search, AdWords, etc. etc. etc. are all code that is complexly attempting to solve a problem! You can call that “AI” or you can call that “a shitload of code” or whatever else you want to call a process trying to be “intelligent”.

This is what Google means when they say AI is everything.

Picture this: a journalist shows up to write an article about strides forward in AI. It’s a primo article so we gotta have some photos! Should we take some photos that show off how much better Google Translate is? NO! How about how Google Search is probably giving you better results due to AI techniques now? DOUBLE NO! That stuff is boring and everyone’s grandparents already use it and no one can understand the subtle improvements! How about some half-baked experimental products that may or may not work, some of which attempt to create a human-ish AI assistant using the same AI techniques as the stuff that actually works? HELL YEAH THIS IS THE FUTURE!

Bing bang boom an article appears explaining the AI future to everyone. And then people read the article and are like “listen I’ve used Siri before, Siri sucks, if this is the future the future sucks, and this AI revolution is dumb.”

But hey, no major tech company is going to turn down the opportunity for press, press is good! So now we have a lot of articles about tech products that have sucked for the last few years and will suck for the foreseeable future, and we’re having it shoved down our throats that this is the future because it’s the only thing new and shiny enough for journalists to write about (except for the NYTimes article about Google Translate, that was really good).

And the crazy thing is that it probably will be our future! It will just be our future in the subtle ways that the future always ends up happening. The bad shit that doesn’t work will eventually go away, and the good subtle things that are obviously beneficial will slowly accrete, which is how building the future mostly happens in the first place.

We thought Google Glass was the harbinger of the image-sensor revolution, but no, it turns out it was phone cameras. You know, the image sensor that everyone was already carrying with them at all times. Which, you know, made sense and didn’t require people to wrap a Jar Jar Binks of a camera around their heads so everyone on the street thought they were secretly recording them.

I suspect the same thing will happen to AI. We’ll have all the headlines and experimental products that fall on their face that everyone loves or gets mad at. Meanwhile we won’t notice all the subtle improvements coming from AI. Then in ten years we’ll wonder what happened to the “AI Revolution” when really we’ll already be living it.