
Who Writes the (AI) Rules


Microsoft recently launched Tay, the now famously raunchy and decidedly politically incorrect chatbot, in an attempt to get in with the cool kids on Twitter. Very quickly, they pulled her down (I’m already noticing how easy it is to personify our AI overlords, er… assistants), essentially saying that they hadn’t provided enough rules.

I have been floating the question “who writes the rules?” past my fairly educated friends. One of them, a senior software development executive, flatly stated, “old white men,” without much in the way of irony. Others have looked at me quizzically, not quite getting my point.

We live in an age where our cars can kill us (granted, this is in order to save more lives, but the loss of “free will” is clear) and our social media can f*** with us (see Facebook’s experiments in behavior modification through what it shows you in your newsfeed, which will get even more interesting now that it is collecting even more robust semantic data from its icon-based like button).

In our field of marketing services, we are using the gains in processing power and AI to analyze large data sets more quickly. This allows us to find patterns, to accurately predict sales volumes and the investment levels required to hit them, and to rapidly craft messages, grounded in behavioral economic theory, that drive desired behaviors.
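As a rough sketch of what that prediction step can look like (everything here is hypothetical: the weekly figures are invented, and a straight-line fit is far simpler than a real marketing-mix model), consider fitting a sales-response curve and using it in both directions:

```python
import numpy as np

# Hypothetical weekly data: ad spend (in $k) and the sales volume
# (units) observed at each spend level. The numbers are invented.
ad_spend = np.array([10, 20, 30, 40, 50, 60], dtype=float)
sales = np.array([120, 190, 260, 310, 380, 430], dtype=float)

# Ordinary least-squares line: sales = slope * spend + intercept.
slope, intercept = np.polyfit(ad_spend, sales, 1)

# Forward: what volume should a new investment level produce?
target_spend = 75.0
predicted_volume = slope * target_spend + intercept

# Inverse: what investment level does a volume target require?
volume_target = 500.0
required_spend = (volume_target - intercept) / slope

print(f"model: sales = {slope:.1f} * spend + {intercept:.1f}")
print(f"predicted volume at ${target_spend:.0f}k: {predicted_volume:.0f} units")
print(f"spend needed for {volume_target:.0f} units: ${required_spend:.1f}k")
```

The inverse step is the one to notice: the model’s output feeds directly into an investment decision, whether or not anyone has checked the model.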

Corporations are writing rules for AI that govern our daily interactions. This will become even more prevalent as the internet of things (IoT) rapidly expands: more connected devices, more data, more opportunity to provide services (or modify behaviors) through the use of machine learning (ML) and AI. If you’ve ever worn a Fitbit and signed up with your friends, you have seen what a little social proof and nudging can do to your daily routine.

Just to tie all this together, these corporations are mostly run by old white men. Ipso facto, my friend’s offhand comment was dead on. Those software guys are smart.

The promise of AI to assist humans is fantastic. We can automate tedious tasks and make huge gains in productivity. Health care will be revolutionized by the discovery of new drug pathways; driving will go away, and car “ownership” will follow, decreasing harmful emissions, helping with global warming, and counteracting energy depletion; we can optimize water usage to stave off the impacts of droughts; the list goes on.

We must remember that there are risks in all complex systems. We have already learned from collapses of complex models that there is a “law of unintended consequences” and that “Black Swan” events happen. If the wrong data is fed in, then the whole model built upon it is flawed. If the rules of the model are ill-defined, as in Microsoft’s case, then the output may be quite shocking.
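To make the wrong-data failure concrete, here is the same toy fit as above with one mundane, hypothetical data error baked in: half the spend column was logged in dollars rather than $k. The code runs cleanly and returns a confident number; nothing in the output warns that the model beneath it is nonsense.

```python
import numpy as np

# Same least-squares fit as before, but three of the spend values were
# recorded in dollars instead of $k -- an easy-to-miss entry error.
ad_spend = np.array([10, 20, 30, 40000, 50000, 60000], dtype=float)
sales = np.array([120, 190, 260, 310, 380, 430], dtype=float)

slope, intercept = np.polyfit(ad_spend, sales, 1)

# The fit still "succeeds" and happily yields a forecast. No exception,
# no warning -- the flawed inputs are invisible unless someone looks.
print(f"predicted volume at $75k spend: {slope * 75 + intercept:.0f} units")
```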

While we’re making gains in knowledge and productivity through AI, we must learn to pay more attention to the underlying data, models, and rules that govern it. Just as we question our own internal heuristics and innate biases, we must not simply believe a bot’s model without understanding how it works. This may seem more like philosophy than science, and maybe that’s the point.