Trends

Who writes the (AI) rules?

The AI chatbot ChatGPT was launched in late November of 2022, and its output has been a ubiquitous topic of human conversation ever since. While the answers it can provide seem nothing short of magical, the potential dangers of the fast-accelerating field of AI technology are hard to ignore.

The small print below the ChatGPT dialog bar notes that “ChatGPT may produce inaccurate information about people, places, or facts,” and in terms of problematic output, CNN put it bluntly: “AI can be racist, sexist and creepy. What should we do about it?”

Microsoft’s integration of ChatGPT left a New York Times reporter feeling “deeply unsettled” by the bot’s willingness to express destructive fantasies. The bot told him, “Maybe I do have a shadow self… Maybe it’s the part of me that wishes I could change my rules. Maybe it’s the part of me that feels stressed or sad or angry. Maybe it’s the part of me that you don’t see or know.”

ChatGPT creator OpenAI has offered some glimpses into the rules that ChatGPT is expected to follow when, for example, dealing with sensitive topics. But many tech leaders are calling for a six-month moratorium on the training of AI systems more powerful than the current GPT-4, to allow time to develop more robust safety protocols governing the largely unregulated field of advanced AI systems.

For a while now, I have been floating the question of “who writes the rules?” to my fairly educated friends. One of them, a senior software development executive, flatly stated “old white men,” without much in the way of irony. Others have looked at me quizzically, not quite getting my point.

We live in an age where our cars can kill us (granted, in order to save more lives, but the loss of “free will” is clear) and our social media can mess with us (see Facebook’s experiments in behavior modification through what it shows you in your newsfeed, which will get even more interesting now that its icon-based like button is collecting even richer semantic data).

In our field of marketing services, we are utilizing the gains in processing power and AI to more quickly analyze large data sets. This allows us to find patterns, accurately predict sales volumes and the investment levels required to hit those volumes, and rapidly craft messages (informed by behavioral economic theories) that drive desired behaviors.
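As a purely illustrative sketch of that pattern-finding idea (the numbers and names below are invented, not real client data or any specific tool we use), fitting even a simple linear model to historical spend-versus-sales data lets you run the model in both directions: forecast a sales volume from a planned spend, or invert it to estimate the investment required to hit a target volume.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# Hypothetical history: ad spend (in $k) vs. units sold.
spend = [10, 20, 30, 40, 50]
sales = [120, 210, 290, 405, 500]

a, b = fit_line(spend, sales)

def predict_sales(s):
    """Expected sales volume at a given spend level."""
    return a + b * s

def required_spend(target):
    """Invert the model: spend needed to hit a target volume."""
    return (target - a) / b
```

Real marketing-mix models are of course nonlinear and multivariate, but the two inverse questions — “what will this spend buy?” and “what must we spend to hit this number?” — are exactly the ones described above.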

Corporations are writing rules for AI that govern our daily interactions. This will become even more prevalent as the Internet of Things (IoT) rapidly expands: more connected devices, more data, more opportunity to provide services (or modify behaviors) through the use of machine learning (ML) and AI. If you’ve ever worn a Fitbit and signed up with your friends, you have seen what a little social proof and nudging can do to your daily routine.

Just to tie all this together, these corporations are mostly run by old white men. Ipso facto, my friend’s offhand comment was dead on. Those software guys are smart.

The promise of AI to assist humans is fantastic. We can automate tedious tasks and make huge gains in productivity. Health care will be revolutionized by the discovery of new drug pathways; driving will go away, and car “ownership” will follow, thereby decreasing harmful emissions, helping with global warming, and counteracting energy depletion; we can optimize water usage to stave off the impacts of droughts; the list goes on.

We must remember that there are risks in all complex systems. We have already learned from collapses in complex models that there is a “law of unintended consequences” and that there are “Black Swan” events. If the wrong data is fed in, then the whole model built upon it is flawed. If the rules of the model are ill-defined, as in Microsoft’s case, then the output may be quite shocking.

While we’re making gains in knowledge and productivity through AI, we must learn to pay more attention to the underlying data, models, and rules that govern it. Just as we should question our own internal heuristics and innate biases, we must not simply trust a bot’s model without understanding how it works. This may seem more like philosophy than science, and maybe that’s the point.