Does AI-driven cloud computing need ethics guidelines?

Just ask any marketing person: it's their job to keep demand for a product or service high, so they depend on advertising and other methods to create brand recognition and a sense of demand for what they sell.

These days marketing firms are even more clever, recruiting social media influencers who promote a product or service directly or indirectly, sometimes without disclosing that they are a paid lackey.


We're getting better at influencing humans, whether by traditional advertising methods such as keyword advertising or, even scarier, by leveraging AI technology to change hearts and minds. Often the targets don't even understand that their hearts and minds are being changed.

Researchers have discovered a challenge presented by the AI-powered text generator GPT-2, released by OpenAI in 2019. The AI research lab's language model excited the tech community with its capability of generating convincingly coherent language from almost any prompt.
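To see how little effort that takes today, here is a minimal sketch that generates text from a prompt using the publicly released GPT-2 weights via the Hugging Face transformers library. The prompt and generation settings are illustrative assumptions, not anything from OpenAI's original release.

```python
# Minimal sketch: continue a prompt with the public GPT-2 weights.
# Requires: pip install transformers torch
from transformers import pipeline

# Load the smallest public GPT-2 checkpoint as a text-generation pipeline.
generator = pipeline("text-generation", model="gpt2")

# An illustrative prompt; the model continues it with plausible-sounding prose.
prompt = "Our sales team recommends this product because"
outputs = generator(prompt, max_length=60, num_return_sequences=1)

print(outputs[0]["generated_text"])
```

The point is not the quality of any single completion, but that fluent, on-demand persuasion-shaped text is now a few lines of code and a cloud instance away.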

Shortly after GPT-2's release, observers warned that the powerful natural language processing model wasn't as innocuous as people thought. Many pointed out an array of risks the tool could pose, especially in the hands of those who might seek to weaponize it for less-than-ethical ends. The core concern was that text generated by GPT-2 could persuade people to break ethical norms established over a lifetime of experience.

This is not Manchurian Candidate stuff, where you'll be able to activate a zombie-like killer; it's really more about gray-area decisions. Consider, for example, a person who would not normally stretch the rules for personal gain, such as stealing a customer from another salesperson. Can that moral person be swayed by an AI system that's able to influence human behavior by leveraging its training?

Cloud computing has made AI systems affordable and easy to leverage as a force multiplier for existing or net-new business applications. For example, if a sales processing system could use AI-driven influence to convince buyers to purchase just 2% more, that could mean as much as a billion dollars in additional profit for a large enterprise, with minimal investment.
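The back-of-the-envelope math is simple; the baseline figure below is an illustrative assumption, not a number from any particular company.

```python
# Illustrative back-of-the-envelope math; the baseline is an assumption.
annual_sales = 50_000_000_000  # assume $50B in annual sales for a large enterprise
uplift = 0.02                  # AI-driven influence nudges buyers to spend 2% more

additional_sales = annual_sales * uplift
# With minimal incremental cost, most of this flows straight to the bottom line.
print(f"Additional sales: ${additional_sales:,.0f}")  # Additional sales: $1,000,000,000
```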

The true question is: Even if we can, should we?

I've been working with AI since my college years, and among the reasons it's so interesting is that you can set up these systems to learn independently and change behaviors based on their learning over time. For years, people have predicted the impending domination of our new robot overlords, but AI is still just a tool and should not be a threat, at least not yet.

Although many are calling for guidelines and even government regulation of the potential use and abuse of AI (mostly cloud-based), I'm not sure we're there yet. I do think we'll see some questionable uses of this technology, much the same as tracking apps on our phones during the past few years, but this stuff is largely self-regulating.

If companies or governments are outed for weaponizing this technology in ways that provoke negative public reaction, public pressure will be the regulating mechanism. As with any technology, misuses will have to be examined over time. I have some confidence that human intelligence will do the right thing with nonhuman intelligence, at least for now.