Ensuring that citizen developers build AI responsibly

The AI industry is playing a dangerous game right now in its embrace of a new generation of citizen developers. On the one hand, AI solution providers, consultants, and others are talking a good talk about "responsible AI." On the other, they're encouraging a new generation of nontraditional developers to build deep learning, machine learning, natural language processing, and other intelligence into practically everything.

A cynic might argue that this attention to responsible uses of technology is the AI industry's attempt to defuse calls for greater regulation. Of course, nobody expects vendors to police how their customers use their products. It's not surprising that the industry's principal approach to discouraging applications that trample on privacy, perpetuate social biases, commit ethical faux pas, and the like is to issue well-intentioned position papers on responsible AI. Recent examples have come from Microsoft, Google, Accenture, PwC, Deloitte, and The Institute for Ethical AI and Machine Learning.

Another approach AI vendors are taking is to build responsible AI features into their development tools and runtime platforms. One recent announcement that got my attention was Microsoft's public preview of Azure Percept. This bundle of software, hardware, and services is designed to stimulate mass development of AI applications for edge deployment.

Essentially, Azure Percept encourages development of AI applications that, from a societal standpoint, may be highly irresponsible. I'm referring to AI embedded in smart cameras, smart speakers, and other platforms whose primary purpose is spying, surveillance, and eavesdropping. Specifically, the new offering:

  • Provides a low-code software development kit that accelerates development of these applications
  • Integrates with Azure Cognitive Services, Azure Machine Learning, Azure Live Video Analytics, and Azure IoT (Internet of Things) services
  • Automates many devops tasks through integration with Azure's device management, AI model development, and analytics services
  • Provides access to prebuilt Azure and open source AI models for object detection, shelf analytics, anomaly detection, keyword spotting, and other edge functions
  • Automatically ensures reliable, secure communication between intermittently connected edge devices and the Azure cloud (see the sketch after this list)
  • Includes an intelligent camera and a voice-enabled smart audio device platform with embedded hardware-accelerated AI modules
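
To make the device-to-cloud point concrete, here is a minimal sketch of the kind of edge-to-cloud telemetry that Percept-style tooling automates. It is written against the general-purpose azure-iot-device Python SDK rather than anything Percept-specific, and the connection string and detection payload are placeholders of my own.

    # A rough illustration, not Percept's actual plumbing: push one inference
    # result from an edge device to Azure IoT Hub as a JSON telemetry message.
    import json
    from azure.iot.device import IoTHubDeviceClient, Message

    # Placeholder connection string for a device registered in IoT Hub
    CONNECTION_STRING = "HostName=<your-hub>.azure-devices.net;DeviceId=<device>;SharedAccessKey=<key>"

    def send_detection(label: str, confidence: float) -> None:
        # Open a connection from the (possibly intermittently connected) device
        client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
        client.connect()
        try:
            # Package a single detection (e.g., from an on-camera model) as JSON
            msg = Message(json.dumps({"label": label, "confidence": confidence}))
            msg.content_type = "application/json"
            msg.content_encoding = "utf-8"
            client.send_message(msg)
        finally:
            client.disconnect()

    send_detection("person", 0.97)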

To its credit, Microsoft addressed responsible AI in the Azure Percept announcement. However, you'd be forgiven if you skipped over it. After the core of the product discussion, the vendor states that:

"Because Azure Percept runs on Azure, it includes the security protections already baked into the Azure platform. … All the components of the Azure Percept platform, from the development kit and services to Azure AI models, have gone through Microsoft's internal assessment process to operate in accordance with Microsoft's responsible AI principles. … The Azure Percept team is currently working with select early customers to understand their concerns around the responsible development and deployment of AI on edge devices, and the team will provide them with documentation and access to toolkits such as Fairlearn and InterpretML for their own responsible AI implementations."

I'm sure that these and other Microsoft toolkits are quite useful for building guardrails to keep AI applications from going rogue. But the notion that you can bake responsibility into an AI application, or any product, is troublesome.
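
For instance, a development team could use Fairlearn to measure how a model's predictions differ across a sensitive attribute and flag the model when the disparity crosses a threshold. The following is a sketch of my own, not a workflow Microsoft prescribes; the data, the sensitive attribute, and the threshold are all placeholders.

    # An illustrative guardrail using Fairlearn: compute per-group selection
    # rates and fail when the gap between groups exceeds an arbitrary threshold.
    from fairlearn.metrics import MetricFrame, selection_rate

    # Placeholder labels, predictions, and sensitive-attribute values
    y_true = [1, 0, 1, 1, 0, 1, 0, 0]
    y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
    sex    = ["F", "F", "F", "F", "M", "M", "M", "M"]

    # Selection rate (fraction predicted positive), broken out by group
    mf = MetricFrame(metrics=selection_rate,
                     y_true=y_true, y_pred=y_pred,
                     sensitive_features=sex)

    print(mf.by_group)           # per-group selection rates
    disparity = mf.difference()  # largest gap between groups

    # The 0.1 threshold is arbitrary; real limits need stakeholder review
    if disparity > 0.1:
        raise ValueError(f"Selection-rate disparity {disparity:.2f} exceeds guardrail")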

Unscrupulous parties can willfully misuse any technology for irresponsible ends, no matter how well-intentioned its original design. This headline says it all about Facebook's recent announcement that it is considering putting facial-recognition technology into a proposed smart glasses product, "but only if it can ensure authority structures can't abuse user privacy." Has anybody ever come across an authority structure that's never been tempted, or never had the ability, to abuse user privacy?

Also, no set of components can be certified as conforming to broad, vague, or qualitative principles such as those subsumed under the heading of responsible AI. If you want a breakdown of what it would take to ensure that AI applications behave themselves, see my recent InfoWorld article on the difficulties of incorporating ethical AI concerns into the devops workflow. As discussed there, a comprehensive approach to ensuring "responsible" outcomes in the finished product would entail, at the very least, rigorous stakeholder reviews, algorithmic transparency, quality assurance, and risk mitigation controls and checkpoints.

Furthermore, if responsible AI were a discrete style of software engineering, it would need clear metrics that a programmer could check when certifying that an app built with Azure Percept produces outcomes that are objectively ethical, fair, reliable, safe, private, secure, inclusive, transparent, and/or accountable. Microsoft has the beginnings of an approach for developing such checklists, but it is nowhere near ready for incorporation as a tool for checkpointing software development efforts. And a checklist alone may not be sufficient. In 2018, I wrote about the difficulties of certifying any AI product as safe in a laboratory-type scenario.

Even if responsible AI were as easy as requiring users to employ a standard edge-AI application pattern, it's naive to think that Microsoft or any vendor can scale up a vast ecosystem of edge-AI developers who adhere religiously to these principles.

In the Azure Percept launch, Microsoft included a guide that educates users on how to develop, train, and deploy edge-AI solutions. That's important, but it should also discuss what responsibility truly means in the development of any application. When considering whether to green-light an application, such as edge AI, that has potentially adverse societal consequences, developers should take responsibility for:

  • Forbearance: Consider whether an edge-AI application should be proposed in the first place. If not, simply have the self-control and restraint not to take the idea forward. For example, it may be best never to propose a powerfully intelligent new camera if there's a good chance it will fall into the hands of totalitarian regimes.
  • Clearance: Should an edge-AI application first be cleared with the appropriate regulatory, legal, or business authorities before seeking official authorization to build it? Consider a smart speaker that can recognize the speech of distant people who are unaware they are being heard. It may be very useful for voice-control responses for people with dementia or speech disorders, but it can be a privacy nightmare if deployed in other scenarios.
  • Perseverance: Question whether IT administrators can persevere in keeping an edge-AI application in compliance under foreseeable circumstances. For example, a streaming video recording system could automatically discover and correlate new data sources to compile comprehensive personal data on video subjects. Without being explicitly programmed to do so, such a system might stealthily encroach on privacy and civil liberties.

If developers don't adhere to these disciplines in managing the edge-AI application life cycle, don't be surprised if their handiwork behaves irresponsibly. After all, they're building AI-powered solutions whose core job is to continually and intelligently watch and listen to people.

What could go wrong?