AI companies are starting to recognize the need to self-regulate, at least in spirit. The Future of Life Institute, a non-profit AI policy group, created the “Asilomar AI Principles,” a set of 23 principles describing the ethical development and use of AI. The principles have been endorsed by more than 3,500 individuals, including leaders from both industry and academia. Efforts like this are valuable for building consensus around the ethical development of AI and affirming our good intentions, but consensus cannot stop someone from adapting beneficent AI for malicious purposes. It also remains to be seen whether these principles will have any impact on how companies use AI or how they allow their AI tools to be used by customers.
Self-regulation, however, requires more than good intentions and general principles. Recently, it was discovered that Google had been licensing its AI technology for military purposes. The resulting backlash from both employees and consumers forced Google to pledge not to renew the contracts in question. [Update: Since this interview was conducted, Google formed an external advisory panel to guide how its AI tools are used. The panel was disbanded one week later after Google employees objected to the social views of one of the panel members.] Given that foreign governments have already declared their intent to build AI-powered weapons, everything from missiles to submarines, one could argue that we are already in the midst of an AI arms race, and that the most ethical thing we can do is promote the controlled weaponization of AI for defensive purposes.
Another controversial topic that AI companies must address is whether to open source their technology, that is, to share the source code with the general public. Although the open source model has many benefits for developers and users alike, I think it is a mistake to open source advanced AI, at least until anti-AI protections are put in place. In the early 2000s, we saw numerous incidents in which amateur hackers known as “script kiddies” adapted open source code into malware. These individuals were not experts or leaders in the field. Anti-virus software eventually caught up and stopped the spread of the malware, but not before it caused billions of dollars in damage. As a powerful “force multiplier” of human intention, AI has the potential to enable significantly more damage than this. We will need anti-AI or AI-detection software, analogous to today’s anti-virus and anti-malware software. Until it exists, it is effectively impossible to prevent malicious users from turning open source AI to harmful ends.
For now, companies that want to take a proactive stance on self-regulation should take the following steps:
1. Reduce the amount of code that is shared via open source;
2. Exercise voluntary transparency about safety standards, including disclosure of any AI performance testing, especially testing for bias and for statistically rare “edge cases” where the AI could fail catastrophically;
3. Disclose the intended use of AI products, monitor customers for violations of that intended use, and notify the public when violations have caused harm;
4. Declare corporate values and objectives around the development and use of AI technology;
5. Appoint review boards to independently and objectively document compliance with steps 3 and 4.
These recommendations don’t cover all bases, but I believe they will prepare companies for whatever regulations are eventually established. At the very least, they will help companies build a defensible record of developing AI responsibly, with the broader impact of AI fully in mind.