Safety, Liability, Ethics and Privacy
and the Future of AI

A Conversation between Quorum AI CEO Noah Schwartz and Orrick LLP

In July 2018, Quorum AI CEO Noah Schwartz spoke with the emerging technology team at Orrick to discuss the future of AI and the challenges it will bring, from ethics and privacy to safety and liability.

Orrick, Herrington & Sutcliffe LLP is an international law firm founded in San Francisco, California. It has clients in the technology, energy and infrastructure, and financial industries, and is consistently rated one of the top law firms in the world.

With Orrick’s permission, we present some excerpts from this interview below.

Orrick:
Please tell us a bit about your company and your technology?
Schwartz:

My company is Quorum AI, and we’re an artificial intelligence software company. We've created an AI engine that we call Engram, a term that refers to "memory traces in the brain." Engram is a stand-alone AI system that can run on any type of device. Unlike most AI systems available today, it doesn't rely on third-party, cloud-based services.

We've also built the EVA Platform, or the Environment for Virtual Agents. EVA enables AI systems to interact with one another by sharing data and insights in real time. Because of this, AI systems running on the EVA Platform can work together to make complex decisions that would be impossible using traditional methods.

One important point worth mentioning – unlike a lot of the AI services available today, we don’t use Deep Learning, reinforcement learning, or even traditional machine learning methods. We use proprietary algorithms that mimic how neurons in the brain change and grow as you learn. My background is in computer science and neurobiology. These algorithms are an extension of the research that I did while working in academic neuroscience for 12 years.

Orrick:
Deep Learning is very popular. Why did you decide to avoid it when building your technology?
Schwartz:

When I launched Quorum AI in 2013, there was a lot of excitement developing around Deep Learning and Big Data, but I also knew that – going back to the 1990s and even earlier – there were serious limitations to what Deep Learning could do. The market has started to rediscover these limitations over the past few years. Geoffrey Hinton, one of the pioneers behind Deep Learning, recently acknowledged these limitations and called for a new approach.

In a nutshell, Deep Learning systems can be very expensive and difficult to build. They are also very fragile and inflexible once they are put into production. If they need to be adjusted, it's easier to start over and rebuild them from scratch. Despite these issues, Deep Learning excels at processing visual images or image-like data. This makes sense because the original methods behind Deep Learning were inspired by the visual areas of the brain. If you need to do more than image processing, however, Deep Learning alone will not get you very far. If Deep Learning represents the visual parts of the brain, we like to think Quorum AI represents what is happening everywhere else in the brain. In a way, Quorum AI picks up where Deep Learning leaves off.

Orrick:
What are Quorum AI’s main advantages over what’s currently in the AI marketplace?
Schwartz:

Compared to other AI services in the market, our technology excels in five different ways, each providing a unique benefit to our users.

Advantages of Quorum AI Technology:
  1. Cloud-independent, better privacy
  2. Transparent and human readable, not a "black box"
  3. Highly data-efficient: better performance with fewer resources
  4. Flexible, able to be retrained without starting over
  5. On-device learning and personalization, tailored to the individual
The first advantage was mentioned earlier: our AI system is independent and works in a stand-alone manner. Instead of putting the AI in a data center, it can live entirely on the device. This not only makes the AI system portable, but it also means the AI responds faster and is more reliable than other systems. To reframe this from the user's perspective, devices that use our AI will not slow down or break if they have a poor internet connection. Being stand-alone also means improved privacy. When the AI makes a decision, it doesn't need to send the user’s data to the cloud or any third-party services. Everything happens locally, and your data never touches outside systems.

We also designed our AI systems to be transparent and human readable, two qualities that are vital when it comes to AI oversight and safety. We’ve heard a lot about Deep Learning systems and how they are "black boxes." When a Deep Learning system makes a decision, it’s difficult to determine how or why it came to that decision. In our AI, every decision can be dissected and understood by non-technical users. If our AI produces an output that appears biased in some way, we can identify exactly where that bias originated. From there, we can retune the AI and reduce the bias without needing to start over or retrain the system.

Our AI systems also process data more efficiently than other AI systems. This means businesses will see more results and better insights from their data, leading to a faster and bigger ROI. It also means businesses can use AI without needing to pay the enormous ante of Big Data or third-party computing. This efficiency extends to the stand-alone devices I mentioned earlier, such as Internet of Things (IoT) devices. If required, our AI can learn from streaming data as it arrives, without needing to store the data at all and without specialized hardware.
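
[To make the streaming point concrete, here is a minimal, generic sketch of online learning, in which a model updates running statistics from each value as it arrives and never stores the raw data. It illustrates the general technique only; it is not Quorum AI's proprietary algorithm, and every name in it is hypothetical.]

    # Minimal sketch of learning from a data stream without storing it.
    # Welford's online algorithm keeps running estimates of the mean and
    # variance, updating from each value as it arrives and then discarding it.
    # Generic illustration only; not Quorum AI's proprietary method.

    class RunningStats:
        def __init__(self):
            self.n = 0
            self.mean = 0.0
            self.m2 = 0.0  # sum of squared deviations from the current mean

        def update(self, x):
            # One pass, constant memory: the raw value is never kept.
            self.n += 1
            delta = x - self.mean
            self.mean += delta / self.n
            self.m2 += delta * (x - self.mean)

        @property
        def variance(self):
            return self.m2 / (self.n - 1) if self.n > 1 else 0.0

    # Usage: feed readings one at a time, e.g. from a sensor on an IoT device.
    stats = RunningStats()
    for reading in (21.5, 22.0, 21.8, 23.1):
        stats.update(reading)
    print(stats.mean, stats.variance)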

        Devices that use our AI will not slow down or break if they have a poor internet connection.
Our AI system can also be modified to perform new tasks without retraining, making it much more flexible than other AI systems on the market. Most of the AI systems we see in the marketplace are trained using "supervised learning" to perform a single task or decision. For example, a supervised learning system might learn to classify an image as a cat or a dog after being given 10,000 labeled pictures of cats and dogs. Once trained, supervised learning systems are limited to the exact task they were trained to perform, even if a new decision or task is very similar or in the same domain. Our AI system uses "unsupervised" methods, learning continuously from all the data it receives. As a result, our AI system is much more flexible and is not limited to any single task or decision. Instead, it can change from one decision to another without retraining, using the information it has already learned.
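
[As a generic illustration of this flexibility, and not Quorum AI's actual algorithm, the sketch below shows a simple nearest-centroid model that can absorb a brand-new class from a few examples at any time, without retraining on the original data. All names in it are hypothetical.]

    # Generic sketch of the flexibility point above; not Quorum AI's algorithm.
    # A nearest-centroid model can take on a brand-new class from a handful of
    # examples without revisiting the data it has already learned from, whereas
    # a conventionally trained classifier is frozen to its original label set.

    from collections import defaultdict

    class IncrementalCentroids:
        def __init__(self):
            self.sums = {}                  # label -> elementwise feature sums
            self.counts = defaultdict(int)  # label -> number of examples seen

        def add_example(self, features, label):
            # New labels can appear at any time; old data is never revisited.
            self.counts[label] += 1
            if label not in self.sums:
                self.sums[label] = list(features)
            else:
                self.sums[label] = [s + x for s, x in zip(self.sums[label], features)]

        def predict(self, features):
            def distance(label):
                centroid = [s / self.counts[label] for s in self.sums[label]]
                return sum((x - c) ** 2 for x, c in zip(features, centroid))
            return min(self.counts, key=distance)

    model = IncrementalCentroids()
    model.add_example([0.9, 0.1], "cat")
    model.add_example([0.1, 0.9], "dog")
    model.add_example([0.5, 0.5], "rabbit")   # a new class, added on the fly
    print(model.predict([0.55, 0.45]))        # -> "rabbit"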

One final advantage is worth mentioning. When our AI is running on a personal device like a smartphone or autonomous vehicle, it can personalize itself to the individual who is using that device. This is different from other forms of AI and machine learning, in which the system learns an aggregate model of all users. One area where personalized AI has a lot of potential is in video games. Instead of the video game responding the same way to each player, our AI can tailor itself to the individual. This personalization would create new challenges for the player and provide opportunities to experience the game in a novel and exciting way.

In the end, these advantages provide tremendous cost savings by reducing the workload of technical personnel as well as the cost of Big Data and cloud-based AI services. They also accelerate ROI by enabling developers to get their AI products to market faster than if they were using traditional methods.

Orrick:
As AI devices become more integrated into our daily lives, they present privacy challenges. How can companies balance the need for consumer privacy with the ability to design AI-based tools?
Schwartz:

Data privacy is a challenging issue for a lot of companies to address, particularly in AI. Most companies need some amount of customer or user data to operate. It’s Marketing 101: the company collects data to learn about the wants and needs of its customers. But using customer data to make a business decision is not the issue with AI. The problem happens when companies sell customer data or turn it into a product. This is especially problematic when companies do this without user consent. Unfortunately, AI encourages this type of practice because the data are so valuable.

        [The real problems in privacy happen] when companies sell customer data or turn it into a product without user consent. Unfortunately, AI encourages this type of practice because the data are so valuable.
We are starting to see new policies like the General Data Protection Regulation (GDPR) in the EU and the California Consumer Privacy Act that are addressing data privacy. These policies require companies to be transparent about how they are using customer data. They also give customers the right to request the deletion of their data. These policies encourage privacy but much more than that, they restore control of the data to the customer. We've seen data control policies before with HIPAA and even the National Research Act of 1974. These new data policies are much broader, given how valuable customer data is to companies in general and AI companies in particular.

For AI companies, losing control of the data can mean losing control over the integrity of the AI product. In general, the larger the dataset used to train the AI, the better the system will perform. As a result, AI companies have an incentive to collect as much data as possible. If customers demand the deletion of their data, it could reduce the performance of the AI product the company is trying to develop.

Orrick:
How does your company address GDPR?
Schwartz:

We got a head start addressing GDPR, but it was almost by coincidence. One of our co-founders lives in Germany, where data privacy was already a priority even while GDPR was still before the European Parliament. Many of the businesses that we met with in those early days emphasized the importance of data protection. As a result, we built privacy and transparency into every component of our AI systems.

Our company also has a unique advantage compared to cloud-based AI providers because our AI can run entirely on-device or on-premise. This minimizes exposure due to data transmission or storage on insecure systems. Our AI is also transparent so we can track single data points through the system from beginning to end. If we need to inspect a decision or remove data belonging to a user or a group of users, we can do that without having to rebuild the system.
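
[One generic way to picture deletion without a rebuild, offered purely as an illustration and not as Quorum AI's actual design: if the model's state is kept as per-user aggregates, a deletion request can be honored by dropping one user's contribution and re-deriving the totals. All names in the sketch are hypothetical.]

    # Generic sketch of honoring a deletion request without a full rebuild;
    # not Quorum AI's actual design. Model state is kept as per-user aggregates,
    # so forgetting a user just means dropping that user's contribution.

    from collections import defaultdict

    class PerUserAverager:
        def __init__(self):
            self.user_sum = defaultdict(float)
            self.user_count = defaultdict(int)

        def observe(self, user_id, value):
            self.user_sum[user_id] += value
            self.user_count[user_id] += 1

        def forget_user(self, user_id):
            # GDPR-style deletion: remove one user's contribution entirely.
            self.user_sum.pop(user_id, None)
            self.user_count.pop(user_id, None)

        def global_average(self):
            count = sum(self.user_count.values())
            return sum(self.user_sum.values()) / count if count else None

    model = PerUserAverager()
    model.observe("alice", 4.0)
    model.observe("bob", 2.0)
    print(model.global_average())   # 3.0
    model.forget_user("bob")        # deletion honored without rebuilding
    print(model.global_average())   # 4.0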

We don’t control what a company does with the data they collect, but our AI systems enable companies to do more with the data they have. And we help them do this without risking data loss or exposure via third-party, cloud-based systems.

Orrick:
What can companies do to address these data privacy issues while not compromising their business?
Schwartz:

At Quorum AI, we help companies reduce this risk by using data-efficient algorithms. These algorithms learn faster and generate insights from fewer data points. Less data means less privacy risk. We also give companies the option of keeping their AI systems in-house or even on-device. This minimizes the amount of data sent to third-party AI systems, including cloud providers that might send data across state borders as part of an ordinary transaction.

        Companies need to go beyond mere compliance when it comes to protecting customer data.
No matter what type of AI systems are used to analyze customer data or to build AI-powered products, companies need to go beyond mere compliance when it comes to protecting customer data. Companies need to treat data collection as a consensual process. This means being transparent at all stages of data collection and adopting tools that show customers how the company used their data. Trust in data collection works both ways. Customers need to trust that companies will respect and protect their data. Meanwhile, companies need to trust that customers want better products and are willing to share their data to make that happen.

Orrick:
Another area of interest concerning AI is product liability. For example, when devices make autonomous decisions, at what point is the company liable versus the individual? How is Quorum tackling this issue?
Schwartz:

Liability will always be a very complicated issue, mostly because nobody wants to take it! From what I've seen, the commercial AI industry is taking a very direct and pragmatic approach to liability. The question isn't "Will someone die because of this AI?" The question companies are asking is "When someone dies because of the AI, what can we do to ensure we are learning and doing as much as possible to prevent that from happening again?"

        The question isn't "Will someone die because of this AI?" The question companies are asking is "When someone dies because of the AI, what can we do to prevent that from happening again?"
This pragmatism translates into a two-phase approach to liability. First, companies start by looking at liability in the same way they would for non-autonomous devices. The AI in an autonomous vehicle, for instance, is subject to quality control checks to ensure it is safe and reliable, just as a company would test a seatbelt or a baby stroller. Companies also create terms of use that define the acceptable scope of use and limits on liability. Second, beneath the surface, most autonomous devices store enormous amounts of data about every action that is taken. The goal behind this data collection is equal parts legal defense and product development. In the event of a liability dispute, the company can use the data to defend any decisions made by the AI. The company can also analyze the data to find ways to improve the safety of the product. These improvements can then be applied to future versions of the product and, if possible, sent as a software update to any devices currently in use.

The systems that we build at Quorum AI are particularly vulnerable to this issue because our AI can personalize itself to the individual. Typically, companies who use our AI for personalized applications install a boilerplate AI on each new device. Over time, the AI learns and changes its behavior based on how the individual chooses to use the device. The more the device learns, the less its AI resembles the AI that underwent quality control and safety checks during manufacture. If the AI changes as a direct result of how the individual used the device, is the company still liable for what the AI does? The simple argument would be that the more the AI changes, the less the company is liable for its behavior. However, the AI was sold as an inherently flexible system, so the company is liable not only for the boilerplate behavior but for the full range of behavior the AI is intended to express.

At Quorum AI, we have several measures in place to ensure safety and to address these shifting lines of liability. For starters, we don’t publish our work or open source any of our code. We do this because we want to limit where our AI lives and what it’s tasked to do. We are also very selective about who we work with and how they use our AI. Our license agreements prohibit repurposing the AI for applications that are not approved by us. We also work with companies to test the range of behavior and place limits on how far the AI can adapt. Finally, to assist with oversight and control of the device, we enable safeguards to restrict the actions an AI can take on its own. Over time, as the AI proves itself to be safe, the user can choose to disable these safeguards. In doing so, the user assumes liability for that particular action.
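
[The safeguard idea can be pictured with a hypothetical sketch like the one below, which is not Quorum AI's actual implementation: actions outside an allow-list require explicit human approval, and the user can deliberately widen the allow-list, assuming responsibility for what they enable.]

    # Hypothetical sketch of an action safeguard of the kind described above;
    # not Quorum AI's actual implementation. Actions outside the allow-list need
    # explicit human approval, and the user can deliberately widen the list,
    # taking on responsibility for what they enable.

    class ActionGate:
        def __init__(self, allowed_actions):
            self.allowed = set(allowed_actions)
            self.audit_log = []  # every request and how it was resolved

        def request(self, action, ask_user):
            if action in self.allowed:
                self.audit_log.append((action, "auto-approved"))
                return True
            if ask_user(action):  # a human explicitly approves this instance
                self.audit_log.append((action, "user-approved"))
                return True
            self.audit_log.append((action, "blocked"))
            return False

        def disable_safeguard(self, action):
            # The user removes the safeguard for this action and assumes liability.
            self.allowed.add(action)
            self.audit_log.append((action, "safeguard disabled by user"))

    gate = ActionGate(allowed_actions={"adjust_thermostat"})
    gate.request("adjust_thermostat", ask_user=lambda a: False)  # runs without asking
    gate.request("unlock_front_door", ask_user=lambda a: False)  # blocked
    gate.disable_safeguard("unlock_front_door")                  # user opts in
    gate.request("unlock_front_door", ask_user=lambda a: False)  # now auto-approved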

Like all new technology, there are no perfect solutions that will prevent misuse or harm under all circumstances. We're looking ahead with our eyes open to all possibilities, good and bad. And we're doing everything we can to ensure that our AI is used responsibly and with full awareness of its potential.

Orrick:
What do you feel is the primary priority for regulation in the AI industry?
Schwartz:

AI is evolving very quickly, and companies have huge incentives to commercialize the most innovative and transformative technologies as fast as possible. At the same time, legislation to regulate these technologies is slow to develop and enact. We’ve started seeing some efforts to address autonomous vehicles (HR 3388, S 1885), and still more that establish study committees and review panels (HR 4625, HR 4829, HR 5356), but it's not enough to keep pace with the speed of AI innovation. In the meantime, regulation falls to a handful of agencies such as the FAA, FDA, and NHTSA on the federal level, as well as state agencies and legislatures. From these disjointed regulations, I suspect we will see the best and most practical ideas take hold. For now, this is an adequate stopgap until broader, more AI-focused legislation is created.

        [When crafting AI legislation] the highest priority should be regulating the actions that an AI system is allowed to take.
As legislation takes shape, the highest priority should be regulating the actions that an AI system is allowed to take. Regulating AI actions is especially important if the AI is a part of an autonomous system that operates without direct human supervision. If an AI system operates autonomously to perform an action, and that action increases the probability of harm or injury, then the AI should be subject to regulation. It's also important to realize that the harm or injury could be felt in a variety of ways: physical, economic, social, psychological, etc.

Ideally, I think we need a public-private partnership to oversee the regulatory process, similar to the review process we see in the FDA for medical technology or through HHS and FDA for research, food, and pharmaceuticals. These regulatory models were designed to be broad and to apply to any type of technology that involves human consumers or participants. They also include a staged review process in which a mixture of public and private representatives evaluate the risks and benefits of proposed new technology. For AI, it could mean that companies would apply for a license to commercialize AI technology. Once licensed, the company would send new technology through the review process to ensure that it is safe, effective, and minimizes uncontrolled risk.

        California is in the best position to lead this type of [AI regulation] effort, not only for the United States but for the world at large.
This type of process will not be easy to establish. It’s more likely that we will continue to see individual states take the lead by testing a variety of regulatory options, and once best practices are discovered, we will see broader adoption at the federal level. As home to many AI companies, California is in the best position to lead this type of effort, not only for the United States but for the world at large. We saw this with AB 32 and climate change regulation, and I believe something on a similar scale will eventually be needed to effectively regulate the AI industry.

Orrick:
Since those regulations are still being developed, how can companies that develop AI devices and systems do so in the most ethical way?
Schwartz:

AI companies are starting to recognize the need to self-regulate, at least in spirit. The Future of Life Institute, a non-profit AI policy group, created the “Asilomar AI Principles,” a set of 23 principles describing the ethical use of AI. The principles have been endorsed by more than 3,500 individuals, including leaders from both industry and academia. Efforts like this are great for achieving consensus around the ethical development of AI and affirming our good intentions, but consensus cannot stop someone from adapting beneficent AI for malicious purposes. It also remains to be seen whether these principles will have any impact on how companies use AI or how they allow their AI tools to be used by customers.

        [The Asilomar AI Principles] are great for achieving consensus, but consensus cannot stop someone from adapting beneficent AI for malicious purposes.

Self-regulation, however, requires more than good intentions and general principles. Recently, it was discovered that Google had been licensing its AI technology for military purposes. This caused tremendous backlash from both employees and consumers, forcing Google to pledge not to renew the contracts in question. [Update: Since this interview was conducted, Google formed an external advisory panel intended to advise Google on how its AI tools are used. The panel was disbanded one week later after Google employees objected to the social views of one of the panel members.] Given that foreign governments have already declared their intent to build AI-powered weapons – everything from missiles to submarines – one could argue that we are already in the midst of an AI arms race and the most ethical thing we can do is promote the controlled weaponization of AI for defensive purposes.

        I think it is a mistake to open source advanced AI – at least until anti-AI protections are put in place.

Another controversial topic that AI companies must address is whether to open source their technology, that is, share the source code with the general public. Although the open source model has many benefits for developers and users alike, I think it is a mistake to open source advanced AI – at least until anti-AI protections are put in place. In the early 2000s, we saw numerous incidents where hackers known as “script kiddies” adapted open source code and used it as malware. These individuals were not experts or leaders in the field. Eventually, anti-virus software caught up and stopped the spread of the malware, but not before it caused billions of dollars in damage. As a powerful “force multiplier” of human intention, AI has the potential to enable significantly more damage than this. We will need anti-AI or AI-detection software, similar to anti-virus and anti-malware software. Until then, it is effectively impossible to prevent malicious users from using open source AI for harm.

For now, companies that want to take a proactive stance in self-regulation should take the following steps:

  1. Reduce the amount of code that is shared via open source;
  2. Exercise voluntary transparency when it comes to safety standards, including disclosure of any AI performance testing, especially regarding detection of bias and statistically rare “edge cases” where the AI could fail catastrophically;
  3. Disclose the intended use of AI products, monitor for violations of intended use by their customers, and notify the public when it has been adversely affected;
  4. Declare corporate values and objectives around the development and use of AI technology;
  5. Appoint review boards to independently and objectively document compliance with steps 3 and 4.

These recommendations don’t cover all bases, but I believe they will prepare companies for whatever regulations are eventually established. At the very least, they will help companies build a defensible record of developing AI responsibly and with the broader impact of AI fully in mind.

Orrick:
What do you see as the most exciting way AI will impact our lives in the next five years?
Schwartz:

We are going to see a lot of positives and negatives coming out of AI over the next five years. The positive impact will come from breakthroughs in three areas of research. First, edge-based AI that lives on our smart devices will become a lot more powerful and more personalized. Second, we will start to see more “cognitive AI” systems that can perform complex reasoning and understanding. And third, AI-to-AI communication standards will begin to emerge. This will enable AI systems to interact with one another and leverage each other to perform more complicated and elaborate tasks. These innovations will change not only how we work and live, but how we interact with each other and the world around us.

Unfortunately, we are also likely to see AI used as a tool for illegal or unethical activities. Over the past year, we have started to see more and more "deepfakes.” These are fake video recordings produced using AI to combine images of one person with the video of another person. The end result is a very realistic but fake video of the first person performing the actions of the second person. We have also seen an audio version of this technique used to generate fake speech that is indistinguishable from genuine speech. Using this technique, the potential for fraud is enormous, ranging from simple identity theft to political propaganda intended to destabilize social systems and overthrow governments.

        We must remember that the intentions of the creator mean absolutely nothing in the hands of the user.
This issue highlights the real problem we face when building AI and looking toward the future. The person who invented the method behind deepfakes probably didn't have fraud in mind when developing it. AI has the potential to improve the human condition for so many people in so many ways, but in the wrong hands, it has the power to cause immense harm. No matter how beneficent our intentions may be when developing AI, we must remember that the intentions of the creator mean absolutely nothing in the hands of the user. How we prevent misuse of AI in the future is going to depend on us and whatever regulations and safeguards we put into place today.



