tech companies to promise customers that their information is protected.
Privacy and how you use data will be key discussion topics going forward. If the industry as a whole is going to evaluate how it treats consumer data, the first step is holding big tech accountable.
I think that contract is even more important now than it was with the Web. So I would call it a Contract for the Age of Intelligent Machines. A.I. is a new emergent power that is way beyond what humanity has organized itself around before. A quote that I have heard is, "there's nothing artificial about A.I." It's inspired by people; it's created by people; and, most importantly, it impacts people. It is a powerful tool we are only just beginning to understand, and that is a profound responsibility. These A.I. systems are self-learning, and they have accelerated their impact on a substantial portion of the world's population, with several startling and destructive cases of "rogue A.I." emerging.
For example, recently, Amazon’s machine-learning specialists uncovered that their new recruiting engine did not like women. The company realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way. That is because Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.
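The mechanism behind that failure can be sketched in a few lines. The toy model below is not Amazon's actual system, and all of the data is invented for illustration; it simply shows how a scorer trained on skewed historical hiring outcomes ends up penalizing a token such as "womens" even though gender was never an explicit feature.

```python
# Hypothetical sketch: a resume scorer trained on skewed historical data
# learns a gendered proxy. Invented data; not Amazon's actual system.
from collections import Counter

# Historical training set: (resume tokens, was the candidate hired?).
# Most past hires were men, so tokens correlated with women appear
# mostly among the rejections.
history = [
    (["java", "chess", "captain"], True),
    (["python", "golf", "club"], True),
    (["java", "rowing", "team"], True),
    (["python", "womens", "chess", "captain"], False),
    (["java", "womens", "college"], False),
]

def train(history):
    """Weight each token by the hire rate among resumes containing it."""
    hired, seen = Counter(), Counter()
    for tokens, was_hired in history:
        for t in set(tokens):
            seen[t] += 1
            if was_hired:
                hired[t] += 1
    return {t: hired[t] / seen[t] for t in seen}

def score(weights, tokens):
    """Average the learned token weights over a new resume."""
    known = [weights[t] for t in tokens if t in weights]
    return sum(known) / len(known) if known else 0.5

weights = train(history)

# "womens" never co-occurs with a hire in the skewed history, so any
# resume containing it is penalized -- the model has learned gender as
# a proxy for hiring outcomes.
print(score(weights, ["python", "chess", "captain"]))            # → 0.5
print(score(weights, ["python", "womens", "chess", "captain"]))  # → 0.375
```

The point is that no one told the model to discriminate: the bias enters entirely through the historical labels, which is why audits of the training data, not just of the code, are part of responsible A.I.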
We will, in the fairly near future, hit a moment when it will be impossible to course-correct. It is critical to take a view of A.I. that puts human experience and values first. It is therefore the job of companies and regulators to ensure that A.I. systems are produced responsibly: start from human privacy and data rights and from the company's core values, and ask whether the A.I. actually enables them. Specifically, these contracts (formal and informal) need to ensure that A.I.-powered systems are designed and introduced to cover at least these core characteristics of responsibility:
1. Delivery of positive business and societal outcomes and impact.
2. Augmentation and scaling of human intelligence and experience.
3. Cultivation of transparency, accountability, and trust.
Responsible A.I. is an area in which I have been spending a lot of time with businesses and governments, and I plan to accelerate my engagement in the months ahead.
This may be the greatest question of our age. How will we evolve what it means to be human, as machines and Web services continue to take over many of the tasks we do today? We will have to experience together what I'm calling an evolution of trust. As we transfer more and more of what we do every day over to machines, we will have to establish a level of trust in A.I. and emerging technology that we don't have today. Berners-Lee's Contract for the Web addresses this.
It's interesting: we are the first civilization that has ever had to contemplate co-evolving with machines and technology. So, in a way, we are living in this experiment now, and while we don't yet have the level of trust that we need to go forward with this evolution, we will bridge to that trust by creating contracts like this.
It's already happening today. We all use maps, Alexa, Siri, Cortana, and other narrow A.I.s in our everyday lives, and we trust them to do what they are tasked to do. As we become more and more comfortable with A.I.-led automation, we will evolve to trust A.I. The question is how far we will allow technology to penetrate our lives and our experience of them. I don't know … let's see what happens!
[Editor’s note: This is part of a series of posts sharing thoughts from industry and technology leaders about 2018 trends and 2019 forecasts.]