![Berkman Klein Center AI](https://miro.medium.com/max/1400/1*vhuHtwXDkXontDotJ7LyAw.jpeg)
Businesses and governments around the world face a complex challenge: how should they implement artificial intelligence (AI) in ways that respect human rights, avoid bias, incorporate diverse perspectives, and yield safe, socially beneficial products? In recent years, numerous AI "principles documents" have emerged from governments, private companies, and advocacy groups. Researchers at Harvard's Berkman Klein Center for Internet & Society studied 36 of these documents. They found significant consensus around core issues such as privacy, transparency, and bias. In January they published their findings in a white paper and in a graphic model of ethical AI principles.

![Principled AI report cover](https://cyber.harvard.edu/sites/default/files/styles/image_large/public/2020-01/PrincipledAI_Cover.jpg)

Workflow sat down with Jessica Fjeld, the study's lead author and assistant director at the Berkman Klein Center's Cyberlaw Clinic, for her thoughts about moving socially responsible AI from theory to practice.

**What was your biggest takeaway from the study?**

That there was such a convergence around general themes. When we started, there were principles documents coming out hot and heavy from the private sector, governments, and advocacy groups. The meta-chatter was that responsible AI was not a field anyone had figured out yet. It wasn't clear where the common threads were.

![Graphic model of ethical AI principles](https://cdn-images-1.medium.com/max/1200/1*dOV0mkw9lpB5DD3PTScIDw@2x.png)

**On what issues was there the most convergence?**

It was surprising to see so much recognition that there should be accountability for AI, especially from the private sector. We also collected information on whether the documents referenced human rights. Our hypothesis had been that governments would be more likely to reference human rights and private-sector organizations less likely.

**How did the private sector prove you wrong?**

It's been increasingly clear that the tech sector has a strong impact on human rights. Tech companies are starting to embrace that and internalize those functions. We're starting to see the results of several decades of organizing on digital rights and privacy now informing how companies are approaching AI.

**How are companies starting to put ethics into practice?**

There is a lot of work to be done, but I see some organizations using AI impact assessments as a way of ensuring new services are responsible and respect human rights. For example, Google recently released its first facial recognition tool, which allows enterprise customers to recognize the faces of famous people in press photos. Google did a human rights impact assessment before releasing the technology, and released a summary of that assessment with the product announcement. Impact assessments like that are going to be huge in terms of helping organizations get their arms around what the challenges are, because the impacts will vary tremendously.

Typically, organizations bring in outside consultants. A consulting firm called Business for Social Responsibility (BSR) did the Google assessment. They come in and gather information via documents and interviews. Then they analyze the relevant legal issues and produce a report that executives use to make actionable decisions.

**How can other companies do this on their own?**

For startups, it can be a big challenge. Academics can be a good resource for early-stage startups, particularly ones that have a strong social justice or social benefit mission. Last year we ran a program called the Challenges Forum, where we invited organizations of all sizes to present their ethical issues with AI to panels of scholars and other experts.

**How can companies maintain transparency in AI development?**

One aspect is transparency about the use of AI tools.