Artificial intelligence magnifies the ability to analyze personal information in ways that may intrude on privacy interests. In fact, many of the most interesting data sets for AI are those containing a great deal of personal information. As more countries and regions around the world solidify increasingly stringent privacy laws, companies that use AI must take those legal protections of privacy into account when designing their systems. To avoid legal trouble and maintain public trust, AI users should carefully consider the varying privacy laws discussed below before launching any new program.
- Many European countries, including the United Kingdom, have signed a Declaration of Cooperation on Artificial Intelligence, with a legislative proposal expected to follow in the first quarter of 2021. But because the General Data Protection Regulation (GDPR) established a comprehensive privacy law covering all personal data, regardless of type or context, the creation of an AI data-sharing community will need to follow strict guidelines.
- In October 2020, the European Parliament voted to adopt an ethics framework governing AI and privacy, so future laws should be made in line with the following guiding principles: human-centric and human-made AI; safety, transparency, and accountability; safeguards against bias and discrimination; a right to redress; social and environmental responsibility; and respect for privacy and data protection.
- Following that, in November 2020, a Proposal for a Regulation on a Data Governance Act was introduced. It would make public sector data available for re-use in situations where such data is subject to the rights of others; prevent remuneration where data is shared among businesses; allow personal data to be used with the help of a “personal data-sharing intermediary” designed to help individuals exercise their rights under the GDPR; and permit data use on altruistic grounds.
- These proposals are part of a larger strategy to make Europe a leader in the adoption of AI, by encouraging public and private sectors to utilize the new technology and ensure there is an appropriate ethical and legal framework in place.
- In the United States, there is a complicated patchwork of privacy laws, with regulations varying by state, sector, and type of information. The California Consumer Privacy Act (CCPA), which went into effect on January 1, 2020, most closely tracks the GDPR in terms of requirements and information safeguarding. The CCPA includes an extremely broad definition of personal information, intended to capture the sort of robust consumer profile and preference data collected by social media companies and online advertisers.
- Privacy laws focus on personal information, so some companies are examining whether AI programs can be run without personal information, which would cause most of the privacy issues to evaporate. The GDPR distinguishes between anonymization, the process of permanently removing personal identifiers so that an individual can no longer be identified, and pseudonymization, which replaces or removes identifying information in a data set but leaves open the possibility of re-identification. Under the CCPA, personal information does not include consumer information that is deidentified, meaning information that cannot reasonably identify a particular consumer, or aggregate consumer information, meaning information that relates to a group of consumers and cannot be reasonably linked to any individual consumer or household. These exceptions may hold the keys to how AI systems can be trained on personal information without violating privacy laws.
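To make the distinction concrete, here is a minimal Python sketch of the two approaches described above. The record fields, the salt, and the helper names are all hypothetical illustrations, not a legal standard: the pseudonymized output can still be re-linked to a person by whoever holds the key (so it remains personal data under the GDPR), while the aggregated output keeps only group-level counts, closer in spirit to the CCPA's "aggregate consumer information" carve-out.

```python
import hashlib

# Hypothetical consumer records; field names are assumptions for illustration.
records = [
    {"name": "Alice Smith", "zip": "94105", "purchase": "laptop"},
    {"name": "Bob Jones", "zip": "94105", "purchase": "phone"},
]

# Held separately from the data set; whoever has it can re-identify tokens.
SECRET_SALT = "replace-with-a-secret-key"

def pseudonymize(record):
    """Replace the direct identifier with a keyed hash token.

    Because SECRET_SALT allows the token to be re-linked to the person,
    this is pseudonymization, not anonymization: the data is still
    personal data under the GDPR.
    """
    token = hashlib.sha256((SECRET_SALT + record["name"]).encode()).hexdigest()[:12]
    out = dict(record)
    out["name"] = token
    return out

def aggregate(records):
    """Drop individual rows and report only group-level counts.

    No output row can be linked back to a particular consumer or
    household, which is the idea behind aggregate consumer information.
    """
    counts = {}
    for r in records:
        counts[r["purchase"]] = counts.get(r["purchase"], 0) + 1
    return counts
```

Note that real-world deidentification is harder than this sketch suggests: quasi-identifiers such as the ZIP code left in the pseudonymized records can still enable re-identification when combined with outside data.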
More information can be found in this webinar, originally part of Morgan Lewis’s AI Boot Camp.