TECHNOLOGY, OUTSOURCING, AND COMMERCIAL TRANSACTIONS
NEWS FOR LAWYERS AND SOURCING PROFESSIONALS

In this month’s Contract Corner, we are highlighting considerations for drafting an up-to-date privacy policy. In Part 1 of this series, we provided background on the general legal landscape for privacy policies in the United States and general issues that need to be addressed for an up-to-date policy. In this Part 2, we will provide some specific pointers on drafting, updating, and disclosing such policies.

Additional Information to Include

In addition to the items we listed in Part 1 that should generally be covered in every privacy policy, the following additional items may need to be set out in your specific privacy policy:

  • Directions for customers to access and update data (e.g., password resets, contact information updates, and mechanisms for unsubscribing)
  • Contact details or other means of reaching persons in your organization who can address user queries or concerns
  • Information regarding notifications when the privacy policy is updated (see below for considerations when reviewing and updating your policy)
  • Mechanisms for users to agree to and accept the terms of the privacy policy, as well as means for users to opt out

Drafting and posting a clear, concise, and accurate privacy policy is one of the most important tasks when creating a company’s website, particularly given today’s legal and regulatory environment. Legal requirements for privacy policies are becoming more stringent, shortcomings are less tolerated, and consumer sensitivity to privacy concerns is at an all-time high.

Despite these concerns, many companies’ policies are seemingly insufficient. A recent opinion piece published as part of the New York Times’ Privacy Project assessed 150 privacy policies from various companies and found that the vast majority of them were incomprehensible to the average person. At best, these seem to have been “created by lawyers, for lawyers” rather than as a tool for consumers to understand a company’s practices.

In this month’s Contract Corner, we will highlight considerations for drafting an up-to-date privacy policy. Part 1 of this month’s Contract Corner will provide background on the current legal landscape for privacy policies in the United States and general issues that need to be addressed.

Does your website or application collect user data? Does your company sell that user data to third parties, such as advertisers? Does your company disclose this practice to your users in a privacy policy or terms of use? If you answered yes to these questions, you are most certainly not alone. But is your disclosure sufficient? That is the question a new challenge is poised to answer.

As 2018 comes to a close, we have once again compiled all the links to our Contract Corner blog posts, a regular feature of Tech & Sourcing @ Morgan Lewis. In these posts, members of our global technology, outsourcing, and commercial transactions practice highlight particular contract provisions, review the issues, and propose negotiating and drafting tips. If you don’t see a topic you are interested in below, please let us know, and we may feature it in a future Contract Corner.

In Part 1 of this series, we provided an overview of data (or knowledge) commons and some key issues to consider, but how does one actually create and manage a data commons? To find your feet in this budding field, build on the theoretical foundation; address the specific context (including perceived objectives and constraints); deal with the thorny issues (including control and change); establish a core set of principles and rules; and, perhaps most importantly, plan for and enable change.

You may have heard of the “tragedy of the commons,” where a resource is depleted through collective action, but knowledge is different from other resources—knowledge can be duplicated, aggregated, integrated, analyzed, stored, shared, and disseminated in countless ways. Given that knowledge is a critical resource for seemingly intractable problems, the opportunity of the commons (or the tragedy of the lack of commons) is worth thoughtful consideration.

Imagine that you or a loved one is suffering from a terminal or debilitating disease and that data and knowledge are out there, waiting to be combined and harnessed for a cure or a transformational treatment. Imagine that self-interest (including attribution), legal restrictions (including intellectual property protections), inertia, complexity and difficulty of collective action, and other weighty forces are between you and that breakthrough discovery. Though not a new concept, commons have been garnering attention lately as an alternative framework for catalyzing groundbreaking research and development, particularly when relevant data and knowledge are scattered and particularly in the life sciences community. But before we all throw away our patents and data-dump our trade secrets, there are some thorny aspects to governing a data (or knowledge) commons. For example:

  • A commons is essentially its own society. Anyone who has been part of a homeowners’ association knows that collective governance is almost always muddy. Aligning incentives, objectives, and values can be challenging.
  • Founders may have trouble relinquishing control or enabling change. Participants may become confused or upset if rules or priorities change.
  • Commons are not as well understood or tested as traditional governance models. They must coexist with, and within, other systems that may be more rigid and rules-based. Participants may be logistically, intellectually, and otherwise tied to traditional methods and may prefer semi-exclusive zones rather than open collaboration.
  • It may be difficult to measure the effectiveness or value of commons.
  • Policing activities (e.g., authentications or restrictions) may be burdensome. And once the cat is out of the bag, it’s difficult to undo uses or disclosures.
  • Commons managers may not be willing to take on certain responsibilities or liabilities that would make participants more comfortable.
  • Different types of information and tools have different levels of sensitivity and protection. Certain information, like personal data, is highly regulated.

Scholars have taken theoretical frameworks built for natural resources and adapted them to the data commons setting. Key findings include that data commons must be designed to evolve and that communities with high levels of shared trust and values are most likely to succeed. Whereas governance through exclusivity (e.g., patents) is useful when trust levels are low, a resource sharing governance model (e.g., commons) can be effective when trust levels are high.

If you’d like to know more:

  • We will be hosting a webinar with one of the aforementioned scholars—Professor Michael J. Madison, faculty director at PittLaw—on Tuesday, December 18, 2018, from 12:00 pm to 1:00 pm ET. Register and join us for the discussion.
  • In a subsequent post, we will provide some tips and considerations with respect to drafting policies, standard terms, data contribution agreements, and other governing documents for data commons.

Knowledge sharing has long been an important element of academic research. And now collective sharing and governance of data assets throughout the scientific community, including for-profit participants, is gaining momentum. During their webinar, Out in the Open: The Knowledge Commons Framework, Emily Lowe, Ben Klaber, and Professor Michael J. Madison, faculty director at PittLaw, will discuss issues related to knowledge commons. Topics will include the following:

  • A fundamental overview of knowledge commons, including the framework’s strengths and weaknesses
  • Standard requirements regarding data contribution, access, use, sharing, protection, and attribution
  • How to decide if a knowledge commons framework is right for your business, and if so, how to implement it successfully

Earlier this week, the US Federal Trade Commission (FTC) settled a complaint against the operator of an online talent search company. The complaint asserted that the company’s collection and disclosure of children’s personal information violated the Children’s Online Privacy Protection Act (COPPA) because the company failed to obtain parental consent, provide adequate notices, and implement the restrictions COPPA requires. Under the terms of the settlement, the company agreed to pay $235,000.

The FTC’s complaint asserted that the company collected the information of more than 100,000 children under age 13, but failed to disclose to parents or the public how that data was collected, used, or disclosed. Though the website privacy policy stated that the company would not knowingly collect personal information from children under 13, according to the FTC’s complaint, the company imposed no restrictions on users who indicated they were under the age of 13 and did not take steps to verify whether legal guardians were creating the children’s accounts. According to the complaint, much of the information collected was available on publicly visible user profiles.

Galvanized by a confluence of charged factors—like privacy, cybersecurity, children, and the Internet of Things (IoT)—and sparked by recent assertions of Children’s Online Privacy Protection Act (COPPA) regulatory power, the US Federal Trade Commission (FTC) entered into a pioneering settlement with electronic toy manufacturer VTech regarding a breach of children’s personal information. The FTC’s message to companies is crystal clear: when it comes to kids’ data, transparency and security are elemental.

Scarce Insulation from COPPA

The COPPA Rule explains what operators of websites and online services must do to protect children’s privacy and safety online, and the FTC serves as the enforcer. As we previously discussed, the FTC released updated guidance in response to concerns about the security of data collected and used by internet-connected products geared toward children. The FTC noted that COPPA defines “website or online service” broadly and specifically listed connected toys and IoT devices within the COPPA Rule’s purview. Although the FTC released a policy that permits collecting a recording of a child’s voice without parental consent in certain situations, such circumstances are narrowly limited to the sole and limited purpose of replacing written words—say, an instruction—and the recording must be immediately destroyed.

Upcoming Webinar

October 17, 2017

On October 25, A. Benjamin Klaber, a lawyer in our technology, outsourcing, and commercial transactions group, will be co-presenting a CLE webinar, “Drafting Website and Mobile App Terms of Use, Privacy Policy, and IP Protections.” The webinar will offer guidance on drafting and enforcing terms of use, privacy policies, and IP protection language for websites and mobile apps to effectively mitigate business risk.

The webinar will take place from 1:00–2:30 pm EDT. 

Additional information regarding the webinar, registration, and CLE credits can be found here.