In light of Hively, Evans, and Zarda, this Feature argues that Title VII’s bar on discrimination “because of sex” applies to LGBT individuals. This interpretation follows from Title VII’s ordinary meaning, particularly in light of its purpose of entrenching a merit-based workplace, as well as its statutory history.
The orthodox view is that statutory captions and titles should not inform interpretation. However, a more nuanced method distinguishes between Congress’s codification choices and those made by the Office of the Law Revision Counsel. While the latter are rightly disregarded, judges should use the former to determine congressional intent.
Earlier this year, the FTC’s staff released a series of blog posts entitled Stick with Security that updated and expanded upon the prior Start with Security best-practices guide for information security. The Stick with Security series draws from FTC complaints, consent orders, closed investigations, and input from companies around the country to provide deeper insights into the ten principles articulated in the Start with Security guide. These guidelines serve as a set of minimum recommended standards for “reasonable” data security practices by organizations with access to personal data (i.e., information related to consumers and employees), although they can be applied to other types of data as well. The recommendations are not legal requirements, of course, but it can be useful for companies to consider the views of the FTC’s staff on the practices that are likely to be seen by the FTC as “reasonable.” This post summarizes the recommendations made by the FTC’s staff in the Stick with Security series.
Access and Authentication
Require credentials to authenticate users and securely store credentials on systems. Webpages and other connected systems that store or process personal data should reside behind a network layer that requires authentication, and should not be directly accessible, without credentials, from the Internet or from less sensitive parts of a network. Credentials should not be stored in plain text (e.g., storing passwords in documents or email folders), and companies should train employees to avoid disclosing credentials in response to phishing schemes and other requests (e.g., over-the-phone password changes).
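The guidance against storing credentials in plain text can be illustrated with a short sketch: rather than storing passwords themselves, a system stores only a random salt and a salted one-way hash, then recomputes the hash at login. This uses Python's standard-library PBKDF2; the function names and iteration count are illustrative, not drawn from the FTC guidance.

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative work factor; tune to current guidance

def hash_password(password: str) -> tuple:
    """Derive a salted hash; only the salt and hash are stored, never the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Recompute the hash and compare in constant time to resist timing attacks."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)
```

A leaked database of salts and hashes is far less useful to an attacker than a document of plain-text passwords, which is the failure mode the FTC staff highlights.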
Implement complex password requirements and prevent brute force attacks. Companies should require strong, unique passwords and establish a system to monitor for and prevent brute force attacks to minimize the risk of password cracking by attackers. Companies should also immediately change default passwords after installing new software, applications, or hardware.
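A minimal sketch of the brute-force monitoring the staff recommends is an account-level lockout: track recent failed attempts per username and refuse further attempts once a threshold is crossed. The thresholds and function names below are hypothetical choices for illustration.

```python
import time
from collections import defaultdict

MAX_ATTEMPTS = 5        # illustrative threshold
LOCKOUT_SECONDS = 300   # illustrative lockout window

_failures = defaultdict(list)  # username -> timestamps of recent failures

def allow_login_attempt(username: str) -> bool:
    """Reject the attempt if the account has too many recent failures."""
    now = time.monotonic()
    recent = [t for t in _failures[username] if now - t < LOCKOUT_SECONDS]
    _failures[username] = recent
    return len(recent) < MAX_ATTEMPTS

def record_failure(username: str) -> None:
    """Call after each failed credential check."""
    _failures[username].append(time.monotonic())
```

Production systems would typically persist this state and also rate-limit by source IP, but even this simple counter makes online password cracking dramatically slower.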
Limit privileged access throughout the enterprise. Privileged or administrative access should be granted only to a small number of users. Such users should each have individualized login credentials that provide limited privileged access to only those systems, processes, or data that are necessary to perform a legitimate business purpose.
Require multi-factor authentication for accounts with access to personal data. Companies should not rely solely on username and password credentials for permitting access to personal data; rather, they should require a second form of authentication (e.g., an authentication application, a key fob, a USB security key, or a code received via a voice call or text message) for users accessing personal data or systems that can access personal data.
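The authenticator-app factor mentioned above is typically a time-based one-time password (TOTP) per RFC 6238. A compact sketch of the verification side, using only the Python standard library, looks like this:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at=None, digits: int = 6, step: int = 30) -> str:
    """Compute a time-based one-time password (RFC 6238, HMAC-SHA1).

    `at` is a Unix timestamp; defaults to the current time.
    """
    counter = int(at if at is not None else time.time()) // step
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    mac = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                               # dynamic truncation
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return f"{code:0{digits}d}"
```

The server shares the secret with the user's authenticator app at enrollment and compares the submitted code against `totp(secret)` (usually allowing one step of clock drift). Because the code changes every 30 seconds, a phished password alone is not enough to log in.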
Only grant access as needed for the performance of job duties. Companies should grant access to personal data (or systems that process personal data) only to specific user accounts, and only at the minimum access levels necessary to satisfy business needs.
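In code, this least-privilege principle usually takes the form of a default-deny permission check: access is allowed only if it has been explicitly granted to the user's role. The roles and permission names below are invented for illustration.

```python
# Minimal role-to-permission mapping; all names here are hypothetical.
ROLE_PERMISSIONS = {
    "support_agent": {"read_customer_record"},
    "billing_admin": {"read_customer_record", "update_billing_data"},
}

def is_authorized(role: str, action: str) -> bool:
    """Default-deny: permit an action only if explicitly granted to the role."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

The key design choice is that an unknown role or an unlisted action yields `False`, so new data or new job functions never gain access by accident; someone must affirmatively grant it.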
Immediately revoke access upon change of circumstance. When an employee leaves or moves positions, a vendor’s contract expires, or specific types of access are otherwise no longer needed, that access should be immediately revoked to prevent unauthorized access.
General Data Security
Understand the lifecycle of personal data throughout your network and apply appropriate security measures at each stage. Each company should be aware of how data enters and exits, moves within, and is stored throughout the company in order to implement appropriate security protections at each stage. Companies should also consider the level of care appropriate when transferring personal data, and whether to encrypt personal data in transit and/or at rest within a corporate network, including inbound and outbound transmissions.
Properly configure industry-tested and accepted security methods. With many security options available in the market, companies should consider choosing options that are consistent with industry standards rather than necessarily unique ones. Additionally, companies should configure security controls in a manner that is consistent with manufacturer specifications and that has been properly tested, including following major platform security guidelines for developers.
Only collect and use data as needed. Limiting data collection and use to what is necessary to meet business needs not only minimizes cybersecurity risks, but may also reduce the cost and logistical complexities of storing and maintaining large quantities of data.
Periodically review, assess, and (if needed) securely delete data. To ensure personal data is not unnecessarily retained, a company should periodically review the data it holds to assess whether the data is still necessary for a legitimate business need, and if the data is no longer needed, securely delete the data from all applicable systems. Secure deletion methods should pr..
The existing approaches to conflicts of state search-and-seizure laws are either theoretically or practically flawed. When a search implicates multiple states’ laws, courts should undertake a two-step analysis. First, they should determine whether a conflict exists; and second, they should apply the law of the officer who performed the search.
FCC Chairman Ajit Pai announced today that at its December 14 open meeting, the FCC will vote on an overhaul of the net neutrality framework adopted by the prior Administration in 2015. The full text of the draft order will be released tomorrow, but Chairman Pai has made certain key details known today. The order envisions an expanded role in oversight of Internet Service Providers (“ISPs”) by the Federal Trade Commission—a move which Acting FTC Chairman Maureen Ohlhausen welcomed.
First, as anticipated, ISPs will again be classified as providers of “information services” under Title I of the Communications Act, rather than “telecommunications services” under Title II. In many ways, in recent years the net neutrality debate in the U.S. has been as much—or some would say, more—about this statutory classification question than it has been about specific net neutrality rules.
Indeed, it was following the Title II classification that the FCC adopted rules governing ISPs’ privacy practices, which were repealed by Congress earlier this year. The Title I classification, in contrast, will, if adopted, put ISPs within the privacy jurisdiction of the Federal Trade Commission, as Chairman Pai emphasized in today’s announcement.
Second, perhaps reflecting the difficulty of sustaining bright-line net neutrality rules under Title I (following court decisions in 2010 and 2014), the draft Order will repeal the “no blocking,” “no throttling,” and “no paid prioritization” rules. It also will repeal the general conduct standard for assessing other practices that might implicate net neutrality on a case-by-case basis.
Third, the draft Order would retain some form of transparency requirements for ISPs. The notion would be that ISPs will be under an obligation to make certain public disclosures about their practices. If an ISP violates those disclosures, the Federal Trade Commission presumably would be able to pursue an enforcement action under its general Section 5 authority to police unfair or deceptive acts or practices—though the extent of the FTC’s authority to take such action against entities that operate a common carrier affiliate currently is being examined by the Ninth Circuit.
It therefore appears that under the draft Order ISPs’ net neutrality practices would be regulated in a manner very similar to the regime that would govern their privacy practices. That is, there would be no specific rules governing what an ISP can or cannot do in managing traffic on its network. The FTC, however, may be able to step in if the ISP fails to keep a promise about how it manages traffic on its network.
Whether such a framework will or will not be sufficient to maintain an open internet will be hotly debated in the lead-up to the FCC’s vote on December 14, and beyond.
The White House released on November 15, 2017 the Vulnerabilities Equities Policy and Process for the United States Government (“VEP”) — the process by which the Government determines whether to disseminate or restrict information about new, nonpublic vulnerabilities that it discovers. This release was motivated by criticism following the allegations that significant cyber-attacks have exploited vulnerabilities withheld by the Government, concerns that the Government is exploiting vulnerabilities instead of alerting vendors to fix them, and general calls for transparency in the process.
According to the newly-released documents, the VEP is overseen by an Executive Secretariat (a role filled by the National Security Agency) and the final decision about whether to disseminate or restrict vulnerability information is made by an interagency Equities Review Board (“ERB”). The VEP is initiated when an agency submits a newly discovered and not publicly known vulnerability and provides its recommendation on whether to disseminate or restrict the information. Any other agencies claiming an equity in the vulnerability must concur or disagree with the recommendation. The ERB considers the opinions, renders a final decision, and the vulnerability is either disseminated or restricted.
The ERB’s determinations are based on the balancing of four groups of equities: (1) defensive; (2) intelligence, law enforcement, and operational; (3) commercial; and (4) international partnership. Specific considerations include: whether and how threat actors will exploit the vulnerability, the potential harm caused by exploitation, the likelihood of effective mitigation, whether the vulnerability can be exploited to serve an intelligence or law enforcement purpose, and risks to the Government’s relationship with industry and international relations.
The Federal Trade Commission (“FTC”) is soliciting public comments on a petition filed by Sears Holdings Management (“Sears”) to reopen and modify a 2009 FTC order regarding the tracking of personal information by its software apps. The petition is notable for a number of reasons. First, the Sears consent order was a seminal order in the development of the FTC’s privacy jurisdiction, standing for the proposition that a company cannot “bury” disclosures that consumers would not expect in long privacy notices. Second, the concept of modifying 20-year consent orders is an important one in light of changes over time. Third, the petition seeks to correct the unintended consequences that a consent order can have on future technologies when such an order regulates present ones.
In the 2009 FTC order, Sears settled charges that it failed to disclose adequately the scope of consumers’ personal information it collected via a downloadable software app. As part of that 20-year consent order, Sears agreed to make certain disclosures and obtain consent in connection with its downloadable software app and future ones that “monitor, record, or transmit information.” The petition argues that the 2009 FTC order should be modified to update its existing definition of “tracking application,” presently defined as:
any software program or application . . . that is capable of being installed on consumers’ computers and used . . . to monitor, record, or transmit information about activities occurring on computers on which it is installed, or about data that is stored on, created on, transmitted from or transmitted to the computers on which it is installed.
The petition seeks to modify this definition to exempt information about “(a) the configuration of the software program or application itself; (b) information regarding whether the program or application is functioning as represented; or (c) information regarding consumers’ use of the program or application itself.”
The petition argues that this modification is necessary for three reasons. First, changed circumstances in the mobile app arena have rendered the 2009 FTC order’s broad definition of “tracking application” impracticable. The FTC’s original administrative complaint targeted Sears’ desktop software application, which could track users’ activities outside of its boundaries. Since then, software distribution has overwhelmingly shifted from desktop to mobile apps, which are distributed through two main online marketplaces (Apple’s App Store and Google Play). These marketplaces control “the manner and form” of disclosures to consumers relating to apps and impose restrictions on the collection of information from consumers, in concert with the FTC’s goals. According to Sears, the desktop software that led to the 2009 FTC order “would be impermissible under the rules of the two dominant mobile app stores,” but the additional disclosure requirements imposed on Sears by the order are onerous given that the app stores have a “standardized workflow” to allow consumers to review the app provider’s data collection, use, and sharing policies before downloading the apps.
Second, Sears argues that modifying the 2009 FTC order is in the public interest. Sears argues that while the order was “intended to protect consumers from undisclosed and invasive tracking of consumers outside of” its software, the obligations it imposes upon Sears “are poorly adapted to today’s mobile app ecosystem.” Under the 2009 FTC order, a user of multiple Sears apps must read and consent to nearly identical disclosures in each of those apps, and “no other competitor uses a similarly disruptive approach to mobile app disclosures.” Similarly, modification of the order’s definition would reflect the commonplace practices of data collection and intra-app activity sharing in today’s marketplace. Sears’ mobile apps share data with remote servers to fulfill consumer requests and collect data to support app security. Such practices, the petition asserts, are consistent with the FTC’s 2012 privacy report.
Third, the petition argues that the requested modification is consistent with more recent FTC precedent and priorities. The petition cites two FTC orders from 2012 and 2013 that exempted the specific types of information collection enumerated above. Modifying the 2009 FTC order to exempt tracking that is “necessary for the basic operation of mobile apps” would be consistent with consumer expectations and recent FTC guidance and regulations. Indeed, the petition claims that modifying the definition of “tracking application” would leave intact the order’s “core continuing mandate—to provide notice to consumers when software applications engage in potentially invasive tracking.”
The petition will be subject to public comment through December 8, 2017. After that time, the Commission will decide whether to approve Sears’ petition to modify the definition of “tracking application” in the 2009 FTC order.
By John G. Buchanan and Marialuisa S. Gallozzi
Although the National Cybersecurity Awareness Month of October has come to a close, it is not too late for corporate counsel and risk managers to be thinking about cyber-risk insurance — an increasingly essential tool in the enterprise risk management toolkit. But a prospective policyholder purchasing cyber insurance for the first time may be hard-pressed to understand what coverage the insurer is selling and whether that coverage is a proper fit for its own risk profile. With little standardization among cyber policies’ wordings, confusing labels for their covered perils, and little interpretive guidance from case law to date, a cyber insurance buyer trying to evaluate a new proposed policy may hardly know where to focus first.
After pursuing coverage for historically major cyber breaches and analyzing scores of cyber insurance forms over the past 15 years, we suggest the following issues as a starting point for any cyber policy review:
Push your limits. Although total cyber limits up to $500 million are reportedly available in the insurance marketplace, many major companies’ cyber programs top out at much less. Our experience teaches that even limits of $100 million might fall far short of the total losses from an historically major data breach. Tip: If your company’s principal concern is protection against catastrophic cyber exposures, then consider a higher self-insured retention and build the highest tower of limits above that retention that you can afford.
Beware of sublimits. Many cyber policies cap particular kinds of loss at amounts less than the total policy limit. For example, some insurers sublimit coverage for regulatory and Payment Card Industry (PCI) expenses; in a claim for a major payment card breach, these sublimits can generate disputes over how various expenses are characterized and can complicate the timing and presentation of losses. Tip: Some primary insurers are willing to set full-policy limits for all or most of the coverage grants principally involved in a typical payment card breach. Negotiate as few sublimits as commercially feasible. Trap: Some endorsements purporting to cover ransomware are effectively exclusions masquerading as coverage grants with small sublimits. Ransomware already falls within the scope of “cyber extortion” coverage grants in many cyber forms; don’t accept a ransomware-specific endorsement without reviewing both the policy and the endorsement carefully.
Push back the Retro Date. Network intrusions are latent injuries: a hacker may be lurking on your system for months before you discover the breach. Most cyber policies exclude loss arising from events happening before a specified “retroactive date,” regardless of when loss is discovered. Tip: The default setting for the retro date is the first inception date of cyber coverage, but some insurers are willing to set it up to a year earlier. Negotiate the earliest retro date you can.
Get your cyber application right. Cyber-risk insurance applications typically consist of detailed and highly technical questionnaires, and many cyber policy forms expressly recite that statements in the application are incorporated by reference into the policy, material to the risk, and relied upon in issuing the policy. Trap: An insurer bent on denying a claim may pore through those questionnaires looking for misstatements that might provide a basis to void the policy. For example, the insurer’s complaint in Columbia Cas. v. Cottage Health (C.D. Cal., filed May 31, 2016) alleged that misstatements in the “Risk Control Self Assessment” included in the insured’s cyber insurance application provided grounds to rescind the policy. Tip: Cottage Health illustrates the importance of a careful application process. The company’s legal department, with the assistance of outside counsel as needed, should play an active role in coordinating IT and risk management input into the cyber application, which requires expertise from both functions. A particular challenge in many cyber insurance applications is the disclosure of prior cyber incidents, with attendant privilege concerns.
Mind the (coverage) gap, please. A policyholder must look across its entire insurance portfolio to consider whether significant gaps exist, and if so where. The connectedness of the Internet of Things is a prime example of the potential disconnectedness among common insurance programs. Most cyber policies exclude physical bodily injury and property damage, because traditionally conventional property and general liability policies covered such physical harms. Trap: Over the past decade cyber-related exclusions or restrictions have proliferated in standard property and liability policies. Tip: Major property insurers now commonly offer upgraded versions of their policies with cyber-related coverage extensions. More recently, specialty policies covering liability for “cyber-physical” losses have entered the marketplace. If the..
Ashden Fein’s Cybersecurity practice focuses on counseling clients who are preparing for and responding to cyber-based attacks on their networks, assessing their security controls and practices for the protection of data and systems, developing and implementing cybersecurity programs, and complying with federal and state regulatory requirements. Ashden has specifically been the lead investigator and crisis manager for multiple complex cyber and data security incidents, including data security breach matters involving millions of affected consumers, advanced persistent threats targeting intellectual property across industries, state-sponsored theft of sensitive U.S. government information, and destructive attacks.
Before joining the firm, Ashden served for thirteen years in the United States Army, first as a military intelligence officer and later as a Major in the Judge Advocate General’s Corps. While on active duty, he specialized as a military prosecutor, gaining significant experience investigating and prosecuting crimes related to national security and cybersecurity. In addition, Ashden served as the Chief of the Criminal Division for a command of 17,000 soldiers and as a legal advisor for an Army Aviation organization deployed in Iraq. He currently serves as a Judge Advocate in the U.S. Army Reserve.
While in the Army, you specialized as a military prosecutor where you gained significant experience in cybersecurity. For example, you were the lead trial attorney in the prosecution of Private Chelsea Manning for the unlawful disclosure of classified information to WikiLeaks. How did your time in the Army help inform your work on cybersecurity matters in private practice?
To fully investigate an insider threat who gained access to data across multiple enterprises, I had to develop an expertise in areas that are relevant to the issues faced by the firm’s clients today.
First, our team worked side-by-side with digital forensic investigators every day for nearly fifteen months to understand the different artifacts that existed and how they related to each other. Piecing together these artifacts—ranging from exploited digital images to decrypted memory drives to firewall and Internet search logs—allowed us to build a clear picture of what illicit activities took place that were attributable to Private Manning. Similarly, for cybersecurity attacks, it is critical for counsel to understand the evidence collected and push forensic firms and IT security departments to ensure all reasonable leads are investigated or a defensible judgment is made not to pursue such leads.
Second, to understand the evidence, I had to immerse myself in the different networking and cybersecurity technologies within the Department of Defense and those technologies generally leveraged by attackers on the Internet and dark web. Without this technical understanding, it would have been impossible to develop a deep understanding of the artifacts described above and to translate expert testimony from digital forensic investigators so that it could be easily understood by a trier of fact. In private practice, we routinely advise clients on the legal risks involved with deploying certain technologies, or making configuration changes to existing technology, which may impact regulatory compliance or potentially violate the law. Having an in-depth understanding of such technologies is crucial to providing advice that reasonably anticipates potential issues.
Third, our team had to develop close working relationships with many law enforcement and intelligence community organizations to assist in the investigation and the coordination to use classified information, including witness testimony, at trial. These relationships were vital in efficiently navigating the defense and intelligence interagency to receive appropriate approvals and support. After a cybersecurity incident, many clients find themselves voluntarily notifying different federal and state entities, including pursuant to DoD mandatory disclosure requirements covering defense contractors, or receiving information requests related to an incident. In those cases, having experience working with such government organizations with our government contracting team has benefited our clients.
Yan Luo advises clients on a broad array of regulatory matters in connection with cybersecurity and data protection rules in China. With previous work experience in Washington, DC and Brussels before relocating to Beijing, Yan has fostered her government and regulatory skills in all three capitals. She is able to strategically advise international companies on Chinese regulatory matters and represent Chinese companies in regulatory reviews in other markets.
Over the past two years, Yan has provided practical advice to clients on nearly all aspects of China’s Cybersecurity Law. She continues to help them navigate the complex and quickly evolving regulatory regime, including on issues arising out of personal information protection, cross border data transfers, and various cybersecurity requirements.
What provisions of China’s Cybersecurity Law have caused the greatest concern for U.S. companies? What advice do you have for these companies when it comes to compliance?
China’s Cybersecurity Law, the country’s “fundamental law” in the area of cybersecurity, was passed on November 7, 2016 and took effect on June 1, 2017. Many provisions of the Law have the potential to profoundly impact multinationals’ operations in China. However, Article 37, which discusses cross-border data transfers, may cause the greatest concern.
Article 37 requires that operators of Critical Information Infrastructure (“CII”) store “citizens’ personal information and important data” collected or generated in the course of operations within China. If offshore data transfers are necessary for operational reasons, a security assessment must be conducted by designated agencies, unless otherwise specified by laws and regulations. On the basis of this provision, the Cyberspace Administration of China (“CAC”) issued a draft implementing regulation, Measures on Security Assessment of Cross-Border Data Transfer of Personal Information and Important Data (the draft “Measures”), that extends certain cross-border transfer obligations to “network operators,” a much broader term than “CII operators.” “Network operator” is defined to include “owners and managers of networks, as well as network service providers.”
According to the draft Measures, companies that may potentially be classified as “network operators” will likely be obliged to conduct a security assessment analyzing risks arising from the transfer(s) of data collected in China to other countries. Regulators may potentially review such assessments from companies to determine whether Chinese data is offered adequate post-transfer protection. In order to avoid a potential disruption of data transfers, it is important for companies to perform a security assessment of cross-border data flows out of China and be ready for a regulator’s review, if and when it is required.