
The Uniform Personal Data Protection Act Is Here

In July 2021, the Uniform Law Commission (“ULC”) voted to approve the Uniform Personal Data Protection Act (“UPDPA”). The UPDPA is a model data privacy bill designed to provide a template for states to introduce to their own legislatures, and ultimately, adopt as binding law. 

The UPDPA 

The UPDPA would govern how business entities collect, control, and process the personal and sensitive personal data of individuals. The model bill has been in the works since 2019 and reflects the input of advisors, observers, the Future of Privacy Forum, and other stakeholders. The ULC's backing is significant: the ULC has promulgated other model laws, such as the Uniform Commercial Code, that have been adopted broadly across the states. 

Interestingly, the model bill is much narrower than some of the recent state privacy laws, such as the California Privacy Rights Act and Virginia’s Consumer Data Protection Act. Namely, the model bill would provide individuals with fewer, and more limited, rights, such as the rights to copy and correct personal data. It does not include a right to delete personal data or a right to request the transmission of personal data to another entity. The bill also does not provide a private cause of action under the UPDPA itself, although it would not affect a state’s preexisting consumer protection law if that law authorizes a private right of action. If passed, the law would consequently be enforced by the state’s Attorney General. 

Applicability 

The UPDPA would apply to the activities of a controller or processor that conducts business in the state, or produces products or provides services purposefully directed to residents of the state, and that meets at least one of the following criteria (a simplified sketch of this applicability test appears after the list): 

(1) during a calendar year maintains personal data about more than [50,000] data subjects who are residents of this state, excluding data subjects whose data is collected or maintained solely to complete a payment transaction; 

(2) earns more than [50] percent of its gross annual revenue during a calendar year from maintaining personal data from data subjects as a controller or processor; 

(3) is a processor acting on behalf of a controller the processor knows or has reason to know satisfies paragraph (1) or (2); or 

(4) maintains personal data, unless it processes the personal data solely using compatible data practices. 
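
To make the structure of that test concrete, here is a minimal sketch of how the thresholds might be expressed in code. It is an illustration only, not part of the model act: the Entity class, its field names, and the updpa_applies function are hypothetical, and the numeric values mirror the act's bracketed placeholders, which each adopting state would fill in for itself.

```python
# Hypothetical sketch of the UPDPA applicability thresholds described above.
# The Entity class and its field names are illustrative assumptions; the
# numeric thresholds are the model act's bracketed placeholder values.

from dataclasses import dataclass


@dataclass
class Entity:
    in_scope_activity: bool                  # conducts business in, or targets residents of, the state
    resident_data_subjects: int              # data subjects maintained in a calendar year, excluding
                                             # those whose data is kept solely to complete payments
    revenue_share_from_personal_data: float  # share of gross annual revenue from maintaining personal data
    processor_for_covered_controller: bool   # knows or has reason to know the controller meets (1) or (2)
    maintains_personal_data: bool
    uses_only_compatible_practices: bool


DATA_SUBJECT_THRESHOLD = 50_000   # bracketed placeholder in the model act
REVENUE_SHARE_THRESHOLD = 0.50    # bracketed placeholder in the model act


def updpa_applies(e: Entity) -> bool:
    """Rough reading of the four alternative applicability criteria."""
    if not e.in_scope_activity:
        return False
    return (
        e.resident_data_subjects > DATA_SUBJECT_THRESHOLD                        # criterion (1)
        or e.revenue_share_from_personal_data > REVENUE_SHARE_THRESHOLD          # criterion (2)
        or e.processor_for_covered_controller                                    # criterion (3)
        or (e.maintains_personal_data and not e.uses_only_compatible_practices)  # criterion (4)
    )
```

Under criterion (3), for example, a processor would be covered if it acts for a controller it knows, or has reason to know, crosses the data-subject or revenue threshold, even if the processor itself does not.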

The UPDPA defines “personal data” as a record that identifies or describes a data subject by a direct identifier or is pseudonymized data. The term does not include deidentified data. The bill also defines “sensitive data” as a category of data separate and apart from mere “personal data.” “Sensitive data” includes such information as geolocation in real time, diagnosis or treatment for a disease or health condition, and genetic sequencing information, among other categories of data. 

The law would not apply to state agencies or political subdivisions of the state, or to publicly available information. There are other carve-outs, as well. 

Notably, the model bill also contains several different levels of “data practices,” broken down into three subcategories: (1) a compatible data practice; (2) an incompatible data practice; and (3) a prohibited data practice. Each subcategory comes with a specific mandate about the level of consent required, if any, to process certain data. For example, a controller or processor may engage in a compatible data practice without the data subject’s consent, on the theory that such a practice is consistent with the “ordinary expectations of data subjects or is likely to benefit data subjects substantially.” Section 7 of the model bill lists the factors used to determine whether processing is a compatible data practice, including the data subject’s relationship to the controller and the extent to which the practice advances the economic, health, or other interests of the data subject. An incompatible data practice, by contrast, allows data subjects to withhold consent for personal data (an “opt-out” right) and cannot be used to process sensitive data without affirmative express consent in a signed record for each practice (an “opt-in” right). Lastly, a prohibited data practice is one in which a controller may not engage. Data practices that are likely to subject the data subject to specific and significant financial, physical, or reputational harm, for instance, are prohibited. 
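
Viewed as a rough lookup table, the consent scheme described above reduces to something like the following sketch. It is a simplification for illustration only; under the model act, the Section 7 factors, not a fixed label, determine which category a particular practice falls into.

```python
# Simplified summary of the consent required under each UPDPA data-practice
# category, as described above. Illustrative only; not statutory text.
CONSENT_REQUIRED = {
    "compatible practice": "none; consent is not required",
    "incompatible practice (personal data)": "permitted unless the data subject opts out",
    "incompatible practice (sensitive data)": "affirmative express consent in a signed record (opt-in)",
    "prohibited practice": "not permitted; a controller may not engage in the practice",
}
```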

The model bill has built in a balancing test meant to gauge the amount of benefit or harm conferred upon a data subject by a controller’s given data practice, and then limits that practice accordingly. 

What’s Next

After final amendments, the UPDPA will be ready for introduction in state legislatures by January 2022. This means that versions of the bill can, and likely will, be adopted by several states over the next couple of years, and may eventually bring some degree of uniformity to the states’ privacy laws. 


Krishna A. Jani, CIPP/US, is a member of Flaster Greenberg’s Litigation Department focusing her practice on complex commercial litigation. She is also a member of the firm’s cybersecurity and data privacy law practice groups. She can be reached at 215.279.9907 or krishna.jani@flastergreenberg.com.

Cybersecurity & Data Privacy Legislative Updates

Since the passage of the California Consumer Privacy Act (“CCPA”) in 2018, there has been a flurry of proposed state laws aimed at regulating cybersecurity and data privacy in the absence of comprehensive federal legislation. There has also been a renewed focus on legislation at the federal level. Here’s an overview of some recently proposed federal bills, along with recently proposed and passed state laws that may actually have a shot at success.

Federal Privacy Legislation

Information Transparency and Personal Data Control Act (2021)

This Act is the first of its kind to be introduced in 2021 and would create protections for the processing of personal information. Under the Act, businesses would be required to offer consumers an opt-out consent mechanism for the collection, processing, and sharing of non-sensitive information. For the collection, sale, sharing, or other disclosure of sensitive personal information, however, companies would be required to obtain consumers’ “affirmative, express, and opt-in consent.” 

The proposed law defines “sensitive personal information” as financial account numbers and authentication credentials, such as usernames and passwords; health information; genetic data; any information pertaining to children under the age of 13; Social Security numbers and any “unique government-issued identifiers”; precise geolocation information; the content of oral or electronic communications, such as email or direct messaging; personal call detail records; biometric data; sexual orientation, gender identity, or intersex status; citizenship or immigration status; mental or physical health diagnoses; religious beliefs; and web browsing history and application usage history.

Notably, information that is classified as deidentified, public information, and employee data would not fall under the definition of “sensitive personal information.” Written or verbal communication between a controller and a user for a transaction concerning the provision or receipt of a product or service would also not be considered sensitive data.

Additionally, data controllers would be responsible for informing processors or third parties about the purposes and limits to the specific consent granted but would not be liable for processors’ failure to adhere to those limits.

Moreover, the law would provide additional rulemaking authority to the Federal Trade Commission to devise requirements for entities that collect, transmit, store, process, sell, share, or otherwise use the sensitive personal information of members of the public.

This Act would not provide consumers with a private right of action. Instead, it directs the Attorney General to notify controllers of alleged violations and provide them with 30 days to cure non-willful violations of this Act before commencing an enforcement action.

For more information on recently proposed federal legislation, including bills crafted to address the COVID-19 pandemic, see my pieces on the Exposure Notification Privacy Act, the Public Health Emergency Privacy Act, and the COVID-19 Consumer Data Protection Act.

State Privacy Legislation

Unlike comprehensive national laws such as the GDPR, which applies broadly to personal data across sectors, state laws in the U.S. typically carve out exceptions for certain types of data, such as health information already subject to HIPAA. The laws outlined below largely follow this pattern.

The following states have recently passed, or proposed, cybersecurity and data privacy laws.

State | Legislation | Status | Private Right of Action
California | California Privacy Rights Act | Passed by ballot initiative in November 2020 | Limited
Virginia | Consumer Data Protection Act | Signed into law on March 2, 2021 | No
Washington | Washington Privacy Act | Pending | No (not in the 2021 version)
Florida | Florida Privacy Protection Act | Pending | Yes
New York | New York Privacy Act; Biometric Privacy Act | Pending | Yes; Yes
Oklahoma | Computer Data Privacy Act | Passed by the House | No

The California Privacy Rights Act (“CPRA”) is a ballot initiative that amends the CCPA and adds further privacy protections for consumers. It was passed in November 2020, and the majority of its provisions will enter into force on January 1, 2023, with a look-back to January 2022.

Virginia’s law is similar to the still-pending Washington Privacy Act and includes provisions that are akin to the CCPA.

Other states like Oregon and Minnesota have also proposed privacy and security legislation in recent months.

Don’t forget to catch Krishna Jani’s presentation at PBI’s upcoming Cyberlaw Update on Thursday, April 29, 2021!


Krishna A. Jani, CIPP/US, is a member of Flaster Greenberg’s Litigation Department focusing her practice on complex commercial litigation. She is also a member of the firm’s cybersecurity and data privacy law practice groups. She can be reached at 215.279.9907 or krishna.jani@flastergreenberg.com.

Disinformation, Mob Mentality, And Federal Privacy Legislation

Will the disinformation that led to a mob surrounding the Capitol Building help drive federal privacy legislation?

Here’s why I think it will.

Disinformation

It is no secret that the internet is rife with information, some legitimate and, inevitably, some not. In many ways, social media and the rise of new platforms for sharing information contribute to the spread of disinformation. Disinformation is false information that is intended to mislead; misinformation, by contrast, is false information that spreads regardless of any intent to mislead.

Disinformation can damage both individuals and businesses because it can be difficult to discern the difference between evidence-backed information and disinformation. This very issue arguably resulted in thousands of people surrounding the Capitol Building in Washington, D.C. on January 6, 2021.

The Role of the Internet and Social Media

Though many platforms likely contributed to the widespread disinformation that led to a mob storming the Capitol Building, certain platforms have a significantly greater impact than others. For example, with more than two billion users worldwide, Facebook has unprecedented reach, and that reach has created a near-monopoly on certain types of information and on how that information is shared. Small businesses often rely on Facebook to find customers. Content creators use it to create visibility for their work. Software developers seek to attract customers on the platform. Media outlets use it to share news articles. The list goes on.  

Platforms like Facebook use the details of personal profiles to gauge which content they believe a particular user will find enticing. The platform then calibrates the user’s feed accordingly, in an effort to maximize the amount of time the user stays online. The result is that the information appearing in our feeds is informed, to at least some degree, by what our friends and network contacts post and consume. It is shaped, to a much larger degree, by the platforms’ algorithms.

This is precisely the point at which data privacy, personal autonomy, and democracy intersect.

The Problem and Ways to Avoid the Spread of Disinformation

Disinformation can harm businesses in a myriad of ways. Incorrect news, negative social media posts, and even overtly false consumer reviews can adversely impact a company’s bottom line.

Successful companies understand their markets, their customers, and their partners. They also need to understand how their brand is perceived by users of social media, which can be achieved by using in-house technology or by hiring an outside firm. By doing so, companies can get advance warning of an individual’s or group’s efforts to spread disinformation about their brand. To the extent a business participates in e-commerce and has a social media presence, it should establish verified accounts on the major platforms and use them regularly to reach its markets.

Other tools businesses can use to avoid the spread of disinformation are: self-assessing, preparing for incident response, and communicating directly with their customers. In addition, data ethics should be incorporated into decision-making along with business motivation, technological practicality, and legal compliance.

How Federal Privacy Legislation Could Help

The federal government has no agency charged with regulating or helping to quell the spread of disinformation, and no single official within the government is in charge of an overall disinformation policy. The United States needs a comprehensive approach to the risks generated by data. Accordingly, any effective federal privacy regime must account for how data is handled throughout the entire data governance lifecycle.

The business community has plenty of reasons to support federal privacy legislation. For one, a single piece of comprehensive legislation reduces confusion surrounding compliance. Second, one law to rule them all would likely preempt many of the states’ piecemeal legislative efforts. Lastly, in the wake of the Schrems II decision, passing a commercial privacy law would considerably improve the atmosphere for ongoing negotiations with the European Union over transborder data flows.

It is also worth noting that some of the largest markets in the world, such as China, India, Brazil, and Canada, are moving toward comprehensive data protection laws. Adopting a similar comprehensive law would solidify the United States’ position as a world leader in data privacy.

The goal of any federal privacy legislation should be to preserve the most beneficial aspects of social media platforms while protecting individuals and businesses from the platforms’ more harmful impacts. Most pending federal bills include the basics: data access, deletion rights, and portability. The next step will be to incorporate protections against disinformation.

Krishna A. Jani is a member of Flaster Greenberg’s Litigation Department focusing her practice on complex commercial litigation. She is also a member of the firm’s cybersecurity and data privacy law practice groups. She can be reached at 215.279.9907 or krishna.jani@flastergreenberg.com.
