
Data Privacy Developments to Look Out for in the EU in 2023

Updated: May 23, 2023



Virtually every country has enacted some sort of data privacy law to regulate how information is collected, how data subjects are informed, and what control a data subject has over their information once it is transferred. Failure to follow applicable data privacy laws may lead to fines, lawsuits, and even a site being banned in certain jurisdictions. Navigating these laws and regulations can be daunting, but all website operators should be familiar with the data privacy laws that affect their users.

For the EU, the General Data Protection Regulation (GDPR) remains the law of the land, but new data privacy-related laws have recently been passed, most notably the Digital Services Act and the Digital Markets Act. There are also several proposals to be aware of in 2023.

Digital Services Act (DSA)


The new regulation addresses illegal and harmful content by compelling platforms such as Google and Facebook to remove content that doesn’t meet certain standards. The primary principle is “what is illegal offline must be illegal online,” according to the Council of the EU. The Digital Services Act (DSA) entered into force on November 16, 2022. Different provisions of the law will become effective at different times, with the law coming fully into force on February 17, 2024.

It applies to four categories of businesses:

  • Intermediary services offering network infrastructure, such as ISPs

  • Hosting services, such as cloud and web-hosting services

  • Online platforms that bring sellers and consumers together, such as online marketplaces, social platforms, and app stores

  • Very large online platforms, which are defined as online platforms that reach more than 10% of the 450 million consumers in Europe

Each category faces different requirements.

All of the above categories must:

  • Engage in transparency reporting on court orders and actions taken, content moderation efforts, and more

  • Update terms of service to account for fundamental rights

  • Cooperate with national authorities

  • Establish points of contact for authorities and, when necessary, legal representatives

Hosting services, online platforms, and very large online platforms must:

  • Provide a notice-and-action mechanism enabling users to flag potentially illegal content for the business to remove (a sketch follows this list)

  • Report criminal offenses
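
For illustration, here is a minimal TypeScript sketch of what a notice-and-action submission might look like. The field names, the in-memory queue, and the `submitNotice` helper are hypothetical; the DSA describes the mechanism’s purpose, not a specific API.

```typescript
// Hypothetical shape of a user-submitted notice of potentially illegal content.
interface IllegalContentNotice {
  contentUrl: string;    // exact location of the content in question
  explanation: string;   // why the notifier believes the content is illegal
  notifierName?: string; // contact details for follow-up
  notifierEmail?: string;
  goodFaith: boolean;    // notifier affirms the notice is accurate and made in good faith
}

// Minimal in-memory stand-in for a moderation review queue.
const moderationQueue: Array<IllegalContentNotice & { caseId: string }> = [];

function submitNotice(notice: IllegalContentNotice): { received: true; caseId: string } {
  const caseId = crypto.randomUUID(); // track the notice through moderation
  moderationQueue.push({ caseId, ...notice });
  return { received: true, caseId };
}
```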

Online platforms and very large platforms must:

  • Implement a complaint and redress mechanism

  • Identify trusted flaggers whose expertise adds special weight to their content notices

  • Take measures against abusive notices and counter-notices

  • If they have a marketplace feature, take special actions, such as vetting third-party suppliers’ credentials, adhering to compliance-by-design principles, and more

  • Not target advertisements to children or target advertisements based on users’ special characteristics

  • Provide transparency into content recommendation systems

  • Provide user-facing transparency into online advertising practices

Very large platforms must:

  • Adopt risk management practices and establish crisis response protocols

  • Submit to external, independent audits, establish an internal compliance function, and be publicly accountable

  • Provide users the choice to not be subject to content recommendations based on profiling

  • Share data with authorities and researchers

  • Adhere to self-drafted codes of conduct

  • Cooperate with authorities during crisis response situations

National Digital Services Coordinators and the European Commission may access, obtain information from, and inspect service providers to inform orders and sanctions. If a business is found to be in violation, it may be fined up to 6% of its annual global turnover during the preceding financial year. If an information obligation under the DSA is violated, the maximum penalty is limited to 1% of the previous year’s income or global turnover.
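
To make those ceilings concrete, here is a small TypeScript sketch of the arithmetic. The turnover figure is invented for illustration.

```typescript
// Fine ceilings per the figures above: up to 6% of the preceding financial
// year's global turnover for a substantive violation, and up to 1% for
// violating an information obligation.
function maxDsaFines(priorYearGlobalTurnoverEur: number) {
  return {
    substantiveViolationCap: priorYearGlobalTurnoverEur * 0.06,
    informationObligationCap: priorYearGlobalTurnoverEur * 0.01,
  };
}

// A platform with EUR 10 billion in prior-year turnover faces caps of
// EUR 600 million and EUR 100 million respectively.
console.log(maxDsaFines(10_000_000_000));
```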


The Digital Markets Act


The Digital Markets Act (DMA) covers the largest digital platforms, known as “gatekeepers,” which include companies like Facebook, Apple, Microsoft, and Google. The DMA aims to level the playing field for digital companies and prevent gatekeeper companies from imposing unfair conditions on their competitors. For example, a company like Amazon isn’t allowed to rank products on its site in a way that gives Amazon’s own products and services an advantage.

A company is considered a gatekeeper if it:

  • Has a strong economic position, significant impact on the EU market, and is active in multiple EU member states

  • Has a strong position as an intermediary linking a large user base to a large number of businesses

  • Has or will soon have an entrenched position in the market, which is determined by whether or not the company met the two previous criteria in the last three financial years
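
As a rough sketch of how these criteria combine (the assessments themselves would be made by regulators, not code; the `YearlyAssessment` type is hypothetical):

```typescript
// Per-year assessment of the first two criteria above.
interface YearlyAssessment {
  year: number;
  strongEconomicPosition: boolean;     // significant EU market impact, active in multiple member states
  strongIntermediaryPosition: boolean; // links a large user base to many businesses
}

// The third criterion (an entrenched position) holds when the first two
// criteria were met in each of the last three financial years.
function isGatekeeper(assessments: YearlyAssessment[]): boolean {
  const lastThree = assessments.slice(-3);
  if (lastThree.length < 3) return false; // not enough history to be entrenched
  return lastThree.every(
    (y) => y.strongEconomicPosition && y.strongIntermediaryPosition
  );
}

// Example: a company meeting both criteria for 2020-2022 qualifies.
console.log(isGatekeeper([
  { year: 2020, strongEconomicPosition: true, strongIntermediaryPosition: true },
  { year: 2021, strongEconomicPosition: true, strongIntermediaryPosition: true },
  { year: 2022, strongEconomicPosition: true, strongIntermediaryPosition: true },
])); // true
```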

Under the DMA, businesses that qualify as gatekeepers must, among other obligations:

  • Not engage in self-preferencing, where the gatekeeper promotes their own products and services over an equivalent third-party product or service on the gatekeeper’s platform

  • Not reuse users’ data outside of the context in which it was originally collected without consent

  • Not track users outside of the gatekeepers’ platform for the purpose of targeted advertising without consent

  • Permit communication and content access between businesses and end users

  • Ensure price and fee transparency in advertising intermediation services

  • Provide access to marketing or advertising performance data on the platform to users

  • Make it easy for users to change their default settings and uninstall software

  • Ensure third-party technology can interoperate with the gatekeeper’s own

  • Ensure end users’ data is portable to other systems

  • Provide businesses with real-time access to their data on the gatekeeper’s platform

  • Not prevent users from making complaints to authorities

  • Not require user registration to additional services as a condition of accessing a given service

  • Not use businesses’ non-public data to compete against them


Gatekeepers that violate the DMA may be subject to fines of up to 10% of annual global turnover or up to 20% in the case of repeated violations. What’s more, repeated violations may result in non-financial remedies, such as forced divestitures.

There are other proposals to watch out for in 2023:


The EU-U.S. Data Privacy Framework


Although it isn’t a law per se, the EU-U.S. Data Privacy Framework is an important factor to be aware of.

Previously, businesses transferring EU citizens’ data into the U.S. relied on a framework called the Privacy Shield to ensure the data was sufficiently protected, but that framework was invalidated by the Schrems II court decision. Since then, businesses have relied on standard contractual clauses (SCCs) approved by the European Commission to provide legal protection for data transfers.


However, these clauses are somewhat shaky; U.S. businesses aren’t supposed to rely on them if they are subject to Section 702 of the Foreign Intelligence Surveillance Act (FISA), which allows U.S. intelligence services to search foreign communications, including EU citizens’ data. The intricacies of Section 702 are beyond the scope of this blog, but the critical thing to know is that it isn’t always clear whether a business is subject to Section 702. Thus, the SCCs are risky to use, but there hasn’t been an alternative legal framework for international data transfers between the EU and U.S.

Until recently, that is.


On October 7, 2022, President Biden issued an Executive Order on Enhancing Safeguards for United States Signals Intelligence Activities. The order outlined the new EU-U.S. Data Privacy Framework, including additional security measures, a redress mechanism for EU and U.S. citizens who feel their rights have been violated, and greater protections for foreign citizens’ data that has been transferred to the U.S. Additionally, the framework requires intelligence agencies to update surveillance-related policies and procedures, followed by a review by the Privacy and Civil Liberties Oversight Board.


The European Commission has since published a first draft of its adequacy decision for the framework, with input from the European Data Protection Board. Criticism from European privacy advocacy groups is likely, but if the framework survives, it could become the mechanism businesses use to transfer data between the EU and U.S.


E-Privacy Regulation


The e-Privacy Regulation (ePR) has been a long time coming. It was meant to enter into force alongside the EU’s General Data Protection Regulation in 2018 but has stalled for years. In March 2022, the EU Council agreed on a draft, but the final regulation isn’t expected until 2023 at the earliest. Furthermore, if the ePR does enter into force during 2023, there will be a 24-month transition period, so businesses won’t have to be compliant until 2025 at the earliest.


The ePR, if passed, would create privacy rules for traditional electronic communications services as well as entities that weren’t covered by the former law, the ePrivacy Directive, such as WhatsApp, Facebook Messenger, and Skype.


It would create stronger rules on the privacy of electronic communications, applying both to communications content and to “metadata,” that is, data that describes other data. Under the ePR, service providers and electronic communications networks would have to obtain prior consent from users before processing their electronic communications metadata.


It would also, importantly, create more straightforward rules on cookies. It would allow users to accept or deny tracking cookies at the browser level, and it would clarify that websites do not need consent for so-called “non-privacy intrusive cookies,” which enable features like shopping carts to remember what a user has ordered. It would also require organizations to remind end users at least once per year that they can withdraw previously granted consent.
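
As an illustration of consent-gated cookies under rules like these, here is a minimal browser-side TypeScript sketch. The cookie categories and the consent store are assumptions for the example, not anything prescribed by the ePR draft.

```typescript
// Two illustrative categories: "strictly-necessary" cookies (e.g., a shopping
// cart) need no consent, while "tracking" cookies are set only after opt-in.
type CookieCategory = "strictly-necessary" | "tracking";

// Hypothetical consent store; necessary cookies are always allowed.
const consentGiven = new Set<CookieCategory>(["strictly-necessary"]);

function setCookie(name: string, value: string, category: CookieCategory): boolean {
  if (!consentGiven.has(category)) return false; // no consent recorded: skip
  document.cookie = `${name}=${encodeURIComponent(value)}; path=/; SameSite=Lax`;
  return true;
}

setCookie("cart_id", "abc123", "strictly-necessary"); // set without consent
setCookie("ad_profile", "seg42", "tracking");         // skipped: no consent yet
consentGiven.add("tracking");                         // user opts in, e.g. via a browser-level signal
setCookie("ad_profile", "seg42", "tracking");         // now set
```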


AI Act


The EU’s Artificial Intelligence Act would apply to any company doing business in the EU that develops or adopts “high-risk” AI systems, meaning systems that affect employment, credit, health care, and other critical domains.


The Act was introduced in 2021 and is currently up for consideration in the European Parliament. As of this writing, the Act is up for a vote sometime in the first quarter of 2023. However, given the complexity of AI, this vote may be delayed to incorporate further amendments.


The AI Act would apply extraterritorially, meaning the law would cover companies based elsewhere if they have customers or users inside the EU, effectively making it a global regulation.

Under the Act, businesses with applicable AI systems would have to:

  • Conduct impact assessments, keep records, and meet transparency obligations (see the sketch after this list)

  • Not develop systems that can be used to manipulate a person’s behavior in a manner that could cause mental or physical harm

  • Not develop systems that can be used to exploit the vulnerabilities of a specific group, such as those based on age or physical or mental disability, in a manner that could cause psychological or physical harm

  • Not develop systems that enable real-time remote biometric identification in publicly accessible spaces by law enforcement
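
To ground the record-keeping and transparency obligations in the first bullet, here is a hypothetical TypeScript sketch of an audit log for a high-risk system, such as one used in credit decisions. The fields are illustrative assumptions; the Act’s final logging requirements were still being negotiated as of this writing.

```typescript
// Illustrative record of a single high-risk AI decision, kept to support
// impact assessments, audits, and transparency obligations.
interface HighRiskDecisionRecord {
  timestamp: string;        // when the system produced the output
  modelVersion: string;     // which system version was used
  inputSummary: string;     // what the decision was based on
  output: string;           // the system's decision or score
  humanReviewer?: string;   // who reviewed the output, if human oversight applied
  disclosureShown: boolean; // whether the user was told an AI system was involved
}

const auditLog: HighRiskDecisionRecord[] = [];

function recordDecision(record: HighRiskDecisionRecord): void {
  auditLog.push(record); // in practice this would go to durable, access-controlled storage
}

recordDecision({
  timestamp: new Date().toISOString(),
  modelVersion: "credit-scorer-v2.1",
  inputSummary: "application #1042: income, repayment history",
  output: "declined",
  humanReviewer: "analyst-07",
  disclosureShown: true,
});
```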
