The terms of service for a major platform are not written to be read. They are written to be clicked through. This is not a side effect of legal complexity. It is the design goal.
Researchers at Carnegie Mellon University calculated that reading every privacy policy you encounter in a year would take roughly 76 work days. That estimate dates from 2008 and is almost certainly conservative now. The companies writing those documents know this. Their legal teams know this. The product managers who decide where to place the “I Agree” button know this. The entire architecture of digital consent is built on the assumption that the people being asked to consent will never engage with the actual terms. That is not a bug. It is the product.
The Interface Is the Argument
Dark patterns in consent flows work because friction is directional. Companies carefully control which actions feel easy and which feel hard. Accepting all cookies takes one click. Customizing your preferences requires navigating sub-menus, toggling individual categories, and sometimes redoing the process on every visit because the settings don’t persist. The Norwegian Consumer Council documented this systematically in its 2018 report “Deceived by Design,” which examined Facebook, Google, and Microsoft and showed how each company used interface design to funnel users toward the maximum-data-sharing option.
This is not a neutral design choice. It reflects a deliberate allocation of engineering resources. Someone decided how many clicks each path requires. Someone approved the gray color on the “Reject All” button and the blue on “Accept All.” Consent UI is A/B tested like any other product feature, which means companies have measured exactly how much agreement rates drop when they make the opt-out path easier. They chose not to make it easier.
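To make “measured exactly” concrete, here is a minimal sketch of the kind of analysis a consent A/B test implies. All numbers, variant names, and the function itself are hypothetical; the two-proportion z-test shown is simply a standard way to check whether an acceptance-rate drop between two banner designs is real rather than noise.

```python
import math

def accept_rate_drop(accept_a, total_a, accept_b, total_b):
    """Two-proportion z-test: compare acceptance rates between
    variant A (prominent 'Accept All') and variant B (symmetric buttons)."""
    p_a = accept_a / total_a
    p_b = accept_b / total_b
    # Pooled proportion under the null hypothesis of no difference
    pooled = (accept_a + accept_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    return p_a, p_b, z

# Invented traffic: variant A styles "Accept All" as the primary button;
# variant B gives both buttons identical styling and one click each.
rate_a, rate_b, z = accept_rate_drop(9100, 10000, 6400, 10000)
print(f"accept rate A: {rate_a:.0%}, B: {rate_b:.0%}, z = {z:.1f}")
```

In this invented data a symmetric design depresses acceptance from 91% to 64%, and at 10,000 users per arm the z-score is far beyond any significance threshold. That is the point: a company running this test knows the revenue cost of a fair interface to the decimal, which is what makes the asymmetry a decision rather than an accident.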
The Legal Fiction of Informed Consent
Contract law has traditionally required that both parties understand what they are agreeing to. Browsewrap agreements (which claim to bind users merely for using a site) and clickwrap agreements (which require an affirmative click) have steadily eroded this standard in digital contexts, with courts generally upholding terms-of-service agreements even when users demonstrably could not have read them. The legal system has largely accepted the fiction that clicking “I Agree” constitutes meaningful assent.
This matters because the terms themselves have become increasingly aggressive. Mandatory arbitration clauses buried in these documents block users from joining class action lawsuits. Data licensing terms grant companies rights to user content that most people would reject if the terms were explained plainly. Some agreements contain provisions that allow companies to change the terms unilaterally, with continued use of the service constituting acceptance of the new terms. The user is bound by changes they were never asked to review.
GDPR in Europe pushed back on the most egregious practices by requiring that consent be specific, informed, and as easy to withdraw as to give. The result was predictable: companies built technically compliant consent banners that buried the opt-out options in layered menus while making “Accept All” the path of least resistance. Compliance in form, defiance in function.
Scale Makes This a Structural Problem, Not an Individual One
The standard response from companies is that users have a choice. Read the terms, or don’t use the service. This argument collapses under its own weight when the services in question function as utilities. Opting out of Google’s terms means opting out of Gmail, Google Maps, Google Docs, and Android. Opting out of Meta’s terms means losing contact with the social network where your family coordinates. The “just don’t use it” response was always a deflection. At scale, with network effects locking users in, it becomes insulting.
The problem compounds because data collected under these agreements gets used in ways that were never disclosed at the time of collection. The app that supplied Cambridge Analytica harvested data through Facebook’s platform APIs, relying on friend-data permissions that the terms as written technically allowed. The roughly 87 million users whose data was affected did not consent to political profiling, because nothing in the user experience communicated that possibility. The terms permitted it. Nobody knew.
The Counterargument
The strongest version of the counterargument is that long, complex terms are unavoidable. Platforms operate across dozens of jurisdictions, each with its own legal requirements. Lawyers write for legal precision, not readability. Simplified terms would introduce ambiguity that courts would interpret unpredictably, creating more risk for users and companies alike.
This is a real constraint, but it doesn’t justify the interface manipulation. The length of a document and the design of the consent flow are separate problems. A company could present genuinely simplified summaries alongside the full legal text, with clear links between the two. Many companies do this poorly or not at all, not because it’s technically impossible, but because it would increase the number of users who understand what they’re agreeing to, and some of them would refuse. The complexity of the legal text is a legitimate problem. Using that complexity as cover for manipulative UI is a choice.
What Would Honest Consent Look Like
The current system treats consent as a liability management exercise. Meaningful consent would require that the interface be as neutral as the agreement is supposed to be. The opt-out path should require no more friction than the opt-in path. Material changes to terms should require explicit re-consent, not passive acceptance through continued use. Data practices that users would reject if explained plainly should be explained plainly.
None of this is technically difficult. It is commercially inconvenient. Companies have structured entire business models around data rights that users would not voluntarily grant if the transaction were transparent. The “I Agree” button is not a contract. It is a liability shield that has been engineered to look like one, and the gap between those two things is where most of the internet’s business model lives.