We give safety ratings to cars and restaurants. Why not do the same to protect our digital lives?

The American economy runs on what are known as heuristics: a diverse array of mental shortcuts that help consumers make a dizzying number of choices as they navigate the wild complexity of everyday life. These shortcuts help us select the restaurants we patronize, the cars we drive, the food we purchase, and the schools we attend and to which we send our children. We rely on scoring systems, certifications, and ranking methodologies to decide what movies to see, what music to listen to, and whether to purchase fair-trade products. These shortcuts come in many forms, from the complex (like the tools used to rate bonds and other financial products) to the straightforward (like the letter grades that many municipalities issue to tell consumers whether a particular restaurant follows safe food-handling practices).

Sometimes these systems are managed and operated by the government, like the National Highway Traffic Safety Administration’s system for grading automobiles and trucks on their performance in crash tests; often they are run by private entities, like Consumer Reports. Sometimes the ratings are purely peer-to-peer and aggregated, like the ubiquitous “five-star” rating systems of ride-hailing companies and delivery services. In the end, consumers rely on these systems every day to make decisions great and small, and to make sense of a complex world in which we are all too prone to information overload.

One area that cries out for such a methodology, one that would give consumers critical details about the products and services they use, is largely devoid of these shortcuts: our online life.

We search, scroll, bank, shop, talk, text, stream, post, like, stan, and even hook up in the digital world. We enter sites, download apps, communicate over platforms, access our financial information, and provide intimate details about our health and welfare without the slightest clue about what the entities with which we share that information do with it. The truth is, most of them use it for their own profit, and many sell it to data brokers: third-party entities that, in turn, pass it along to other companies that may use and abuse it, selling us products, pushing content we may not want, and perhaps even nudging us into behavior we would avoid if we truly understood the uses and abuses of our digital data. AI only amplifies that influence.

But what if there were a way to use the power of heuristics to protect our digital privacy through simple shortcuts that give consumers basic information about how different sites, apps, and platforms exploit the digital activities they harvest from us?

At present, some American states and the European Union have created rules of the road for the information “superhighway,” as it was so quaintly called in the 1990s. But the digital world we enter today is no sunny superhighway where consumers travel at will, free of harm or surveillance. A better metaphor is the “Upside Down,” the shadowy parallel world of the hit TV series Stranger Things, where entities with access to our digital lives create replicants of us that follow us around, always just below the surface, waiting to do us harm.

We are already living in a world where we are asked to “accept” a particular company’s “cookies” policy or its terms of service. These relatively “light touch” disclosure regimes are the product of laws and regulations passed around the world. The European Union’s General Data Protection Regulation (GDPR) has largely set the global standard, because tech companies do not want to have to ascertain whether a particular consumer is subject to its rules. And it is the GDPR, and the European Union, that we have to thank for those ubiquitous pop-ups asking us to accept a company’s cookies policy.

But those rules actually mask what is going on under the hood. Companies can comply with the disclosure requirements by giving consumers the option of accepting their practices or not, while burying those disclosures in user agreements that are unintelligible to the average user. As a result, current practices in the digital world demand a far more robust regulatory response than the relatively weak disclosure regimes that exist today can offer.

Consumers are also routinely presented with complex terms of service, which few will read to the end and even fewer will completely understand. Indeed, rare is the consumer who actually reviews these policies before entering a site or downloading an app. Those who did would likely find few privacy-protective policies, if any. More likely than not, a review would reveal that the company engages in cross-site tracking, sells consumers’ information, and forces consumers into arbitration even for violations of those very terms of service, among other things.

The legal protections that do exist on the internet largely protect companies, not consumers. Laws like Section 230 of the Communications Decency Act insulate many companies that operate online from being sued over the content on their sites. Courts, too, following federal law, largely enforce terms of service that require disputes about a company’s actions to be resolved not in the courts but in arbitration. All of this is the result of a powerful tech lobby that not only fights any meaningful regulation of the industry’s activities but also complains that any government intervention will stifle innovation and the economic benefits and convenience these companies generate.

Enter the Zone

But there is another way, one that does not require the heavy hand of government, that can still foster innovation and put power in the hands of consumers to drive business behavior, and not the other way around. A more robust regulatory regime for the digital world could draw on the power of grading systems to send consumers a clear message about the risks a particular app or site may pose to their digital privacy. It would provide this information in an easy-to-understand format that does not require a deep dive into the bowels of a company’s end-user agreement, or a certificate in legalese. Instead, whenever a consumer accesses a site, app, or platform, that service would communicate whether or not it protects the consumer’s privacy.

There are many ways a company can protect, or violate, a consumer’s privacy, and many ways it can make itself unaccountable to that consumer should it breach that privacy. A simple, easy-to-understand system would grade companies on whether they protect their customers’ privacy or routinely violate it. That information would be communicated through a single letter grade that the company would have to display prominently whenever a consumer accessed the service. The consumer would then know, immediately, whether this is an entity that looks out for consumer privacy or one that tends to exploit it. But where would such grades come from?

Some grading systems are opaque, with the ultimate grade issued by a government agency, like the restaurant letter grades in New York City. One can assume that an “A” means the restaurant meets basic quality standards, and it is hard to find a restaurant worth its salt that does not have one. In fact, anything less is usually enough to ward off many customers.

A regime for the digital world could adopt a kind of digital “zoning,” modeled after land-use restrictions in the physical world. In land-use zoning, certain uses are permitted and others are excluded in particular areas, or zones. You generally don’t find a power plant or a waste-treatment facility abutting single-family homes. That’s because of zoning.

If an area is “zoned” for particular uses, individuals and businesses that wish to engage in those uses are free to do so within it. Developers, government regulators, commercial establishments, and residents can easily find out what is permitted and what is not from a predetermined description of each zone. Anyone who fails to comply with those restrictions can face litigation, fines, an order to stop what they are doing, and perhaps even a requirement to dismantle any illegal development that has occurred.

Zoning in the digital world could work much the same way. Privacy-protective practices would be clustered in the best zone; let’s call it “Zone A.” In that zone, companies would not track a consumer’s activities on their sites, would keep personally identifying information only when necessary, and certainly would not sell such information to third parties. They would agree to stiff punishments for violations of their consumers’ privacy and allow those disputes to be resolved in a court of law, instead of forcing individuals into business-friendly arbitration of the company’s choosing, as many do today. A company that agreed to provide this suite of privacy-protective practices by operating within Zone A could market that fact to its customers by displaying an “A” prominently on its home page, on its app’s page in an app store, or whenever a consumer enters the site from a smartphone.

If a company failed to provide these protections, it would not receive that grade. Instead, it could choose from a number of different zones offering different suites of protections along a spectrum, from best to worst. A company that provides some privacy-protective measures would be justified in displaying a higher grade, even if not an A. The system would cluster an array of practices (covering search, the sale of data, the monitoring of user behavior, and so on) and grade companies on the extent to which they follow the more privacy-protective practices or are more likely to take advantage of their customers. The companies least protective of their customers’ data would earn an “F.” A rough sketch of how such a rubric might work appears below.
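To make the mechanics concrete, here is a minimal sketch, in Python, of how a zoning rubric might map a company’s practices to a letter grade. The practice names, weights, and grade cutoffs are hypothetical illustrations of the idea, not a proposed standard.

```python
# A minimal, hypothetical sketch of digital "zoning" as a grading rubric.
# The practice names, weights, and cutoffs below are illustrative
# assumptions, not a proposed standard.

from dataclasses import dataclass

# Privacy-protective practices a company might commit to, with
# illustrative weights reflecting how much each one matters.
PRACTICES = {
    "no_cross_site_tracking": 25,
    "no_sale_of_user_data": 25,
    "minimal_data_retention": 20,
    "no_forced_arbitration": 20,
    "plain_language_policies": 10,
}

# Grade cutoffs, checked from best to worst: a company in "Zone A"
# meets essentially the whole suite of protections.
CUTOFFS = [(90, "A"), (75, "B"), (60, "C"), (40, "D"), (0, "F")]


@dataclass
class Company:
    name: str
    practices: set[str]  # the practices this company actually follows


def grade(company: Company) -> str:
    """Score a company against the rubric and return its zone letter."""
    score = sum(
        weight
        for practice, weight in PRACTICES.items()
        if practice in company.practices
    )
    for cutoff, letter in CUTOFFS:
        if score >= cutoff:
            return letter
    return "F"


if __name__ == "__main__":
    careful = Company("CarefulCo", set(PRACTICES))  # full suite -> Zone A
    harvester = Company("HarvestCo", {"plain_language_policies"})
    print(grade(careful))    # A
    print(grade(harvester))  # F
```

In a real system the weights and cutoffs would emerge from the public process described later in this piece, and the harder question, who audits whether a company actually follows the practices it claims, is one of governance rather than code.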

All companies would have to display their grades prominently whenever a consumer engages with their sites, services, apps, or platforms. Consumers would have an immediate read on whether a company is looking out for them or abusing their data for its own benefit.

Disclosure-based regimes are sometimes themselves abused, for example by companies that make their policies difficult to understand or bury the important disclosures in legalese. But a disclosure regime that is clear and easy to understand would put power back in the hands of the consumer. Such a regime could create a race to the top, with companies vying to be more protective of their consumers’ data because they would have to be completely transparent about their data privacy practices.

Instead of stifling innovation and competition, digital zoning could actually encourage both, prompting companies to deliver their products and services in ways that are more protective of their customers’ interests, not less. Moreover, companies would have a clear choice within this regime: no particular grade would be mandated. Companies would be free to do as they please with their customers’ data, provided they are open and honest about their practices.

Who would set the exact contours of this system and cluster the different practices that determine the grades companies receive? All of us. Legislators, technology companies, online safety and security experts, and consumers could engage in a dialogue around these issues and chart a course for our digital life that encourages innovation that protects our privacy, rather than treating privacy as, at best, something to get around or, at worst, something to exploit.

This type of robust and meaningful disclosure can occur without heavy-handed government intervention. Government will certainly have a hand in helping to write the rules of the road and set the contours of the zones, with extensive input from a wide range of stakeholders, but it will not need to engage in extensive regulation of private companies. Of course, there will be a need to police company practices to make sure they comply with the requirements of the letter grade they claim, but that can be accomplished through stiff penalties, fines, and damages actions when companies misrepresent the protections they afford their customers. Such policing can come from state attorneys general and from consumers themselves. It will also require strong whistleblower protections, so that employees are free to come forward when the companies they work for are not following the law, as well as serious consequences for companies that engage in this sort of fraudulent behavior.

Digital zoning would establish a clear and easy-to-understand approach to online privacy, empowering consumers while promoting corporate transparency and accountability. It could create a market-driven system that makes clear to consumers which companies protect their privacy and which might violate it. And it would enlist the government to police the boundaries of the zones rather than impose command-and-control policies from on high. Such a market-driven approach would place consumers in the driver’s seat and give them a clear sense of the rules of the road, and of who is following them around.

As technology becomes ever more present in our lives, we need a clearer way to know whether the companies we do business with are harvesting our data or selling it to those who will use it for purposes we don’t know about, and would never accept if we did. The time is right for us to better understand how technology serves us, rather than letting it serve us up to anyone eager to exploit our data.


Adapted from The Private Is Political: Identity and Democracy in the Age of Surveillance Capitalism by Ray Brescia. Published by NYU Press. Copyright © 2025 by Ray Brescia. All rights reserved.
