Privacy Lessons and Takeaways for AdTech from the Privacy Enhancing Technologies Symposium 2025
A Guest Blog by Chloe Cowan | August 13, 2025
The 2025 Privacy Enhancing Technologies Symposium (PETS 2025), held at Georgetown University in Washington, DC, July 14–19, brought together leading academics and privacy technologists to discuss key international research initiatives related to data privacy and security. Several insights drawn from this research show how Privacy Enhancing Technologies (PETs) can support the advertising ecosystem, which currently faces substantial regulatory scrutiny reflecting widespread consumer confusion and distrust, much of it stemming from the opacity of the data processing that drives digital advertising.
The following are summaries of five new research papers, along with key takeaways from each that can help inform and promote a more privacy-oriented future for digital advertising. Taken together, these PETS 2025 papers provide a technological roadmap of “dos and don’ts” for ad-tech companies and others across the digital advertising ecosystem looking to use PETs to strengthen privacy.
1. Tracking tools are often misconfigured
Presenting their paper “Tracker Installations Are Not Created Equal: Understanding Tracker Configuration of Form Data Collection,” researchers from New York University and Northeastern University examined how popular advertising tools like Meta Pixel and Google Tag Manager collect user data through form fields on websites. They found that the tools do not collect personal data from forms by default, but that their setup flows and interface designs can encourage website owners to turn data collection features on without realizing the full privacy implications.
Following are key findings that can inform areas of improvement:
- Meta Pixel includes Form Data Collection (FDC) as one of its four basic setup steps. Once turned on, it automatically selects all personal fields, such as name, email, or other data from contact forms, by default–resulting in the least private configuration without requiring explicit web admin intent.
- 62.3% of websites using Meta had FDC enabled, compared to only 11.6% for Google, showing how interface design can drive high-risk configuration without an admin’s explicit permission.
- Meta and Google both rely on hashing as a form of privacy enhancement. However, the researchers note that hashing alone has a limited effect as a privacy solution, especially when the hashed values are unique and potentially linkable, such as email addresses (the sketch below illustrates why).
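To see why hashing on its own is a weak protection, consider a minimal sketch in Python. Because a cryptographic hash of a normalized email address is deterministic, two parties that hash the same address independently obtain the same value and can join their records on it, so the hash behaves as a stable pseudonymous identifier rather than anonymization. The records below are invented for illustration.

```python
import hashlib

def hash_email(email: str) -> str:
    """Deterministically hash a normalized email address (SHA-256)."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# Two hypothetical parties hash the same email independently...
advertiser_record = {"email_hash": hash_email("Jane.Doe@example.com"), "purchase": "shoes"}
platform_record = {"email_hash": hash_email("jane.doe@example.com "), "profile_id": 4821}

# ...and can still join their records on the identical hash value,
# so the "hashed" email works as a cross-party identifier.
assert advertiser_record["email_hash"] == platform_record["email_hash"]
print("Records link on:", advertiser_record["email_hash"][:16], "...")
```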
Meta’s setup interface was found to be a primary vector for “nudging” web administrators toward less private choices, effectively outsourcing legal responsibility for data protection to site owners. Alongside these interface-based nudges, the researchers found that the documentation provided to website owners may result in FDC being deployed in ways that minimize privacy and may even lead to non-compliance with various legal requirements. In particular, the technology does not give the platforms controls to identify and prohibit the collection of sensitive health or financial data; instead, websites are expected to self-identify as sensitive based on a range of variables. This self-designation process is unclear to many website operators and is not sufficiently explained in the FDC documentation, which can lead web administrators to unknowingly circumvent the intended safeguards.
Overall, this research presents an important finding about how implementation determines user privacy: advertising technologies are often not inherently private or threatening; rather, the way they are configured and deployed across digital properties determines their privacy impact. Tools like Meta Pixel and Google Tag Manager may not collect personal data by default, but their installation flows, interface designs, and permissive defaults are examples of how ad tech can nudge website administrators into enabling intrusive features, sometimes without the administrator’s full awareness.
Businesses that engage in digital advertising, particularly those integrating ad tech, must pay close attention to these impacts and favor implementations that maximize privacy, for example by keeping data collection off by default and requiring explicit, per-field opt-in, as sketched below. They should also take steps to avoid creating “dark patterns,” since some ad-tech solutions do not ship with privacy-conscious defaults in their setup flows, and configurations that enhance data privacy may introduce additional friction. These are key findings because long-term user trust and platform credibility are often more valuable than short-term data collection, to say nothing of the legal risk created by configurations that collect more data than necessary or intended.
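To make the privacy-by-default idea concrete, here is a minimal Python sketch of a hypothetical tag-configuration wrapper in which form data collection starts off and every field must be allow-listed explicitly. The class and method names are illustrative assumptions, not any vendor’s actual API.

```python
from dataclasses import dataclass, field

@dataclass
class FormCollectionConfig:
    """Hypothetical tag configuration with privacy-conscious defaults:
    form data collection is off, and fields must be allow-listed explicitly."""
    collect_form_data: bool = False                          # off unless the admin opts in
    allowed_fields: set[str] = field(default_factory=set)    # empty allow-list by default

    def enable_field(self, name: str) -> None:
        """Require a deliberate, per-field decision instead of 'select all'."""
        self.collect_form_data = True
        self.allowed_fields.add(name)

# A default install collects nothing; the admin must name each field they accept.
config = FormCollectionConfig()
config.enable_field("postal_code")   # explicit, auditable choice
print(config)
```

This inverts the “select all personal fields by default” flow described above: friction is added exactly where a privacy-relevant decision is being made.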
2. PII “opt-out” services aren’t living up to their goals
In their study “Measuring the Accuracy and Effectiveness of PII Removal Services,” researchers evaluated what happens when users attempt to remove their personal data through four major personally identifiable information (PII) removal companies, measured against 659 companies identified as “data brokers,” and the results were underwhelming. The paper presents the first large-scale study of commercial PII removal systems: services that claim to improve privacy by automating the removal of PII from data broker databases, and that are often referred to as consumer privacy “authorized agents.”
Key findings from this research include:
- Only 41.1% of records matched the user’s actual data, meaning most “removals” were based on false positives or irrelevant records.
- Even after a month, the average success rate for confirmed removals was just 48.2%.
- The best performer, Incogni, removed 76.6% of user-verified records. Mozilla Monitor achieved the highest accuracy (57%) but covered fewer brokers overall.
- Removal services depend on user-submitted PII and usually work through data brokers’ APIs, which are often poorly documented, leading to inconsistent, opaque results.
- Most users are ultimately not informed by the agents about what was removed or retained–transparency and auditability for data removal outcomes are low.
These findings challenge the notion that commercial consumer privacy services are meaningful under current conditions: when consumers pay for privacy, they typically do not get what they expect. The study demonstrates that even the most widely used PII removal services frequently fail to fully delete accurate records, often operate without transparency, and provide limited coverage across data brokers. As the authors note, these shortcomings highlight both technical and procedural flaws in the services themselves and systemic gaps in regulation and accountability. The authors argue for stronger data governance frameworks–including standardized opt-out APIs, improved broker registries, and enforceable removal protocols–to improve individual control. Without structural reform, they warn, users may continue to pay for privacy services that fail to meaningfully protect them, undermining trust in the data ecosystem.
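As a rough illustration of the kind of standardized opt-out API the authors call for, the sketch below shows what an authorized agent’s deletion request could look like in Python. The endpoint, payload fields, and token scheme are assumptions for the sketch, not any broker’s real interface.

```python
import json
import urllib.request

def submit_opt_out(broker_api_url: str, user_pii: dict, api_token: str) -> dict:
    """Sketch of a standardized opt-out call: one well-defined payload,
    a machine-readable receipt, and a status URL the agent can re-check.
    All names here are hypothetical."""
    payload = json.dumps({
        "request_type": "deletion",
        "subject": user_pii,                 # e.g. {"name": ..., "email": ...}
        "verification": "authorized_agent",  # who is making the request
    }).encode()
    req = urllib.request.Request(
        broker_api_url,
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_token}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        receipt = json.load(resp)            # expect {"request_id": ..., "status_url": ...}
    return receipt
```

A machine-readable receipt and status URL of this kind would directly address the transparency and auditability gap the study identifies, since agents could show users exactly what was requested, confirmed, and retained.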
3. “Acceptable ads” still deliver inappropriate and deceptive content
In theory, ad block users who enable “acceptable ads filters” should receive high-quality, respectful ads that avoid invasive tracking or manipulation. However, New York University researchers in their study “Sheep’s clothing, wolfish impact: Automated detection and evaluation of problematic ‘allowed’ advertisements” showed that many of these approved ads still include deceptive claims, manipulative design patterns and age-inappropriate content, particularly for underage audiences.
Some of the most important takeaways are:
- A manual and LLM-assisted analysis of ads shown to U.S. and German users found that 13.6% of “acceptable” ads contained problematic content.
- Categories of harm included: misleading health or finance claims, political propaganda, dark patterns (like fake exit buttons), and excessive stickiness or autoplay videos.
- VLMs (Vision-Language Models) trained on an annotated dataset of flagged ad categories matched human judgements with high accuracy, offering promise for scalable detection (the sketch after this list illustrates the general classification pattern).
- Users of privacy-focused tools like Adblock Plus (ABP) were sometimes served more problematic ads due to loopholes in allow-lists (lists of ads or domains allowed to bypass ad blocking) and ad exchanges’ behavior that exploits privacy-conscious consumer profiles.
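As a rough illustration of the LLM-assisted detection pattern, not the authors’ actual pipeline, the sketch below prompts a pluggable model with a fixed harm taxonomy and snaps its reply onto a known label. The category names and the `model_call` hook are hypothetical.

```python
# Hypothetical harm taxonomy mirroring the categories described above.
HARM_CATEGORIES = [
    "misleading_health_or_finance_claim",
    "political_propaganda",
    "dark_pattern",
    "age_inappropriate",
    "none",
]

def build_prompt(ad_text: str) -> str:
    """Ask a language or vision-language model to pick one label from a
    fixed taxonomy so its output can be compared against human annotations."""
    return (
        "Classify the following advertisement into exactly one category: "
        + ", ".join(HARM_CATEGORIES) + ".\n\nAd content:\n" + ad_text
    )

def classify_ad(ad_text: str, model_call) -> str:
    """model_call is a placeholder for whichever LLM/VLM client is used;
    the parsing simply maps the reply onto the known taxonomy."""
    reply = model_call(build_prompt(ad_text)).strip().lower()
    return next((c for c in HARM_CATEGORIES if c in reply), "none")

# Example with a stubbed model, just to show the control flow.
print(classify_ad("Lose 20 pounds in 3 days, doctors hate this!",
                  lambda prompt: "misleading_health_or_finance_claim"))
```

Constraining the model to a fixed label set is what makes results auditable against human annotations at scale.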
Based on this research, companies should rethink deploying “acceptable ads” filters that fail to prevent harmful or misleading ads. To address these risks, the authors recommend that “acceptable” ads be held to measurable standards based on content criteria, not just aesthetic guidelines. These standards should address:
- Manipulative UX patterns such as countdown timers or fake buttons;
- Deceptive or unverified claims, especially for sensitive industries such as health or finance;
- Age-inappropriate content for underage users; and
- Lack of advertiser transparency.
Crucially, the authors argue that privacy-aware users may face differential treatment, receiving more harmful ads because privacy tools that block data collection limit user profiling and thereby diminish those users’ ad-targeting value. This suggests that current privacy tools may unintentionally increase consumer vulnerability, raising important concerns about the effectiveness of adtech’s privacy infrastructure. The authors conclude by arguing for scalable LLM-automated detection tools, independent auditing of ad exchanges, and improvements to filtering mechanisms within privacy tools.
4. Behavioral fingerprinting can still re-identify individuals
A common privacy myth for users is that deleting cookies or using private browsing ensures anonymity. However, the study “Rethinking Fingerprinting: An Assessment of Behavior-based Methods at Scale and Implications for Web Tracking,” presented by researchers from Georgetown University and Carnegie Mellon University, demonstrated that behavioral fingerprints, which are patterns in how users browse, click, or scroll, can sometimes link activity back to individuals within just a few sessions. The paper highlights how deployment of PETs to prevent traditional web tracking technologies may still allow for user-identification, and it identifies a range of techniques privacy technology can use to greatly reduce identifiability of users.
The study’s major findings show:
- Behavioral fingerprints can be very distinctive: among 150,000 users, the model could reliably differentiate a given user from around 141,930 others–reducing effective anonymity by 94.6%.
- Even with only a single prior browsing session, the model could link a new session to the correct user with 84–95% accuracy (a simplified illustration of this linking step appears after the list).
- When behavioral fingerprints were combined with browser fingerprinting signals (such as device specs, operating system, language), the model created highly persistent and unique user profiles.
- After a discontinuity (such as clearing cookies), observing more than 10-15 days of post-reset browsing allowed the model to correctly link sessions to users with 71.5% accuracy, and achieve near-total recognition (99.9%) in 87.9% of sessions.
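To make the linking idea concrete, here is a heavily simplified Python sketch, not the paper’s model: each session is reduced to a small behavioral feature vector, and a fresh, cookie-less session is assigned to the nearest known profile. The features and numbers are invented for illustration.

```python
import math

# Hypothetical behavioral features per session: (mean scroll speed,
# mean dwell time per page in seconds, clicks per minute).
prior_sessions = {
    "user_a": (1200.0, 45.0, 3.2),
    "user_b": (300.0, 110.0, 0.8),
    "user_c": (800.0, 20.0, 6.5),
}

def link_session(new_session: tuple, profiles: dict) -> str:
    """Assign a fresh (cookie-less) session to the closest known profile
    by Euclidean distance over the behavioral feature vector."""
    return min(profiles, key=lambda user: math.dist(profiles[user], new_session))

# A session after clearing cookies still lands on the same user
# because the underlying behavior barely changed.
print(link_session((1180.0, 47.0, 3.0), prior_sessions))  # -> "user_a"
```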
This research illuminates the prevalence of reidentification through behavioral tracking, which is harder to detect and control than cookies. Even well-meaning platforms could overstep user expectations if they rely too heavily on this method. To ensure targeted advertising remains ethical and adaptive to new privacy concerns, privacy-preserving technologies should implement concrete limits on behavioral tracking, such as shorter tracking windows, cohort-based targeting that avoids individual profiles (sketched below), and user-facing disclosures about how consumer behavior is being used in targeting.
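As one example of what cohort-based targeting can look like, the sketch below maps coarse interest topics to a shared cohort ID so that many users produce the same targeting signal. This is a simplified illustration under assumed parameters, not any specific industry proposal.

```python
import hashlib

def assign_cohort(interest_topics: list[str], num_cohorts: int = 1024) -> int:
    """Map a user's coarse interest topics to one of a fixed number of
    cohorts, so ads can be targeted at the group rather than the person."""
    canonical = ",".join(sorted(set(interest_topics)))   # order-insensitive key
    digest = hashlib.sha256(canonical.encode()).hexdigest()
    return int(digest, 16) % num_cohorts

# Many users with similar interests share a cohort ID, so the ID alone
# cannot single any one of them out.
print(assign_cohort(["cycling", "travel"]))
print(assign_cohort(["travel", "cycling"]))  # same cohort: order doesn't matter
```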
4. Cross-app and SDK-based tracking needs more guardrails
In the study “Your Signal, Their Data: An Empirical Privacy Analysis of Wireless-scanning SDKs in Android,” researchers investigated Android apps using beacon-based software development kits (SDKs)–small code packages that collect location data by monitoring nearby WiFi and Bluetooth signals. These SDKs were sometimes found to collude across apps, share identifiers, and bypass platform restrictions.
Key findings here reveal:
- Out of nearly 10,000 apps, researchers identified 52 beacon SDKs, collectively installed more than 55 billion times.
- 86% of these apps collected at least one sensitive data type, including location, service set identifiers (SSIDs), and advertising IDs.
- Some SDKs exploited known vulnerabilities–for example, the Vizbee SDK, present in apps with over 164 million installs, leveraged the side-channel vulnerability CVE-2020-0454 (catalogued in NIST’s National Vulnerability Database), which allowed apps to read WiFi SSIDs on Android 9 and below without user permission.
- 14% of the analyzed SDKs in the study linked resettable ad IDs with persistent identifiers–a practice known as ID bridging–defeating OS privacy protections and enabling long-term cross-app tracking (a simple audit check for this pattern is sketched after the list).
- Only 5 of the 52 SDKs embedded in analyzed apps implemented permission rationale dialogs, and in most cases the host apps failed to present these explanations to users.
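As an illustration of the kind of audit the authors envision, the Python sketch below flags SDK network payloads that pair a resettable advertising ID with a persistent identifier, the “ID bridging” pattern noted above. The field names are hypothetical.

```python
# Hypothetical audit check for "ID bridging": flag outgoing SDK payloads
# that pair the resettable advertising ID with any persistent identifier,
# which would let a tracker survive an ad-ID reset.
RESETTABLE_KEYS = {"advertising_id", "ad_id"}
PERSISTENT_KEYS = {"android_id", "imei", "mac_address", "wifi_ssid"}

def flags_id_bridging(payload: dict) -> bool:
    """Return True if one payload carries both a resettable ad ID and a
    persistent identifier (field names here are illustrative)."""
    keys = {k.lower() for k in payload}
    return bool(keys & RESETTABLE_KEYS) and bool(keys & PERSISTENT_KEYS)

sample = {"advertising_id": "38400000-...", "android_id": "9774d56d...", "app": "example"}
print(flags_id_bridging(sample))  # -> True: this payload bridges identifiers
```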
This backend tracking architecture operates invisibly, and therefore has the capacity to undermine both user trust and mobile platform policy. To build trust with mobile audiences, the authors propose measures such as independent audits of SDK behavior conducted before app release and periodically at runtime, public SDK registries, and stronger platform oversight to ensure responsible data practices.
Looking Ahead: Privacy is the Path to Sustainability
With rising regulatory standards and growing public awareness, the ad-tech ecosystem has an opportunity to lead by embracing higher levels of accountability, transparency, and user control—building stronger trust and a more sustainable future. PETS 2025 shared insights that encourage the industry to evolve targeted advertising practices, address design gaps, close technical loopholes, and improve longstanding habits that negatively impact user trust. Advertisers and platforms have the opportunity to shape a stronger digital economy by:
- Focusing on privacy by design in SDKs, interfaces, and default settings,
- Championing quality and transparency in ad content, and
- Supporting regulatory and technical standards that ensure fair practices.
By embracing these principles, the industry can build lasting trust and ensure targeted advertising thrives in a privacy-conscious future.
About the Author

Chloe Cowan is a current intern for the NAI and second-year graduate student at Heinz College, Carnegie Mellon University, pursuing an M.S. in Information Security and Policy Management. She is a recent graduate of Northeastern University, where she earned a B.S. in Computer Science and Sociology. She is interested primarily in leveraging data analytics to examine surveillance efforts and address collective data privacy concerns.