Proposal: Treat FLoC as a security concern
source link: https://news.ycombinator.com/item?id=26854073
All of this has been happening with tracking cookies, fingerprint tracking, pixel tracking and so on. And will continue to happen.
I find it bizarre that it took Google talking about phasing out 3rd party cookies and replacing them with a much lesser technology, FLoC, for people to suddenly be all up in arms about it.
FLoC is a new thing which is just being rolled out, so it's a lot easier for people to resist adding a new thing that makes the internet more crappy and less private.
I think it's unnecessarily fatalist to say that all of this will continue to happen so what's the point of resisting it. Public awareness and negative opinion of the pervasiveness and creepiness of internet tracking continues to grow, and advocacy against tracking mechanisms helps create the type of groundswell which could actually shift public policy to forbid such tracking.
Google specifically is catching some heat for potential antitrust problems, so raising a ruckus about Google abusing its dominant browser position to cram FLoC into the internet is more likely to have positive effect than ever before.
Not true, Firefox and Safari have had them off by default for over a year now. Additionally Chrome had planned to turn them off last year but then cried "covid", which for some reason = delay... because... think of the adverts! i mean covid!
Anyway, I'm pretty sure any large websites relying on 3rd party cookies for functionality will already have experienced issues with users blocking them. We have, and have already been forced to change. We don't do ads, but our use case is a bit esoteric in that it needed 3rd party cookies for sessions.
Not quite. They will block some 3rd party tracking cookies that fall on their tracking blacklist. If you want to block all 3rd party cookies you have to explicitly disable them.
We're working to understand what legitimate use cases are broken without 3p cookies so we can work with Google to backfill them. FLOC helps ad trackers track but doesn't help with any of the legitimate uses of 3p cookies like auth.
I guess the tradeoff being made here is just leaning into our reliance on the certificate authority system. Whereas before, with third party cookies, you might have had more flexibility with how you structure your domains.
Front-channel logout, part of the OIDC spec, is also broken. It opens iframes to sites, but the sites don't get their cookies and can't write them.
I have sympathy for anybody with a legitimate use for third-party cookies whose life has been made more difficult by bad actors abusing features to maximise their own profits.
Same tragedy that finally caused origin-based cache partitioning at the expense of some performance.
https://twitter.com/jason_kint/status/1383391849902075911 (video in thread)
I don't really see any non-tracking reason for FLoC.
What exactly about FLoC is making the internet less private though? Every time I see the technology it seems that it's deliberately built to keep private data in your Chrome browser and leak significantly less than anything else Ad related right now.
Also, why is WordPress allowing so many ad and analytics tracking plugins and not considering those as security issues?
Also what's this "predatory targeting of unsophisticated consumers" about? You don't need targeting for this. Heck you don't need anything for this. The way it's usually carried out is you hack some sites and redirect them to your landing page about "this one magic trick to riches, banks hate her".
Just because we can't create an ironclad solution doesn't mean that there isn't real value in reducing the usable surface area for profiling and tracking users across the web.
FLoC is an additional means of tracking users that is presently being pushed by a giant corporation who controls a significant web browser, of course anybody who is pro-user-privacy would rail against its rollout. It's ridiculous to call that an irrational reaction.
Just curious. It undoubtedly works, but I've always wondered why it's so pervasive.
Here we find people saying (through legislative and regulatory action) that they want to end the use of 3rd party cookies because of the bad behaviors they enable, and they are rightly outraged at the efforts to comply with the letter of the law while running roughshod over the intent of the laws.
At some point in the past year it seems the privacy concerns reached a critical mass. The Mozilla foundation is responding in ways to keep its browser share relevant.
And FLoC is... well, it's new.
It's not pretty but on the other hand... ads do serve a function. And not all ads are bad. The focus should be on getting rid of the scammy ones, and not tracking won't help much with that.
As other comments point out, it would be more difficult for people to "be all up in arms" about third party cookies because third party cookies are not an issue that is easily attributable to one entity or one set of documentation.
The idea that you can explain FLoC, third party cookies or any other digital advertising technology to the public is crazy talk.
Ad tech is extremely complex and constantly evolving - “the public” includes children, the intellectually disabled, the mentally ill, the elderly and the illiterate.
No amount of documentation is going to help these people reach a point where they could actually be considered as giving “informed consent” to their “use” of an ad-tech stack that often involves dozens of different legal entities and software components.
“why it is not a threat to privacy”
The entire purpose of FLoC is to maximise profit for Google by minimising my privacy.
Maybe it's only me, but I actually find the docs insulting for the reasons you stated.
1. People are starting to get fed up with tracking of all kinds (including third-party cookies). This is happening gradually, but is increasing. A consequence of this can also be seen from the legal side in the form of GDPR.
2. Google sees the tides on tracking are turning and tries to preempt by proposing a superficially less problematic alternative (FLoC) that will be least detrimental to their tracking business.
3. People dislike FLoC because it does not sufficiently lessen tracking in this new normal, and it is therefore also unacceptable.
"Kill it before it lays eggs." but do we worry about what evolves from this if it dies?
Nothing really evolves here - the status quo is what stays. You continue to be tracked head to arse on everyone's servers, the media keeps adding 150 trackers to every webpage and the internet moves on.
Thinking that one of the biggest profit-making industries in the US will just go away if you scream loud enough on HN is utterly naive and will require a better push. This approach is inherently negative and just STOPS a process - but it doesn't IMPROVE on the current state, and that will require more work.
I'm not quite sure what that work would be though - it seems that the current approach is "this gigantic multibillion industry must be banned and completely destroyed", which is great on a personal level, but I don't feel like it's realistic on a purely political level.
Either they realize third party cookies are on a (regulated) dead end. Or they realize there is a bigger moat. Or something else that helps them.
But in any case, seeing the current Google, this is not something benefiting their users (products?) primarily. Unless some benefits accidentally aligned.
So, pushing back towards the broken status quo may be the right thing, if you know, or believe, how Google is going to benefit from the new FLoC.
I cannot evaluate that. But Google's track record does not offer me confidence their new tech is going to help me overcome the issues I have with the status quo.
Is that really so hard to figure out? Google wants to continue running their Ad network and they've been under attack for collecting data.
FLoC seems to significantly reduce the need for data collection while still enabling their core business to continue.
I might be weird, but this still seems like a major win to me - since I care about my data not being stored somewhere off my devices, and FLoC keeps it on my terminal equipment. And it allows my Firefox to send customized or faked data.
Which industry? Online advertising? Or the whole sector with Google at the front? I think it's a mistake to assume that tracking and the massive trading in personal information that takes place now is somehow foundational to either industry. Advertising worked before that was a thing and it will continue to work after. The amount of money flowing into advertising won't be dramatically changed because advertising is necessary.
It might be that if online advertising was significantly dumber, money would be shifted from online to print/tv/whatever, but that doesn't mean it's somehow "gone".
Also, if dumber ads are the only ads you can buy, then dumber ads will cost more. Right now, clever ads (ads with fraud-prevention mechanisms, conversion tracking, fantastic targeting) cost a lot of money. A dumb ad shown to every visitor to a website without any targeting or follow-up wouldn't bring much money per visitor. But if that dumb ad was all you could do, and your other option was a bus stop ad, then you might have to pay a premium for that too. The loss of the ability to track people wouldn't change the laws of supply and demand for advertising space.
There are a half-dozen plugins one can add to Ungoogled Chromium to browse the web in (relative) safety. It's not a nation-state level undertaking: six or seven figures.
The problem really comes from apps, which are loaded to the gills with spyware.
Any government action will be a compromise by necessity. Which is why I think EFF doesn't really push for it - even GDPR doesn't ban targeted advertising in full.
The best outcome is to come up with a fundamentally better business model. Something that satisfies seller's desire to promote their products and customers desire to feel respected and important. Preferably cutting out a middleman and reducing costs of doing business at the same time.
A fundamentally better business model already exists: make users into customers. Google should charge users directly for the services they use. Then they wouldn't need to resort to all these underhanded tactics to try to monetize their valuable services. They could just monetize them directly.
Of course this is highly unlikely to happen now that everyone is conditioned to expect valuable services like Google's to be available for "free". But they're not free and never have been: the only question is how we pay the costs. Right now we pay those costs with our personal data and our attention, plus the time and effort we have to spend to try to push back against our personal data being monetized and our attention being incessantly competed for by advertisers. I would gladly pay in money to make those non-monetary costs go away. Perhaps I am an outlier and not many people would. But that just means we pay the costs in other ways that end up being even more costly than the direct money costs would be.
That’s why we need regulation. Under these market conditions, Google’s business model does appear to be the best for them.
In the long run it's better for everybody. But it is true that "the long run" can be pretty long.
> Do you think Google has never considered that business model?
I think Google probably considered it early on but found it easier to go the way they actually went. But "easier" is not the same as "best in the long run".
> their current business model will let them extract the most money from their products
Google doesn't have products, they have services. And of course, since their services are free to users and users are now addicted to that, they can obviously extract more money with their current business model since they have made a concerted effort to make the "users as customers" business model impossible.
However, their current business model was being built during the same time period when "Don't be evil" was still the company's motto and still apparently taken seriously by company leaders. Which means those leaders were either very disingenuous or delusional. Because addicting people to a free service and then exploiting them and their personal data in order to make the money they can't make from the users directly, as customers, is evil. And trying to keep their current business model propped up in the face of users becoming increasingly aware of the ways in which they are being exploited, is only going to force Google to be more and more evil. Sooner or later, if it doesn't change, it will kill Google as a company.
> That’s why we need regulation.
Regulation won't fix this problem. Corporations can always either buy their way around regulations (oh, another million dollar fine because we broke regulation XYZ about exploiting user data? just rounding error in our accounting) or buy enough influence to get the regulations written so they don't actually impose a burden on them (but do impose a huge burden on potential competitors, the new startups that would otherwise be finding ways to disrupt Google's current business model, since users are clearly becoming dissatisfied with it).
The only thing that will fix this problem in the long run is for users to realize that there is no such thing as a service that is (a) free and (b) valuable. We are going to pay the costs somehow. The simplest way to pay them--with money--is also, in the long run, the best.
It sounds like your reasoning is “users will eventually wake up!” which I would bet a lot of money will never happen.
Because long term, users will realize that letting their data be sold is bad for them and will stop considering it acceptable. Indeed, that is already happening. And so, as I said, Google will have to continually become more and more evil to try to prop up their business model by further obfuscating what they are doing, until it becomes unsustainable and they crash.
> It sounds like your reasoning is “users will eventually wake up!”
More like: when enough users have suffered serious harm from having their data sold (which is only a matter of time--plenty of users already have suffered harm due to Google's incessant seeking after data--see for example all the furor over the "real names" policy, which was not just Google but they took plenty of flak for it), it will stop being considered acceptable. (Users who already correctly foresee such harms, like me, are already taking whatever precautions we can to avoid providing the data in the first place. I don't use Facebook, I don't use Twitter, I don't use any other social media "platforms", the only Google services I use are search and maps, and I never click on ads. And I would be glad to pay Google directly for search and maps, if only they would let me do so in order to avoid having what data I do provide them sold to third parties. In fact, given that "freemium" is now a recognized business model, I don't see why they aren't trying it.)
> which I would bet a lot of money will never happen.
Then I assume you are long Google?
It's happening at the fringes, in communities that are already more privacy-conscious. I haven't seen any signs of a groundswell of support for this position.
> Then I assume you are long Google?
No, I was speaking figuratively; I don't have any desire to profit off surveillance capitalism. I simply see regulation as a far more likely solution than users demanding an ethical business model with their wallets.
In the short term, I agree we're far more likely to get regulation than a significant change in user demand. I do not, however, think any regulation we get will be a "solution".
What does this have to do with Google? Yes, my ISP charges me a flat monthly fee for Internet access, and I prefer that pricing structure to being billed by the kilobyte (not that any ISP I'm aware of tries that any longer). But I'm still paying directly for the service. My ISP doesn't give me my Internet connection for free and then try to monetize it by showing me ads.
> A pay per use or monthly quota would outright discourage curiosity and add in another mental fatigue of tracking costs
Google could use the same pricing structure my ISP does: a flat monthly fee, with no limit on usage, billed to my credit card. No more mental fatigue than "do I have a working Internet connection?", which is exactly how much mental fatigue it takes for me to use Google now.
Would this be a challenge to achieve at scale? Sure. But a company that really took the motto "don't be evil" seriously would be taking on exactly this kind of challenge, precisely because it's a problem that someone is going to have to solve sooner or later, and is worth a lot to whoever solves it because it makes things better for everybody. Who better to do it than Google? But instead of hiring smart engineers to solve this problem, they're hiring smart engineers to figure out better ways to capture users' eyeballs. It's insane.
> Keeping information access gated behind wealth
In a sane society, resources like Google search would be made more widely available by the standard method taught in economics classes: price discrimination. The price they charge for their services would vary according to what the particular customer can easily afford. People in first world countries, like me, might pay $10 or $20 a month. People in the poorest countries, where Internet access itself is not guaranteed, might pay nothing, as they do now. Google has about four billion users and about $180 billion in annual revenue; that works out to an average of $45 a year per user. That seems feasible, if they are allowed to price discriminate. But of course price discrimination is considered "evil", even though it's not--it delivers more value to more people when it is allowed to happen.
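The per-user arithmetic above, spelled out (both figures are the comment's rough estimates, not audited numbers):

```python
# Rough estimates from the comment above, not audited figures.
annual_revenue_usd = 180e9  # ~ Google annual revenue
users = 4e9                 # ~ Google user count

revenue_per_user_per_year = annual_revenue_usd / users
print(revenue_per_user_per_year)  # -> 45.0 USD per user per year
```

Which is the $45/year average the comment cites; price discrimination would spread that unevenly rather than charging everyone the same.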
> Your data being monetized by others is not an intrinsic harm.
While there are of course ways in which my data can be monetized that don't harm me, I think the actual evidence clearly shows that the ways in which our data is being monetized do carry a high risk of causing harm. "Traffic surveys" is not a good proxy for what most data harvesters are actually doing.
[1] https://www.statista.com/statistics/183523/online-advertisem...
The very idea that a user needs to be tracked from site to site and a profile built around his/her web activity is dystopian and depressing.
With respect to eg brand advertising: even if you get past an inability to measure impact, once you break most of the ad infra, ad buyers simply aren't going to negotiate / buy with small sites. It's not worth their time or money. Small here is probably less than millions of uniques per day.
With respect to direct response advertising, you've mostly lost the ability to track a conversion. So it becomes pointless.
The advertising before extensive tracking was a different time: way way less money, way fewer ads, way less ad blindness amongst viewers, way way way fewer publishers, etc.
Will some advertising persist? Absolutely. eg the branded / source trackable referral codes that podcast advertising uses. But there will be an enormous falloff in dollars pointed at publishers.
And to be clear, I'm not a fan of 3rd party tracking. But we should be deliberate before we end the ad-supported internet.
And banning extensive user tracking doesn’t mean “ending the ad supported internet”, that’s sensationalist to the max!
To suggest that ending tracking means advertisers would have to individually negotiate with individual websites isn't true either - ad networks have been and will always be a thing, regardless of the ability to track.
Second, if this truly causes a drop in advertising spend... then that money will be spent somewhere else, which might boost a different industry.
I don't think this would really change advertising spend, though... it would just change the type of advertising and how it is tracked/paid for.
Advertisers still want to get their ads in front of people, and the ratio of content to advertising demand wouldn't change.
In fact, I think a change to content based advertising will help with content quality. With user based advertising, an advertiser doesn't care if the valuable person is viewing good content or not. Content creators just need to attract the valuable eyeballs, and can use as much click bait and useless content as possible to get them.
With content based advertising, the advertiser will spend on quality content, because that is the only metric they have to try to reach quality users.
I could see bigger sites expending a lot of energy trying to bring the tracking and inference in-house, and even federating these efforts, creating a kind of soft-paywall that requires you to "pay" by validating an email address or some other stable identity marker in exchange for temporary access to content, so they can watch what you browse and build a shared model of you that they can feed back into the ad networks. I could see the NYT continuing to manipulate and fine-tune its headlines and graphics, trying to sort its visitors into cohorts based on what appeals to them to squeeze every last cent out of a pageview.
At the same time, so much content discovery and consumption happens in the belly of the beast (Facebook, Google, Youtube) that most ads will continue to be targeted based on the considerable information those websites have about you, regardless of what browsers do or what happens to third-party tracker networks.
Having been around for the last 30 years (yes, even before the 90s!) I can say that the quality of news and the NUMBER of news sources are far greater. The amount of data online has only increased and has increasingly been sourced. Imagine the world before, when the only source of news was to interview someone in person or read the copy that particular source was using in the narrative.
We just need to continue to make it increasingly impractical and expensive to track users until it stops being considered a viable business strategy.
The FLoC proposal (and others) are happening now because of the coming cookiepocalypse.
(Disclosure: I work on ads at Google, speaking only for myself)
If only Firefox was removing cookies, that would be a problem, because Chrome could just ignore them. But with Safari on board as well, and with the entire iOS market at stake for sites that try to ignore the policy...
If Chrome doesn't remove third-party cookies, they will be the only browser anywhere not to do so. Chrome's original stance might have been conditional on finding a replacement, but I'm not sure they still have a choice at this point. I don't think Google is going to hand that selling point to Apple, and you're seeing yourself in these comments that a lot of the people following this issue didn't accept Chrome's original promise as conditional.
And maybe Chrome is confident enough in their market position that they're willing to take that hit and they think it won't matter. Maybe they're even right. From my perspective, breaking Chrome's dominance on the web is a necessary thing that needs to happen eventually for the health of the web, so every time that Chrome makes their browser worse in a highly public way, that's a win.
Remember that Firefox and Safari are already blocking the majority of third-party cookies online, and those browsers still work today, the web hasn't broken for them. So every year that Chrome spends delaying that deprecation is another year where people like me can point out that they're lagging behind literally the entire market on privacy.
nit: Safari was ahead of Firefox here, with ITP 1.0 blocking most third-party cookies by default in 2017.
And the other minority browsers are also on board now. Edge and Brave and such are also preferring privacy-friendly default configurations.
It's simple: We force Google to stop tracking us, or we stop using Google products.
[1] https://github.com/WICG/privacy-preserving-ads/blob/main/Par...
[2] https://webkit.org/blog/8943/privacy-preserving-ad-click-att...
Apple's solution doesn't look like it provides user interests or demographics, does it?
There's a lot of cooperation here, and similar goals; I'm not sure why you think Microsoft and Google can't find an API they both like?
If you're using a non-user-hostile browser, these strategies are already heavily limited by default and are already not a concern. Every Firefox release is making significant improvements on reducing the fingerprinting footprint of the browser, and several user-hostile API features proposed by Google have been rejected by them and Safari to prevent expanded fingerprinting.
Chrome's original announcement about phasing out third-party cookies is explicit about new technologies like Privacy Sandbox (which includes FLoc) being how third-party cookies will no longer be needed:
"After initial dialogue with the web community, we are confident that with continued iteration and feedback, privacy-preserving and open-standard mechanisms like the Privacy Sandbox can sustain a healthy, ad-supported web in a way that will render third-party cookies obsolete. Once these approaches have addressed the needs of users, publishers, and advertisers, and we have developed the tools to mitigate workarounds, we plan to phase out support for third-party cookies in Chrome. Our intention is to do this within two years." -- https://blog.chromium.org/2020/01/building-more-private-web-...
(Disclosure: I work on ads at Google, speaking only for myself)
If there are people who are (a) ok with personalized ads, providing they can be done sufficiently privately and (b) do not like FLoC, then I'd love to read what they have to say!
(Still speaking only for myself.)
I doubt many people object to ad networks and real time bidding; it's just that the user's personal information shouldn't be exposed in the process. Yes, that means the only signals you'd get are the current page, and maybe high-level OS/browser/device info.
(a) ok with personalized ads, providing they can be done sufficiently privately
My opinion, which I think is fairly common around here, is that what you're describing is fundamentally impossible. Much like the incessant government demands for encryption backdoors that don't compromise security.
> what you're describing is fundamentally impossible
I guess the question is what you would consider to be sufficiently private? For example, would it be sufficient for the advertiser to be completely unable to distinguish you from a sufficiently large group of people with similar behavior?
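The guarantee being asked about here is essentially k-anonymity: every user should be hidden inside a cohort of at least k people. A toy check, with made-up data (the names, cohort IDs, and the helper function are all illustrative, not part of any FLoC API):

```python
from collections import Counter

def is_k_anonymous(cohort_of: dict, k: int) -> bool:
    """True if every cohort in the assignment contains at least k users."""
    cohort_sizes = Counter(cohort_of.values())
    return all(size >= k for size in cohort_sizes.values())

# Made-up assignment: user -> cohort ID
assignment = {"alice": 7, "bob": 7, "carol": 7, "dave": 12, "erin": 12}

print(is_k_anonymous(assignment, 2))  # -> True (smallest cohort has 2 members)
print(is_k_anonymous(assignment, 3))  # -> False (cohort 12 has only 2 members)
```

Note this only bounds what one snapshot reveals; as other comments in the thread point out, combining several snapshots (or cohort IDs with other signals) can still narrow a user down.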
Why shouldn't personalized ads exist?
> advertisers would make just as much money without it if it were illegal
It depends very much on the advertiser. Advertisers with broad interest or close matches to specific publication types, sure, but that's not everyone. One way to think of this would be to imagine a world in which advertisers couldn't even choose where their ad appeared -- they would make less money then, without the ability to target contextually, right? There are many valuable transactions that only happen because the right information is given to the right person, and the less well-targeted ads are the more of those you lose.
The story is even worse for publishers. There are major kinds of publishing with negligible commercial tie-in. Historically, the expense of producing a newspaper or magazine meant that you were never holding a single article, and it could be treated essentially as one unit for advertising purposes. But now it is very common for articles to be shared in isolation, which means this cross-subsidy disappears.
That sounds... optimistic since you needed Google to form that "we".
This seems disanalogous. FLoC requires browser cooperation. The user can simply use a browser other than Chrome.
This feels like something that should get more attention/discussion. It flew for Samesite because "better security defaults" is a good argument. Not sure it works that way for FLOC.
Despite being involved in the Samesite rollout I hadn't quite made the same connection as that commenter, as I am not as connected to the FLOC work.
If you load Google Analytics on your site, I feel like you have no right to complain that it may track users...
I've always been so interested in learning about the next best thing that I hadn't given Wordpress much thought.
Now, using it all the time, its popularity is very understandable as an interface for people who are not technically savvy to maintain their own website.
I feel like the Wordpress community isn't the loudest, but it is certainly a force. I think, as a brand, this move definitely has me more excited about working with their software.
I wonder if AdWords will require use of FLoC headers.
I don't know, but I guess they won't. Instead, you'll just get worse targeting on your site if your users don't send the headers. Which I think may also not be very popular with WordPress users, but I guess the proof will be in the pudding.
Glad to see WP taking a stand - I never knew that FLOC would be so bad. The WP proposal made it clear that it’s a discriminatory technology.
So if Google finds that too many people use the header, they can just decide to ignore it from then on. Who is going to prevent them from doing that?
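For reference, the opt-out under discussion is just an HTTP response header, `Permissions-Policy: interest-cohort=()`, as documented in the FLoC proposal. A minimal sketch of a site sending it, using Python's standard `http.server` (the handler name and port are mine, purely illustrative):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class FlocOptOutHandler(BaseHTTPRequestHandler):
    """Serves a page while opting it out of FLoC cohort computation."""

    def do_GET(self):
        body = b"<html><body>This site opts out of FLoC.</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        # The FLoC opt-out: disallow the interest-cohort feature for all origins.
        self.send_header("Permissions-Policy", "interest-cohort=()")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To serve: HTTPServer(("", 8000), FlocOptOutHandler).serve_forever()
```

Which is also exactly why the "who prevents them from ignoring it" question matters: the header is advisory, enforced only by the browser choosing to honor it.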
Someone please chime in if I'm wrong here. I'm no lawyer but do take these things seriously (I'm trying my best to provide a tracking-free website.)
It will take a few years but they're going to get hit very very hard by EU privacy regulators.
Mweh if it doesn’t break anything. But terrible if it breaks something.
That seems unlikely.
It appears that Google is trying to rewrite the rules of how browsers and the Web work, with the appearance of being on the side of privacy, but actually introducing an alternative method of surveillance that is going to be less favourable to almost everyone except Google. How many of the huge-audience sites are potentially going to lose out from that, not least because they rely on advertising themselves for the lion's share of their revenues?
This whole discussion started with a proposal from a platform that is supporting nearly half of the sites people are visiting. That puts WP in a unique and potentially very powerful position here as well, and evidently they're interested in trying to force the issue.
And finally, the SOPA experience has shown that it is not entirely implausible for large numbers of sites to collaborate in this way if they feel the threat is serious enough. So if FLoC is as bad as the critics are suggesting, it doesn't seem entirely out of the question. There seem to be quite a few powerful organisations that would have a variety of motivations for wanting to give Google a bloody nose over this one.
Why do you think Google hasn't prevented adblockers from running on it? If they did so, it would sink the browser so quickly.
You seriously underestimate the power of inertia.
This blows my mind every time. Even though I know it.
It looks like it’s based on the top ten million websites by traffic, but weighted equally. Maybe there are lots of low-traffic WordPress sites?
And many, many more high traffic websites. There are even some Facebook landing pages running WordPress, along with many other high profile sites[1].
By domains or by visits?
Currently, for A/B testing, FLoC is automatically opting in 0.5% of sites that serve ads, but only for a small testing population; the idea is that FLoC history contribution will eventually be opt-in exclusively. (There's a proposal that you have to contribute to FLoC history calculations to get access to a user's FLoC identifier.)
You send me a bunch of data, including headers, and I'm more or less free to do with that what I want within the privacy of my own browser. I don't have to listen to any of your headers if I don't want to.
"Through this proposal, we believe we can substantially improve end-user privacy while retaining the ability for sites to sustain their businesses through ad funding. We propose a new set of APIs that leverage a browser-trusted service to develop a sufficient understanding of user interests and therefore enable effective ad targeting without allowing ad networks to passively track users across the web. This approach removes the need for cross-site identity tracking, enables new privacy enabling methods to support monetization, and maintains ad auction functionality within existing ad tech ecosystems."
If they are not benefiting and Google is benefiting they may pass on that.
Users: We hate cookies, because they are abused to hurt our privacy by allowing advertisers to build a profile about us
Google: We have a great idea! We can get rid of 3rd party cookies and instead make your browser build a profile about you and share it with everyone.
So while it’s not the holy grail it does appear to be a small step in the right direction from the status quo.
Do I understand the situation correctly? Genuinely curious.
For the moment 3p cookies are better, they can easily be erased, blocked or even isolated, and are restricted to a single browser.
The problem is that it is an unstable strategy. FLoC is strictly worse for privacy-conscious users if trackers don't change their strategy, and it is also strictly worse for trackers to stop using their current tracking techniques.
It is a bit like the prisoner dilemma.
> If I go to thing W, X, Y, and Z (where those are distinct elements with distinct fans), people within those cohorts will be indistinguishable but I will likely be the only person who has been to all 4. Therefore, you can easily identify individuals. FLoC is a crock of shit. At least you could block 3rd party cookies
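The intersection attack described in the quote can be sketched in a few lines. This is a hypothetical illustration with made-up names and cohorts, not anything from the FLoC spec: each cohort alone looks anonymous, but the intersection of several can single out one person.

```python
# Hypothetical cross-site intersection attack: each site only sees that a
# visitor belongs to a coarse cohort shared by many people, but combining
# the cohorts observed across several sites narrows the set dramatically.
fans_w = {"alice", "bob", "carol", "dave"}    # visitors in cohort W
fans_x = {"alice", "bob", "erin", "frank"}    # visitors in cohort X
fans_y = {"alice", "carol", "erin", "grace"}  # visitors in cohort Y
fans_z = {"alice", "dave", "frank", "grace"}  # visitors in cohort Z

# Each set alone is ambiguous; the intersection of all four is not.
suspects = fans_w & fans_x & fans_y & fans_z
print(suspects)  # {'alice'}
```

With realistic cohort sizes (thousands of users each), a handful of independent cohort observations is still enough to shrink the candidate set to one.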
See: https://github.com/WICG/floc/issues/103#issuecomment-8218146...
Then again, FLoC is only enabled "in a limited set of circumstances" anyways.
For details on said circumstances, see: https://github.com/WICG/floc#qualifying-users-for-whom-a-coh...
>Tracking people via their cohort
>A cohort could be used as a user identifier. It may not have enough bits of information to individually identify someone, but in combination with other information (such as an IP address), it might.
Whose purpose is:
>A FLoC cohort is a short name that is shared by a large number (thousands) of people, derived by the browser from its user’s browsing history.
I wonder if it's possible to define a cohort size X large enough that people are OK with the idea. (A cookie is effectively X = 1, and no tracking at all is X = 3,010,000,000, i.e. everyone on the internet.)
Could the cohort minimum size be configurable?
Given that the IP address can already be known today: what's the existing accidental "FLoC proxy"? Think "how unique are you online?", i.e. your browser fingerprint (something I'd not thought of before: my timezone alone can significantly narrow down who I am). You can test yourself on https://amiunique.org/fp
This is a good idea, but unfortunately it would just lead to MORE ways to track users, since "size of cohort" would itself become a source of information (probably a very high-entropy one, given how many users never configure anything).
Meanwhile, the whole framing by the EFF et al. does not allow for even a small doubt that the proposal is anything but the worst thing ever, with no redeeming qualities. That framing rules out working within this feature, e.g. modifying browsers to send the required headers.
The cohort you're in is currently determined by 1) third-party cookies and 2) fingerprinting techniques. Removing third-party cookies and introducing FLoC will probably reduce the entropy revealed by the user. Recall that the FLoC proposal aims to put each user in a group with several thousand other users; if there are a few thousand cohorts, that's about 12 bits of entropy. A third-party cookie probably provides more, though I don't know the number off the top of my head. You only need log2(3 billion internet users) ≈ 32 bits to identify every internet user hyper-precisely.
So, moving to FLoC probably reduces the tracking entropy provided by the user. But it still leaves fingerprinting as a viable way to identify users. Even if both third-party cookies and FLoC were eliminated, there would still be fingerprinting.
So, I think the Google approach is "provide a minimum tracking entropy via FLoC, and try to bound maximum entropy by limiting fingerprinting." Privacy advocates want a world where browsers try aggressively to limit tracking entropy, perhaps ideally eliminating it altogether.
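The back-of-envelope entropy arithmetic above is easy to check. The population and cohort-count figures below are illustrative assumptions taken from the comment, not values from the FLoC spec:

```python
import math

# Assumed figures: ~3 billion internet users, and "a few thousand" cohorts
# (4096 chosen here so the cohort ID carries a round number of bits).
internet_users = 3_000_000_000
cohort_count = 4_096

bits_to_identify_everyone = math.log2(internet_users)  # ~31.5 bits
bits_leaked_by_cohort_id = math.log2(cohort_count)     # 12.0 bits

print(f"{bits_to_identify_everyone:.1f}")  # 31.5
print(f"{bits_leaked_by_cohort_id:.1f}")   # 12.0
```

So a cohort ID alone gets a tracker roughly a third of the way to a unique identifier; combined with an IP address or a modest fingerprint, the remaining ~20 bits are not hard to find.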
See the "privacy budget" mentioned here for a similar idea: https://blog.chromium.org/2019/08/potential-uses-for-privacy...
Disclaimer: I work at Google.
This is not quite an opt-in. But a blanket opt-out isn't necessary either.
Then again: "final design is still subject to change based on [Origin Trial] feedback".
We've reverted the title in keeping with the site rule: "Please use the original title, unless it is misleading or linkbait; don't editorialize." (https://news.ycombinator.com/newsguidelines.html).
However, this does have more gravitas than a random blog post elsewhere, since only those who have made significant contributions to the project have the ability to publish there.
Take this post as if it’s an emailed proposal to a project’s mailing list.
"The WordPress core development team builds WordPress! Follow this site for general updates, status reports, and the occasional code debate."
Seems like it is to me.
That said, only significant contributors get access to post to Make.
Or facebook saying "we have this idea that would improve the experience on our platforms, and we think it's a great idea despite hurting our ability to grow, show ads and our short term bottom line. It actively discourages 'engagement'".
If I had any stock in either company I'd still be delighted about these. I think it's the best long term growth strategy they can have. Focusing not on growth but on users and goodwill.
For websites, FLoC cohort computation only triggers if you call the document.interestCohort API or load ads - these actions are considered an opt-in. (https://github.com/WICG/floc/issues/103)
For users, it's sort of opt-in, too: You must be logged into a Google account, must have enabled Chrome history data sync, must not block third-party cookies, must have enabled Google web activity tracking and must have enabled ad personalization. (https://github.com/WICG/floc#qualifying-users-for-whom-a-coh...)
Also, you can disable FLoC via chrome://settings/privacy or chrome://flags. (https://github.com/WICG/floc/issues/103#issuecomment-8218146...)
It's not a perfect opt-in, but it's also not malware.
User agents, for example, or even cookies, are not malware by any reasonable definition of the term. They present risks to the user and must be managed, but those risks are bounded.
I tend to roll my eyes at the blind hatred of corporations, but we also have to keep both feet firmly on the ground: these products and services are strictly tied to long-term plans for ROI. What kind of ROI would the biggest advertising network pursue? Tracking, profiling, and serving profiled ads.
I absolutely do not agree to google using anything from gmail to generate a stable user id for advertising, or e.g. show me ads in google search results or youtube videos, based on analysis of email content.
If Google can't provide what I expect (a free mail service paid only by ads on gmail dot com) they should tell me that they want $X per year and I'd happily pay it. It's not that I don't want to fund the operations of services, it's that I'm always assumed to rather pay with my information than with dollars.
> WordPress powers approximately 41% of the web – and this community can help combat racism, sexism, anti-LGBTQ+ discrimination and discrimination against those with mental illness with four lines of code:
function disable_floc($headers) {
    $headers['Permissions-Policy'] = 'interest-cohort=()';
    return $headers;
}
add_filter('wp_headers', 'disable_floc');
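For sites not running WordPress, the same opt-out header can be attached at the application layer. Here's a minimal sketch as Python WSGI middleware; the function names are illustrative, only the `Permissions-Policy: interest-cohort=()` header itself comes from the proposal:

```python
# Minimal WSGI middleware that appends the FLoC opt-out header to every
# response, analogous to the quoted WordPress filter.
def disable_floc(app):
    def wrapped(environ, start_response):
        def start_with_optout(status, headers, exc_info=None):
            # Declare that this site opts out of FLoC cohort calculation.
            headers.append(("Permissions-Policy", "interest-cohort=()"))
            return start_response(status, headers, exc_info)
        return app(environ, start_with_optout)
    return wrapped
```

Usage is just wrapping the existing app, e.g. `application = disable_floc(application)`; every response then carries the opt-out header.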
If you seriously think this is going to make a difference in racism, of all things... I mean... do people seriously think that? Do you know what racism is anymore?
https://news.ycombinator.com/newsguidelines.html
Cherry-picking a detail you find most triggering in an article and importing it here to express how provoked you feel is a way of setting the thread on fire—no doubt unintentionally [1], besides which the greater part of the problem is caused by the upvotes such things attract—but still, we don't want threads-on-fire. We're trying for something different than that.
Readers should leave tangential provocations where they find them, and commenters should comment on what gratifies their intellectual curiosity, as the guidelines ask.
Edit: also, please don't use HN primarily for political or ideological battle. It's not what this site is for, and it destroys what it is for, so we ban accounts that cross that line [2], and your account's recent history seems to have crossed it. Fortunately that seems to be a recent development so it should be easy to fix.
[1] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...
[2] https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme...
But it really makes me distrustful of the whole proposal when people make wild claims like that and don't feel like they need to make even the briefest attempt to back it up. It seems a lot more like they're just taking the currently trending social cause and co-opting it to support their own unrelated agenda.
> Observers may learn that in general, members of a specific cohort are substantially likely to be a specific type of person. For example, a particular cohort may over-represent users who are young, female, and Black; another cohort, middle-aged Republican voters; a third, LGBTQ+ youth. This means every site you visit will have a good idea about what kind of person you are on first contact, without having to do the work of tracking you across the web.
I could be wrong of course, if so, please explain how.
This is the digital equivalent of trying to be "race blind." You can't just remove the race column from your DB and assume it's fine to torture your data for patterns, secure in the belief that your results won't correlate with race.
A lot of this is reminiscent of the hyperbole over AMP.
Not quite? Maybe this will add more bits that are useful for fingerprinting, but this seems like an absurd way for Google to go about making it easier to fingerprint browsers, considering that most browsing happens in Chrome, where Google can see what pages everyone visits anyway. And Google is currently proposing anti-fingerprinting measures [0] that track how many bits of information a website has gathered and block API access after a certain threshold.
A straightforward analysis of Google's motivations makes sense here: they want to keep their ad business profitable while improving their reputation on privacy. FLoC allows targeted ads, keeping their business profitable, and doesn't rely on 3rd parties observing your browsing history, improving privacy.
From https://web.dev/floc/ :
> With FLoC, the browser does not share its browsing history with the FLoC service or anyone else. The browser, on the user's device, works out which cohort it belongs to. The user's browsing history never leaves the device.
> There will be thousands of browsers in each cohort.
A further privacy improvement is that they're designing it to avoid leaking whether you're a member of a "sensitive category":
> The clustering algorithm used to construct the FLoC cohort model is designed to evaluate whether a cohort may be correlated with sensitive categories, without learning why a category is sensitive. Cohorts that might reveal sensitive categories such as race, sexuality, or medical history will be blocked. In other words, when working out its cohort, a browser will only be choosing between cohorts that won't reveal sensitive categories.
[0]: https://techcrunch.com/2019/08/22/google-proposes-new-privac...