Policy talk:Privacy policy

From Wikimedia Foundation Governance Wiki
Revision as of 06:49, 26 September 2014 by Nemo bis (talk | contribs) (Undo revision 10008588 by 112.72.13.49 (talk))


Note on Labs Terms / Response to NNW

Hi, NNW: If you are asking here about the change from Toolserver to Labs about when “profiling tools” are allowed, we made the change because the edit information has always been transparently available, so the Toolserver policy was not effective in preventing “profiling” - tools like X edit counter could be (and were) built on other servers. As has been suggested above, since the policy was ineffective, we removed it.
However, this change was never intended to allow anarchy. The current Labs terms of use allows WMF to take down tools, including in response to a community process like the one that occurred for X edit counter. Would it resolve some of your concerns if the Labs terms made that more obvious? For example, we could change the last sentence of this section from:
If you violate this policy ... any projects you run on Labs, can be suspended or terminated. If necessary, the Wikimedia Foundation can also do this in its sole discretion.
to:
If you violate this policy ... any projects you run on Labs, can be suspended or terminated. The Wikimedia Foundation can also suspend or terminate a tool or account at its discretion, such as in response to a community discussion on meta.wikimedia.org.
I think this approach is better than a blanket ban. First, where there is a legitimate and widely-felt community concern that a particular tool is unacceptable, it allows that tool to be dealt with appropriately. Second, it encourages development to happen on Labs, which ultimately gives the community more leverage and control than when tools are built on third-party servers. (For example, tools built on Labs have default filtering of IP addresses to protect users - something that doesn’t automatically happen for tools built elsewhere. So we should encourage use of Labs.) Third, it encourages tool developers to be bold - which is important when encouraging experimentation and innovation. Finally, it allows us to discuss the advantages and disadvantages of specific, actual tools, and allows people to test the features before discussing them, which makes for a more constructive and efficient discussion.
Curious to hear what you (and others) think of this idea. Thanks.-LVilla (WMF) (talk) 00:02, 24 December 2013 (UTC)[reply]
Is there a need in distinguishing WMF's role in administering Labs tools? I would only stress the requirement of Labs Tools to obey this policy, here, and link to a Labs policy on smooth escalation (ask tool author; discuss in community; ask Labs admins; ask WMF). Gryllida (talk) 05:14, 24 December 2013 (UTC)[reply]
WMF is called out separately in the policy because WMF employees ultimately have control (root access, physical control) of the Labs servers, and so ultimately have more power than others. (I think Coren has been recruiting volunteer roots, which changes things a bit, but ultimately WMF still owns the machines, pays for the network services, etc.) I agree that the right order for conversation is probably tool author -> community -> admins, and that the right place for that is not in the terms of use but in an informal policy/guideline on wikitech. -LVilla (WMF) (talk) 17:15, 24 December 2013 (UTC)[reply]
Yah, I just wanted to propose that the policy references both concepts (WMF's ultimate control, and the gradual escalation process) so the users don't assume that appealing to WMF is the only way. Gryllida (talk) 08:38, 25 December 2013 (UTC)[reply]
As I mentioned elsewhere on this page, the talk about "community consensus" raises questions such as "which community?" and "what happens when different communities disagree?" Anomie (talk) 14:30, 24 December 2013 (UTC)[reply]
Right, which is why I didn't propose anything specific about that for the ToU- meta is just an example. Ultimately it'll have to be a case-by-case judgment. -LVilla (WMF) (talk) 17:15, 24 December 2013 (UTC)[reply]
I would perhaps remove the "on Meta" bit then since it adds no useful meaning. «... such as in response to a community discussion.» looks complete to me. There doesn't even have to be a discussion in my view: a single user privately contacting WMF could be enough, provided their report of abuse is accurate. «... such as in response to community feedback.» could be more meaningful. Gryllida (talk) 08:38, 25 December 2013 (UTC)[reply]
This is meant as an example ("such as"), so I think leaving the reference to meta in is OK. Also, this is in addition to the normal reasons for suspension. For the normal reasons for suspension, a report by a single person would be fine, but I think in most cases this sort of discretion will be exercised only after community discussion and consultation, so I think the reference to discussion is a better example than saying "feedback".-LVilla (WMF) (talk) 22:28, 31 December 2013 (UTC)[reply]
I am referring to this argument from above: we made the change because the edit information has always been transparently available, so the Toolserver policy was not effective. The position that any analysis that can be performed by a third party should also be allowable on WMF servers with WMF resources is not convincing. It is clearly possible for a third party to perform comprehensive and intrusive user profiling by collating edit data without the user's prior consent. We could (and should!) still prohibit it on our servers and by our terms-of-use policy. (A different example: it's clearly possible for a third party running a screen scraper to construct a conveniently browsable database of all edits that have ever been oversighted; this doesn't mean WMF should allow it and finance it.) Now, why should this kind of user profiling be prohibited by WMF? Because WMF lives on the goodwill of its editors, and editor NNW above put it best: "I want to create an encyclopedia, not to collect money for spying on me." AxelBoldt (talk) 18:15, 24 December 2013 (UTC)[reply]
You're right, but I think removed (oversighted) edits are out of the question here. Whatever else is available is available, and allowing freely available information to be collected programmatically sounds reasonable to me. Gryllida (talk) 08:38, 25 December 2013 (UTC)[reply]
It's not reasonable if the editors don't want it and if it doesn't further any identifiable objective of the foundation. In fact it is not only unreasonable but it's a misuse of donor funds. AxelBoldt (talk) 22:28, 25 December 2013 (UTC)[reply]
You should be interested in contributing to the #Tool_settings section below. Gryllida (talk) 01:56, 28 December 2013 (UTC)[reply]
Hello LVilla (WMF)! Your suggestion means that any tool that will be programmed in the future has to be checked and – if someone thinks that it is necessary – has to be discussed individually. My experiences until now: "the community should not have any say in the matter" and a quite short discussion "Technically feasible, legally okay... but what tools do we want?" started at lists.wikimedia.org. If we want it that way we will have to define who is "community". Is it the sum of all users of all WMF projects? Can single projects or single users declare to keep a tool (e.g. en:WP voted for no opt-out or opt-in for X!'s Edit Counter, but that would mean that my edits there will be used in that tool although I deny it completely for my account)? Which way will we come to a decision: simple majority or best arguments (and who will decide then)? Does a community vote on a tool X mean that there is no chance for a tool Y to try it a second time, or do we have to discuss it again and again?
We have to be aware of our different cultures of handling private data, or even of defining what's private and what's not. Labs "doesn't like" (nice term!) "harmful activity" and "misuse of private information". US law obviously doesn't evaluate aggregating data as misuse; I do. We discuss necessary "transparency" but do not have a definition for it. The time logs of my edits five years ago seem to be important, but you don't want to know my name, my address, my sex, my age, how I earn my money… which would make my edits, my intentions and my possible vandalism much more transparent than any time log. Some say "the more transparency the better", but this is a discussion of the happy – but dominating – few who live in North America and Western Europe. I think we should also think of those users who live in the Global South and want to edit problematic topics (religion, sexuality…). For them, aggregated user profiles may become a real problem, and they will always be a minority in any discussion. NNW (talk) 17:56, 28 December 2013 (UTC)[reply]
Everyone involved is aware that privacy values vary a great deal from community to community; but it seems very ill-advised to give the most restrictive standards a veto over the discussion, in practice and in principle. A clear parallel can be drawn with the discussion over images: while it would have been possible to restrict our standards to the subset deemed acceptable by all possible visitors, doing so would have greatly impoverished us. The same goes for usage of public data: we should foster and encourage new creative uses, not attempt (and fail) to preemptively restrict new tools to the minuscule subset nobody could raise an objection to. This does not preclude acting to discourage or disable a tool the community at large objects to – and the Foundation will be responsive to such concerns – but it does mean that this is not something that can be done with blanket bans.

To answer your more explicit questions, the answer will generally be "it depends" (unsatisfying as this sounds). Ultimately yes, the final arbiter will be the Foundation; but whether or not we intervene is dependent entirely on context as a whole; who objects, why, and what could be done to address those concerns. MPelletier (WMF) (talk) 00:48, 1 January 2014 (UTC)[reply]

So for programmers the sky's the limit, and it's up to the community to find out which tool might violate their rights and to discuss this again and again and again, because every tool has to be dealt with anew. The community has to accept that in the end an RFC like the one for X!'s Edit Counter is just a waste of time and that programmers – of course – are not interested in any discussion or compromise because it might cut back their tools. WMF is in the comfortable position that Meta is in the focus of only very few users and the privacy policy does not apply to Labs. It would be fair to admit that under these circumstances WP:ANON becomes absurd and in the near future – with more powerful tools – a lie. I understood "The Wikimedia Foundation, Inc. is a nonprofit charitable organization dedicated to encouraging the growth, development and distribution of free, multilingual, educational content" as "free and multilingual and educational content", but a user profile generated from my editing behaviour isn't educational. NNW (talk) 13:50, 4 January 2014 (UTC)[reply]
Unfortunately - it is. Just think of the possibilities for scientific research... Alexpl (talk) 08:14, 29 January 2014 (UTC)[reply]
A body donation would be great for scientific research, too. NNW (talk) 08:50, 29 January 2014 (UTC)[reply]
I think that's already covered by «Depending on which technology we use, locally stored data can be anything [...] to generally improve our services». Please be sure not to bring your organs close to the servers. ;-) --Nemo 08:57, 29 January 2014 (UTC)[reply]
One could squeeze a few doctoral titles out of in-depth research on contributors' identities in combination with their WP work. Compared to that, a body donation is somewhat trivial. So I do agree we have to identify and neutralise every attempt to collect user data as quickly and effectively as possible. Alexpl (talk) 09:47, 29 January 2014 (UTC)[reply]

Questions from Gryllida

Implementation as Extension

This appeal requests concealing the time of an edit. Would any of the supporters of the appeal be willing to demonstrate a working wiki with the requested change implemented as an Extension which discards edit time where needed? If sufficiently safe and secure, it could be added to a local German wiki by request of the community, and considered by other wiki communities later on. Many thanks. Gryllida (talk) 04:43, 24 December 2013 (UTC)[reply]

Tool settings

Have you considered requesting the Tool author to add an opt-out (or opt-in, as desired) option at a suitable scope? Gryllida (talk) 04:45, 24 December 2013 (UTC)[reply]

Example: editor stats:
«Note, if you don't want your name on this list, please add your name to [[User:Bawolff/edit-stat-opt-out]]».
--Gryllida (talk) 02:14, 28 December 2013 (UTC)[reply]
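Mechanically, a page-based opt-out like the one quoted above can be honored on the tool side with a simple check before any per-user statistics are shown. A minimal sketch (the page content, link format, and function names here are assumptions for illustration, not the actual tool's code):

```python
import re

# Hypothetical wikitext of an opt-out page such as [[User:Bawolff/edit-stat-opt-out]].
OPT_OUT_PAGE = "* [[User:ExampleUser]]\n* [[User:AnotherUser|Another]]\n"

def parse_opt_out_list(wikitext):
    """Collect usernames from [[User:Name]] links on the opt-out page."""
    return {m.group(1).strip() for m in re.finditer(r"\[\[User:([^\]|#/]+)", wikitext)}

def may_show_stats(username, wikitext):
    """A tool would call this before displaying per-user statistics."""
    return username not in parse_opt_out_list(wikitext)
```

In practice the tool would fetch the current page text via the MediaWiki API rather than hard-coding it, but the gating logic stays this small.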

FYI: The tool address is here. It is not mentioned in the appeal text. (I have notified the tool author, Ricordisamoa, of this discussion and potentially desired feature.) Gryllida (talk) 02:20, 28 December 2013 (UTC)[reply]

User:Ricordisamoa deliberately ignored the idea of an opt-in or opt-out and there is no chance to discuss anything: There's no private data collection, and only WMF could prevent such tools from being hosted on their servers: the community should not have any say in the matter. For complete discussion read Talk:Requests for comment/X!'s Edit Counter#Few questions. NNW (talk) 16:29, 28 December 2013 (UTC)[reply]
@Gryllida and NordNordWest: of course I accept community suggestions (e.g. for improvements to the tool), but only the WMF is competent on legal matters concerning Wikimedia Tool Labs. If any actions are to be taken, they will have to be taken by the WMF itself. See also [1]. --Ricordisamoa 03:04, 29 December 2013 (UTC)[reply]
Ricordisamoa, would you not be willing to add an opt-out? I would prefer this be solved without legal action or escalation, as it appears to be something within your power and ability, and many users want it. (It seems OK to decline the OPT-IN feature request.) Gryllida (talk) 09:07, 29 December 2013 (UTC)[reply]
@Gryllida: No. --Ricordisamoa 16:44, 30 December 2013 (UTC)[reply]
Ricordisamoa, I understand your view. It might make sense to document that in FAQ, if not already, at leisure. I appreciate you being responsive. Gryllida (talk) 07:17, 31 December 2013 (UTC)[reply]
As long as WMF wants to encourage programmers to do anything as long as it is legal, there is no reason for programmers to limit the capabilities of their tools. "Community" is just a word which can be ignored very easily when the "community" wants to cut capabilities. Only "improvements" will be accepted, and "improvements" mean "more, more, more". NNW (talk) 14:00, 4 January 2014 (UTC)[reply]

Discussion on same topic in other locations

Note that this issue has also been discussed in #Generation_of_editor_profiles and #Please_add_concerning_user_profiles. For a full history of this topic, please make sure to read those sections as well. —LVilla (WMF) (talk) 00:36, 8 January 2014 (UTC)[reply]

Opt-in

There is the possibility of a compulsory opt-in for generating user profiles at Labs. By this we would return to the Toolserver policy, which worked fine for years. No information would be reduced, fighting vandalism would still be possible, programmers could still write new tools, and of course there will be lots of users who are willing to opt in (like in Toolserver times). On the other hand, all other users who prefer more protection against aggregated user profiles can get it if they want to. I see no reason why this minimal solution of the appeal couldn't be realized. NNW (talk) 13:43, 13 January 2014 (UTC)[reply]

As has been stated elsewhere, this only gives a false sense of security. There are other websites that allow profiling anyway, and there's no way to stop them, so there's no clear reason to pretend that you have a choice. //Shell 20:56, 13 January 2014 (UTC)[reply]
As has been stated elsewhere something that is done somewhere doesn't mean we have to do it, too. NNW (talk) 21:32, 13 January 2014 (UTC)[reply]
Toolserver policy was only enforced upon user request. There's a lingering worry that some upset user might slap a tool author with a take-down request; this is demoralizing to authors after they have spent many hours developing the software. This discouraging effect is why we don't see many community tracking tools, like the Monthly DAB Challenge. I've got cool and interesting ideas, but won't waste my time. Dispenser (talk) 19:04, 21 January 2014 (UTC)[reply]
With an opt-in there would be no reason for any complaint. Everybody can decide if her/his data gets used for whatever or not and there will be still lots of users who will like and use whatever you are programming. Please think of those authors who spent many hours to create an encyclopedia and find themselves as an object of spying tools afterwards. Believe me: that's demoralizing. NNW (talk) 23:19, 21 January 2014 (UTC)[reply]
Users never spying on each other? I read enough ArbCom to know that's Fucking Bullshit. This goes beyond edit counters and affects community management. English Wikipedians do not want to watch over 2,000 articles for a month to understand what's happening at a WikiProject. I cite the DAB challenge because w:User:JustAGal was completely unknown to us until we expanded the data analysis. We've subsequently redesigned tools to work better for her.
Postscript: Dabfix, an automatic disambiguation page creation and cleanup script, has only a single user and may never recoup the hundreds of hours spent programming and testing it. If a tool is never used, then I've wasted time that I could've spent on something useful. Dispenser (talk) 02:56, 10 February 2014 (UTC)[reply]

Alternative Labs terms proposal: per-project opt-in

The discussion above has been pretty wide-ranging, with some voices in support of opt-in and others in support of opt-out. It is also clear that, for any global proposal, defining who should be consulted is a key challenge. With those two things in mind, Coren and I would like to propose a per-project opt-in; i.e., if a particular project (e.g., the German Wikipedia) wants to require it, then extraction of data from that project will require a per-user opt-in. This gives control directly to the specific communities who have been most concerned about the issue, while still preserving flexibility for everyone else. Thoughts/comments welcome. —LVilla (WMF) (talk) 01:13, 4 February 2014 (UTC)[reply]

So 797 communities will have to discuss whether they want to have an opt-in. Quite a lot of talk, I think, especially for those who are active on several projects and dislike the idea of aggregated user data at all. I have got 100 or more edits in 20 projects although I don't speak all those languages. How can I vote in such a complex matter when I am not able to understand these languages? Am I allowed to vote in every project in which I have edits, or do I have to meet some criteria? Why should a community control an opt-in/no opt-in when it is much easier that everybody takes control over his/her own data? It will lead to much more discontent among users when it is a decision of projects instead of single users. Not everyone at de:WP thinks data aggregation is a bad thing; not everyone at en:WP likes to see data aggregated. NNW (talk) 10:10, 4 February 2014 (UTC)[reply]
@LVilla (WMF): A question of clarification: Does your proposal mean that in case a project makes this decision and a single user does not opt in, the user's data will be excluded from the data pool which is accessible to developers on Labs and external developers? (Which would be much more than just a declaration of intention, but a technical barrier to analyzing that user's data at all.) And could you explain the necessity of the intermediate level of legitimation by the respective project? I'm not sure I understand what it's good for when you at the same time acknowledge that the single user himself has to make the decision to opt in. Wouldn't that be a shift of responsibility that no longer matches reality? Why not just skip that step? User activity does not in general take place on only one single wiki; in times where contributors use their home wiki + Commons + (sometimes) Meta or Wikidata, it seems to ignore the interdependencies we've built over the years. Alice Wiegand (talk) 23:26, 9 February 2014 (UTC)[reply]
@Lyzzy: If I understand your question correctly, then yes: if the project (such as WM-DE) opts out, then tools whose purpose is to build individual user profiles could not access the data of a user of that project who does not opt-in. The idea of doing this on a per-project basis is primarily because the objection to these sorts of tools appears to be highly specific to one project. (Not to say that everyone else on every other project loves it, but it seems undeniable that the bulk of the objection appears to be from one project.) Secondarily, it is because this rule is primarily symbolic (as discussed elsewhere in the page), so making it a per-project thing allows projects who care to make the symbolic statement without overly complicating the situation for others. Finally, it is because people objected to making it per-tool, because it was unclear what level of community discussion would be sufficient to force an individual tool to become opt-in. By making it per-project, we make it quite clear what sort of community discussion is necessary. This does lead to some inefficiencies, particularly for people who participate on meta and other projects. But none of the proposed solutions are perfect - all of them require discussions in a variety of places and inconveniencing different sets of users. Hope that helps explain the situation and the proposal. —LVilla (WMF) (talk) 02:54, 11 February 2014 (UTC)[reply]
@LVilla (WMF): I'm not sure you understood the question correctly. Would the non-opted-in user's data be somehow hidden in the Tool Labs database replicas as a technical restriction (which seems like it could be a significant performance hit for those wikis and would damage other uses of that data), or would this just be a policy matter that tool authors would be required to code their "data aggregation" tools to decline to function for the non-opted-in user on those wikis? Anomie (talk) 14:37, 11 February 2014 (UTC)[reply]
@LVilla (WMF):, I still don't understand if your proposal includes a technical solution. In part of your statements it reads as if the data of a user who does not opt-in after a project decided to go the opt-in-line will not be accessible to any tool on labs. That's entirely different from anything we talked about earlier (labs specific self-commitment) and it's also different from "there's a tag on the record, so please tell your tool not to analyse it". And because there is some kind of ambiguity, clarity about what the proposal is about is essential. Alice Wiegand (talk) 22:48, 17 February 2014 (UTC)[reply]
@Lyzzy: Sorry about the lack of clarity. The proposal does not include any technical measures. There are two types of technical measures possible:
(1) Publish less information. As described previously, this is inconsistent with how we have always done things, and would break a variety of tools.
(2) Audit individual tools on Labs. Given that most tool developers on Labs are likely to respect the policy, this would introduce a very high cost for a very low benefit.
So, yes, this would be a self-commitment, but the operations team at Labs would be able to kick off specific tools that violate the policy if/when the violation is discovered. Hope that helps clarify. —LuisV (WMF) (talk) 23:19, 18 February 2014 (UTC)[reply]
It does, thanks! Alice Wiegand (talk) 14:21, 19 February 2014 (UTC)[reply]
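Since the proposal, as Luis clarifies above, is a self-commitment rather than a database-level restriction, the enforcement would live inside each tool's own code. A hypothetical sketch of what that gate might look like (the project names, the opt-in registry, and all identifiers are invented for illustration, not any actual Labs configuration):

```python
# Hypothetical sketch of a per-project opt-in honored tool-side.
OPT_IN_REQUIRED = {"dewiki"}          # projects that chose to require opt-in (assumed)
OPTED_IN = {("dewiki", "Alice")}      # (wiki, user) pairs who opted in (assumed registry)

def may_aggregate(wiki, user):
    """Profile aggregation is allowed unless the user's project requires
    opt-in and this particular user has not opted in."""
    if wiki not in OPT_IN_REQUIRED:
        return True
    return (wiki, user) in OPTED_IN
```

The point of the sketch is that the replica data itself stays untouched; only well-behaved tools consult the check, which is exactly why the policy relies on the Labs operations team to remove tools that don't.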
Edit counters will exist, and have existed, with or without Labs adopting the Toolserver policy. What about just letting the DE Wikipedia Luddites block tool links they don't like? Dispenser (talk) 03:11, 10 February 2014 (UTC)[reply]
You might take a look at Requests for comment/X!'s Edit Counter and check where the opt-in supporters come from. It is a bit more complex than de:WP vs. the rest of the world. NNW (talk) 09:02, 10 February 2014 (UTC)[reply]
@Dispenser: I've pointed out repeatedly that I think this is a mostly symbolic policy. We're trying to strike a balance that allows some communities who particularly care to make their symbolic statement. Not ideal, I know, but none of the solutions will please everyone here.—LVilla (WMF) (talk) 02:54, 11 February 2014 (UTC)[reply]
I'm not sure I support opt-in in any case, but this compromise is obviously intended for dewiki IMO. The privacy (if you consider analysis of aggregate data to be private) of users who edit on most wikis would still be gone. PiRSquared17 (talk) 02:59, 11 February 2014 (UTC)[reply]
This supposed privacy never existed in the first place. All the necessary data is already public. All this debate is about forcing people who want to create these tools to do so on third-party servers rather than on Tool Labs. Anomie (talk) 14:40, 11 February 2014 (UTC)[reply]

This discussion fell asleep a while ago, unfortunately. Right now there is an RFC at en:WP about an edit counter opt-in which would run afoul of EU law by letting a community decide whether data of single users will be aggregated and shown. I still think that it is not the right of a community to decide this; it is only the concern of every user for him-/herself. NNW (talk) 11:31, 10 April 2014 (UTC)[reply]

For anyone still interested in this, I've opened a discussion at en:Wikipedia talk:Requests for comment/User analysis tool. SlimVirgin (talk) 23:13, 9 May 2014 (UTC)[reply]

Slight/Major (depending on POV) changes to definition of PII, this policy, and data retention policy as a result of question about headers from Verdy_p

verdy p asked a question about HTTP headers on the data retention policy, and so we did some final reviews of our language on that issue. As part of that review, we realized on Friday that there was a sentence in the main privacy policy that was poorly drafted (not future-proof) and inaccurate. It prohibited collecting four specific browser properties. This is bad drafting because it isn't future-proof: what if new browser properties are added by browser manufacturers? What if we realize other existing properties are problematic? It was also inaccurate because some of this sort of data may be collected (but usually not retained or logged) for useful, non-harmful purposes. For example, it could be used to help determine how to make fonts available efficiently as part of our language work.

Reviewing this also made us realize that we'd made a similar drafting mistake in the definition of PII – it was not flexible enough to require us to protect new forms of identifying information we might see in the future.

We think the best way to handle this is in three parts, and have made changes to match:

  1. Broaden the definition of PII by adding "at least", so that if we discover that there are new types of identifying information, we can cover them as necessary. This would cover these four headers, for example, but could also cover other things in the future. (change)
  2. Add headers specifically as an example in the data retention policy, so that it is clear this sort of data has to be protected in the same way as all other PII. (change)
  3. Delete the specific sentence. (change)

We think, on the whole, that these changes make us more able to handle new types of data in the future, while protecting them in the same way we protect other data instead of in a special case. Please let us know if you have any concerns. -LVilla (WMF) (talk) 18:47, 13 February 2014 (UTC)[reply]

Slight changes? Not in my view! This is a MAJOR change. Revoking a commitment, the DAY BEFORE debate is scheduled to close, that browser sniffing is incompatible with this Policy is no slight change. I'm trying not to blow my lid, but I'm really pissed off! The deadline needed extension because of the change and needs extension, retroactively, now. Although it appeared at first that this MAJOR change was slipped in under the wire, I understand that it was prompted by verdy_p's questions starting 1/15. Still, assuming all that LVilla says is valid regarding future-proof-ness, that in no way justifies total removal of the commitment from the policy. The policy is now, once again, a blatant lie. I had fixed it. The time for considering such radical changes was back in December when this was discussed AT LENGTH. @LVilla (WMF): what about that discussion? I'm disappointed that no one else involved in the December discussion said a thing about this troubling change!
  1. Change 1 is awful; see the December discussion. I said then, "Let's not set a bad example and be deceitful about what we collect…" With the changes LVilla has made, if adopted, Wikimedia WILL BE setTING a bad example and beING deceitful about what IT collectS. If that happens, I'll be ASHAMED to be associated with it!
  2. Change 2 is awful for the same reasons.
  3. Change 3 … slight? Yeah, and nothing Snowden blew the whistle on was illegal.

I think the community is owed an apology and I think the changes need to be revisited. We need to stop lying to our users. Lying to our fellow users is inexcusable. If anyone wants to talk to me about this offline, let me know. --Elvey (talk) 06:44, 19 March 2014 (UTC)[reply]

@LVilla (WMF):, involved in the December discussion:@Geoffbrigham:, @Drdee:, @Stephen LaPorte (WMF): No response to my comment above? If this isn't going to be addressed, I guess I can ping the board directly to let them know, before they vote. --Elvey (talk) 02:24, 25 March 2014 (UTC)[reply]

We didn't respond because I don't think your criticisms are accurate, and your tone suggests you do not want to have a constructive conversation. In particular, the change you've characterized as "deceitful" allows us to add more things, but not take them away, from the list. I think most people would agree that, as we mentioned above, this is a pro-user and pro-future-proofing step - it allows us to protect users more in the future, but not less. If you'd like to take that to the board, feel free, but I'll feel very comfortable explaining to them why you're wrong. Sorry that we disagree. —Luis Villa (WMF) (talk) 18:09, 28 March 2014 (UTC)[reply]
You revoked a commitment to users that browser sniffing is incompatible with this Policy. That is no slight change, no matter how you spin it. And it seems inexplicable to me why you think that revoking a pro-user commitment to collecting less data is a "pro-user" step. But intelligent people disagree sometimes. --Elvey (talk) 02:31, 6 May 2014 (UTC)[reply]
The bottom line is that I pushed for and gained consensus for language that made it clear that the privacy policy would not allow browser sniffing. It was added and stayed in the draft for weeks. Then on the last day, it was removed. Now we have a privacy policy that allows browser sniffing, and yet claims to be informative. That's an untenable situation. That's the bottom line. If this is in any way inaccurate, I welcome corrections. Specific corrections only. Vague assertions based on no specific facts, as in your last comment, are not appropriate. --Elvey (talk) 06:35, 6 May 2014 (UTC) (update 20:49, 10 May 2014 (UTC): @Geoffbrigham:, @Drdee:, @Stephen LaPorte (WMF):, @LuisV (WMF): Well? )[reply]
But wait, the checkuser tool contains the IP address, Operating system and browser in order to identify potential sockpuppet accounts. Are you saying that they cannot do that anymore? Reguyla (talk) 18:15, 16 May 2014 (UTC)[reply]
The concern is that these individual things could be combined into a maybe-unique tracking tool, like a cookie. As we pointed out in the original comment above, we think the best way to deal with this concern is through the definition of PII and the data retention policy. This way we treat it in the same, careful way that we treat other personal information, instead of creating a separate, badly-defined category that can't be expanded or adapted as technology changes. We think overall that is both much safer for users and more likely to work in 3-5 years. —Luis Villa (WMF) (talk) 00:36, 21 May 2014 (UTC)[reply]
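The tracking concern Luis describes can be made concrete in a few lines: individually bland header values, concatenated and hashed, behave like a quasi-unique identifier, much like a cookie. The header selection below is purely illustrative and does not describe any WMF system:

```python
import hashlib

def header_fingerprint(headers):
    """Combine several request headers into a single hash. No one value is
    identifying on its own, but the combination can act as a tracking ID,
    which is why such headers are treated as PII under the policy."""
    keys = ("User-Agent", "Accept-Language", "Accept-Encoding", "Accept")
    parts = [headers.get(k, "") for k in keys]
    return hashlib.sha256("|".join(parts).encode("utf-8")).hexdigest()

browser_a = {"User-Agent": "Mozilla/5.0 (X11; Linux x86_64)", "Accept-Language": "en-US"}
browser_b = {"User-Agent": "Mozilla/5.0 (X11; Linux x86_64)", "Accept-Language": "de-DE"}
```

One changed header value yields a completely different fingerprint, yet a stable browser configuration yields a stable one, which is what makes the combination usable for tracking.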
Thanks, LuisV and Elvey. Elvey: your concerns were noted and welcome. Luis's position is persuasive and considerate; and reflected in the policy adopted. SJ talk  19:10, 21 May 2014 (UTC)[reply]
With all due respect, Luis, if the technology to do this right is 3-5 years out, then we shouldn't be leaving the privacy policy vulnerable to abuse for the next 3-5 years. I agree the policy needs updating and I agree that tools like the CheckUser tool need to be updated. But exempting a large chunk of the population with the most access to PII just doesn't make sense. Just in the last week there has been a flurry of incidents on the English Wikipedia where admins, and even some members of the Arbitration Committee (who have access to the Oversight and CheckUser tools, BTW), have displayed stunning lapses of good judgment. They told multiple users to "fuck off", literally, not figuratively; they issued legal threats to an editor; and someone even contacted an editor's employer and included their Wikipedia user name and their real-life identity. Now with this privacy policy they would be exempted from the privacy policy completely. You may not agree with me and you may not change a word in the privacy policy, but I wanted to be on record for stating clearly and with no misunderstandings that these things are not ok. Reguyla (talk) 14:10, 22 May 2014 (UTC)[reply]
Can you post a link to a thread where this flurry of incidents was discussed?--Elvey (talk) 18:24, 3 June 2014 (UTC)[reply]

@LVilla (WMF):, would you please change the subject of this thread, as suggested by SJ, here? I went ahead and changed it.

Note, Folks: Wider discussion opened with an RFC at .en's Village Pump: https://en.wikipedia.org/w/index.php?title=Wikipedia:Village_pump_(policy)#New_privacy_policy.2C_which_does_not_mention_browser_sniffing. --Elvey (talk) 01:21, 28 May 2014 (UTC)[reply]

For the record, I'm less concerned by the classic sniffing of browser capabilities: these headers exist for the purpose of technical adaptation and compatibility, not really for "spying" on users.
My question was about a much larger set of headers, including those that could be inserted by random browser plugins, which we neither use nor need in order to make our content available to the largest public.
But there are some concerns about logging and keeping data like preferred language: this preference is only temporary and does not need archiving. All that matters is knowing which language to render for the UI, and the content of the UI is not personal data and is unrelated to what users are doing on Wikimedia sites. If this data (including possible unique identifiers generated by plugins, which are even stronger than IP addresses) is used for collecting demographic data, the policies used in Wikimetrics should be applied and we shouldn't need to archive it for individual users: this is personal data, used only by CheckUser admins, and it should be subject to the CheckUser policy and not used for anything else.
It is a concern because many users have no idea what is transmitted in these protocol headers. Sometimes these unique IDs are inserted into protocol headers by malware or adware that tracks users without their permission, in an attempt to bypass standard cookie filters and track those users wherever they go on the Internet; we should not depend on such headers, and we should make sure that no third party is able to derive user data from our archived logs and correlate it with tracks left on other sites.
Note that some plugins append these IDs to the "User-Agent" string (normally used only to sniff browser capabilities), so the full unfiltered User-Agent string should also be considered personal data (and the substrings we sniff in User-Agent should be very generic, never user-specific or specific to a very small community, including obscure browser names).
If developers need a log of values used in User-Agent strings, extracted from server logs in order to study trends in new browser types we should support, they should request only that data and get an archive taken from a limited time period with minimal filtering of users (for example, filtering by version, such as IE prior to IE5, is OK; per country is OK; per large ISP is OK; but per IP or per small IP block is bad). Such extracts of data for research and software development/improvement should be provided only to certain developers who agree not to use specific substrings found in these logs that could identify very small groups of users. We should sniff substrings in User-Agent strings only if they are likely to match more than about 100 users over a short period outside peak hours (and this should not include the early detection of alpha versions of browsers tested by a few users; such detection could instead be done experimentally on small wiki test sites or private wiki sites, whose content does not really matter and where standard anti-abuse policies may be tested with content that would not be accepted on a standard open project). verdy_p (talk) 09:14, 5 June 2014 (UTC)[reply]
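The aggregation verdy_p describes (keep only generic browser-family tokens, suppress any group smaller than roughly 100 users) could be sketched as follows. This is a minimal illustration only: the family list, the regex, the function names, and the threshold are all hypothetical assumptions, not anything Wikimedia actually runs.

```python
import re

# Generic browser families considered safe to record (illustrative list only).
KNOWN_FAMILIES = ["Firefox", "Chrome", "Safari", "Opera", "MSIE"]

def generalize_user_agent(ua):
    """Reduce a raw User-Agent string to a coarse "family major-version" token.

    Anything that does not match a known generic family is bucketed as
    "Other", so rare or plugin-injected strings cannot single out a user.
    """
    for family in KNOWN_FAMILIES:
        m = re.search(re.escape(family) + r"[/ ](\d+)", ua)
        if m:
            return "%s %s" % (family, m.group(1))
    return "Other"

def releasable_counts(user_agents, min_group=100):
    """Aggregate generalized tokens, suppressing groups smaller than min_group."""
    counts = {}
    for ua in user_agents:
        token = generalize_user_agent(ua)
        counts[token] = counts.get(token, 0) + 1
    return {t: n for t, n in counts.items() if n >= min_group}
```

Under this sketch, a unique plugin-injected token like `TrackerID/abc123` never reaches the released data: it falls into the "Other" bucket, and any bucket below the threshold is dropped entirely.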

There's still a need for more information

I understand that you didn't want to commit yourself absolutely in the policy. Nonetheless, "Once we receive personal information from you, we keep it for the shortest possible time that is consistent with the maintenance, understanding, and improvement of the Wikimedia Sites, and our obligations under applicable U.S. law." is a question waiting to be asked. Can you provide the users with a report of how long these retention times are, and especially, what obligations you feel you have under U.S. law? Wnt (talk) 10:57, 22 March 2014 (UTC)[reply]

Seconded. --Nemo 11:04, 22 March 2014 (UTC)[reply]
You can ask about the requirements of US law - but you can hardly ask Wikimedia to promise in the Privacy Policy (by giving a specific timespan) that those laws won't change. Alexpl (talk) 09:55, 22 April 2014 (UTC)[reply]
Retention timespans consistent with the maintenance, understanding, and improvement of the Wikimedia Sites can and should be provided.
And RPatel notes that they are provided, at m:Data_retention_guidelines.
Retention timespans consistent with perceived obligations under applicable U.S. law can and should be provided.
These, on the other hand, are NOT provided at m:Data_retention_guidelines.
--Elvey (talk) 02:26, 6 May 2014 (UTC)[reply]
They sure can. But I see little benefit to the users, since such timespans do not apply to warrantless domestic wiretapping and data retention without any judicial oversight by state agencies. Alexpl (talk) 17:03, 8 May 2014 (UTC)[reply]
You're being myopic. Those with dragnet surveillance abilities aren't the only ones who can trample privacy rights. Privacy rights are regularly trampled without dragnet surveillance. --Elvey (talk) 20:55, 10 May 2014 (UTC)[reply]
The archives should prove how myopic I am about third parties. But the fact remains that data could show up after the retention timespan consistent with the law, and I don't want WM to be held accountable for that because it had promised to have that data deleted by a specific date. Something like: "We will delete it after X years - but it won't disappear if the data-mining industry or a state agency has gotten its hands on it before that date" does not sound helpful. Alexpl (talk) 06:05, 12 May 2014 (UTC)[reply]
I could have been clearer. I just meant to argue that the information isn't of little benefit. Something like, "We will delete it after 90 days. We have not been ordered to keep it longer by any government agency. But a court or agency could order us to do so, and could order us to keep the order secret (See enwp:National security letter). We take reasonable security precautions to protect personal information." does, IMO, sound helpful. --Elvey (talk) 22:42, 22 May 2014 (UTC)[reply]
Hi Wnt. Alexpl (talk) is accurate that we cannot predict whether our obligations under U.S. law will change in the future and require us to keep certain information for a longer or shorter period of time. One of the reasons that we chose not to include time frames in the privacy policy is that we want the flexibility to adjust our retention times as the law or our technological needs change, without seeking board approval for every adjustment. We do, however, provide our users with a better idea of what our promise to keep information for the shortest time possible means through our document retention guidelines. We also recently released requests for user information procedures and guidelines to provide our users with more information about our obligations under U.S. law and how we respond to requests for user information. Finally, we’re happy to answer, to the best of our ability, any specific questions you have if either of those documents don’t address them. RPatel (WMF) (talk) 20:29, 20 May 2014 (UTC)[reply]

Tracking pixel

Where's the discussion which determined that this technique with "less than the best reputation" is needed on the voyage? The phrase "tracking pixel" doesn't even exist in the cookie FAQ. More dirty laundry hanging in the front yard, s'il vous plaît, if you're serious about public comment. MaxEnt (talk) 07:29, 8 May 2014 (UTC)[reply]

In the archive maybe. I'm not qualified to answer the FAQ problem. Alexpl (talk) 08:24, 9 May 2014 (UTC)[reply]
https://meta.wikimedia.org/wiki/Talk:Privacy_policy/Archives/2014 Obviously they're very, very serious about creating the appearance of consultation with, and acceptance of help from, the user community. However, the edit history shows otherwise: I saw no users arguing for the opaqueness around critical issues like profiling that I tried to address through comments and edits, and yet the edits I proposed and contributed were removed. On the plus side, although the policy is certainly not clear about what is collected, at least it no longer claims to be clear about what is collected. Earlier versions were both unclear and yet claimed to be clear. --Elvey (talk) 03:25, 11 May 2014 (UTC)[reply]
MaxEnt (talk), you can find tracking pixels in our glossary of key terms. If you would like to read some of the discussion we had during the consultation regarding this topic, please see answers from tech here and discussion regarding third party data collection here. RPatel (WMF) (talk) 18:59, 14 May 2014 (UTC)[reply]
RPatel (WMF), please stop keep not conforming to gender stereotypes of this awesome New Yorker cartoon! </joke> :-) (The #Anchors you added are helpful.) --Elvey (talk) 08:35, 4 July 2014 (UTC)[reply]

Edits about tracking and personal information

These edits by User:Elvey were remedied. User:LVilla (WMF), Elvey, please share context? (Like you did for some other thing here). Gryllida (talk) 04:30, 7 January 2014 (UTC)[reply]

To explain why I changed those -
  • this edit removed "retained" from the description of what we do with direct communications between users. I did this because it is not accurate to say that we retain those - we may in some cases, but in most cases that I'm aware of, we don't.
    So does anyone think that justifies silence on this important topic? Not that I've seen (other than staff.)--Elvey (talk) 03:25, 11 May 2014 (UTC)[reply]
  • this edit removed an example about tracking pixels that Elvey had edited. Elvey's edit correctly pointed out that the example was a little hard to understand, but I don't think his edit improved it. I spent a little bit of time trying to explain it better without writing a book or assuming the reader is a web developer, and failed, so I deleted it. If folks want to take another stab at it, I'm happy to discuss it here.
Sorry for not explaining this earlier, User:Elvey - I do appreciate that you were trying to improve it :) —LVilla (WMF) (talk) 00:00, 9 January 2014 (UTC)[reply]
So does anyone think that justifies increasing opacity regarding this important topic? Not that I've seen (other than staff.) --Elvey (talk) 03:25, 11 May 2014 (UTC)[reply]

Layout problem

The blue-box summary for each major section in the left margin seems to be creating blank space in the main prose, as if there were a {{clear}} around it rather than being adjacent to the actual text. I'm using Firefox 29.0 on OS X. Seems to resolve itself if I make my browser window extra wide, so maybe something is hardcoded for some minimum something? Sorry, I can't upload images to meta to illustrate it. DMacks (talk) 00:16, 9 May 2014 (UTC)[reply]

Hi DMacks, thanks for pointing this out! We are looking into whether we can fix this. RPatel (WMF) (talk) 19:03, 14 May 2014 (UTC)[reply]

Exemptions from the Privacy Policy

I'm going to make this brief, because I don't think anyone really cares anyway, but I have a bit of a problem with the wording of this new privacy policy, in particular the part which says that admins and functionaries (checkusers and the like) are exempt. Now, I realize that there is a developed culture where the admins here are treated like royalty, and I agree there needs to be some language that allows them to do their tasks. But to say they are exempt from policy referring to privacy information is a big problem for me. Functionaries I can go with, because their identity and age are vetted. But administrators are selected by the community and their identities are never verified. There are enough problems with admin abuse on Wikipedia. We really should not be writing language that specifically excludes them from the privacy policy. Reguyla (talk) 02:17, 15 May 2014 (UTC)[reply]

Are you referring to the "To Protect You, Ourselves & Others" section? The box on the left summarizes the cases when "users with certain administrative rights" can disclose information:
  • enforce or investigate potential violations of Foundation or community-based policies;
  • protect our organization, infrastructure, employees, contractors, or the public; or
  • prevent imminent or serious bodily harm or death to a person.
The third definitely makes sense. The second one is somewhat vague (protect the public/employees from what?), but seems reasonable. However, the first one could potentially be problematic. Violating WMF policy is very different from violating a "community-based" policy. Which part of the new privacy policy are you concerned with? I don't see anything where admins "are exempt", but I admit I only searched the document for the word "admin[istrator]". PiRSquared17 (talk) 22:07, 15 May 2014 (UTC)[reply]
Have you tried uncollapsing? The most important parts of the text are the two collapsed ones. Or, Talk:Privacy_policy/Archives/2014#Google Analytics, GitHub ribbon, Facebook like button, etc. and the three threads linked from it (plus some others). --Nemo 16:34, 16 May 2014 (UTC)[reply]
Oh yeah, I read every word, which leads to a separate issue: it is very long and sufficiently complex and legalistic to ensure very few will take the time to read it. In regard to the matter of admins and privacy, there are multiple problems with not clearly defining their role in the privacy policy. For example:
  1. There are about 1400 admins on the English wiki alone, with varying levels of activity and interpretations of policy. Of those, only about 500 edit more than once every thirty days, and of those, fewer than 100 edit every day.
  2. They are not vetted through the WMF and are anonymous, making privacy security dubious.
  3. Even the functionaries, like checkusers, are questionable because, even though their identities are verified through the WMF, the verification process is pretty limited and the documentation isn't retained.
So I would recommend rewording the part about admins like CheckUser to refer to functionaries instead of admins, and I would lose the loose wording of who is exempt. We don't have that many roles; we should just list them. Reguyla (talk) 18:12, 16 May 2014 (UTC)[reply]
@Nemo: Why are those boxes collapsed? They contain important information.
@Reguyla: Ah, I think I see what you are referring to now. "Administrative volunteers, such as CheckUsers or Stewards" does not make clear whether it includes normal admins (sysops) or only CU/OS/Stewards (who are at least identified to the Wikimedia Foundation and have specific policies, as well as the access to nonpublic information policy). It would make sense to list the specific groups or rights this covers. I don't see why admins should be exempt from policies regarding privacy. This wording seems to allow admins, essentially normal users with a few extra buttons, to disregard the privacy of other users, if I am interpreting it correctly.
@LVilla (WMF): are normal admins (sysops) exempt from this policy, or does that wording only apply to CU/OS/Stewards, who have more specific policies? PiRSquared17 (talk) 21:53, 16 May 2014 (UTC)[reply]
Hi Reguyla & PiRSquared17. Thank you for your comments and questions. We wanted to clarify why administrative volunteers are excluded from the privacy policy. The privacy policy is meant to be an agreement between the Foundation and its users on how the Foundation will handle user data. The Foundation can’t control the actions of community members such as administrative volunteers, so we don’t include them under the privacy policy. However, administrative volunteers, including CheckUsers and Stewards are subject to the access to nonpublic information policy (access policy). Under the access policy, these volunteers must sign a confidentiality agreement which requires them to treat any personal information that they handle according to the same standards outlined in the privacy policy. So, even though administrative volunteers are not included in the privacy policy, the access policy and the confidentiality agreement require them to follow the same rules set forth in the privacy policy. I hope that clears up any confusion. RPatel (WMF) (talk) 20:48, 20 May 2014 (UTC)[reply]
The Access to nonpublic information policy does not apply to "normal" sysops who are not identified to the Wikimedia Foundation, but who may have access to some private data (deleted edits). PiRSquared17 (talk) 23:07, 20 May 2014 (UTC)[reply]
@RPatel, Thank you for the response, but here is my problem with that. Checkusers, Oversighters and Stewards may sign an agreement and have their information vetted. Regular admins do not. They are still anonymous, and since "normal" admins have access to material which has been deleted, oftentimes including personal details like email addresses, phone numbers, etc. from edits made, or derogatory material on BLPs, significant privacy issues can still arise. Also, the argument you make that "the access policy and the confidentiality agreement require them to follow the same rules set forth in the privacy policy" is also applicable to regular editors, who frequently do not follow them. We have seen over the years a number of admins get in trouble, desysopped, banned, etc. for violations. Worse, we have also seen a number of admins, including some in the last week or two on Wikipedia, get away with pretty severe violations. So although I do not expect the WMF to make any changes, I still have serious concerns and hesitations about admins being exempted from the privacy policy. Frankly, admins are already held to a much lower bar than regular editors and are frequently allowed to get away with things that would cause a regular editor to be blocked or banned entirely from the site, so this is just another example of enabling a group of editors to be exempt from the policies that govern the site. Reguyla (talk) 20:22, 21 May 2014 (UTC)[reply]
@RPatel (WMF):, @LVilla (WMF): Reguyla- We haven't heard back since 16/20 May, so I made this diff, because regular administrators clearly do have access to nonpublic information covered and defined by the Privacy Policy, and because of the statement above by RPatel (WMF) that
"The Foundation can’t control the actions of community members such as administrative volunteers, However, administrative volunteers... are subject to the access to nonpublic information policy. Under the access policy, [all] these volunteers must sign a confidentiality agreement which requires them to treat any personal information that they handle according to the same standards outlined in the privacy policy."
I was reverted by Odder ~40 mins ago, without so much as an edit summary or other follow-up.
PiRSquared17 On what basis can you say that? I've provided two arguments for why that's not the case. We can't just put in place policies that are a more contradictory mess than the status quo. --Elvey (talk) 19:30, 27 May 2014 (UTC)[reply]
@PiRSquared17, I don't buy the argument that we can't control them so we just exempt them from the policy. That makes absolutely no sense. Reguyla (talk) 20:10, 27 May 2014 (UTC)[reply]
@Elvey: My basis for that claim: The new version of the access to nonpublic information policy does not include admins in the list of users it covers. Also, admins do not necessarily meet the minimum requirements listed there. In fact, it says "Community members with the ability to access content or user information which has been removed from administrator view". If they wanted to include admins, then they wouldn't have added "which has been removed from administrator view". Being bold is fine in most cases, but (IMHO) you can't just add something to a WMF policy draft that was recommended to the Board without even discussing it on the talk page. FYI this seems to be the current version of that policy. PiRSquared17 (talk) 20:21, 27 May 2014 (UTC)[reply]
@Reguyla: I'm not sure what you're referring to (whom can't we control?). PiRSquared17 (talk) 20:21, 27 May 2014 (UTC)[reply]
I'm quoting your statement above where you say "The Foundation can’t control the actions of community members such as administrative volunteers". If that is the case, then that would also imply you can't control the editors either, which makes the whole privacy policy pointless. You absolutely can control the admin corps; you have simply chosen not to, and that is the problem. On en, anyway, the admins have ingrained a culture where they are above reproach and are exempt from policy already. It's next to impossible to remove the tools from even the most abusive admins, and now they are exempted from the privacy policy too. I'm sorry, but I have to wave the BS flag on that. I don't really even agree that the functionaries should be "exempt"; rather, they should be identified as having special roles that "require" them to have access. Admins are not vetted through the WMF and they should not be exempt from the privacy policy. Reguyla (talk) 20:29, 27 May 2014 (UTC)[reply]
@Reguyla: I never said that; RPatel did. For what it's worth I agree with you. PiRSquared17 (talk) 20:45, 27 May 2014 (UTC)[reply]
Did you see this, Reguyla? PiRSquared17 (talk) 15:15, 28 May 2014 (UTC)[reply]
Yes sorry, it looked like you said it. Reguyla (talk) 17:12, 28 May 2014 (UTC)[reply]
Good points, @Reguyla:. What language changes should we make to avoid using "exempt" ? --Elvey (talk) 20:53, 27 May 2014 (UTC)[reply]
I don't know, to be honest; I would have to think about it. I'm pretty disillusioned with Wikipedia and the WMF at the moment, so frankly I don't think they would listen to me anyway and anything I said would be a waste of my time. I just wanted to make sure it was known that making admins exempt from the privacy policy was absolutely not appropriate and was going to enable more abuse. Realistically nothing would ever happen anyway. The WMF stands behind the admins and I don't think they have ever interfered, and the same goes for the admins themselves. Even when one is wrong, they rarely admit it publicly and find reasons to defend even the most offensive violations of policy. So even if we said they were going to be cooked over open flames if they violated the privacy policy, nothing would happen, because the WMF doesn't have any intention or desire of involving itself in the projects. It's beneath them. Reguyla (talk) 15:03, 28 May 2014 (UTC)[reply]
PiRSquared17: Either way, something must change. I agree when you say it's not OK that "This wording seems to allow admins, essentially normal users with a few extra buttons, to disregard the privacy of other users, if I am interpreting it correctly." We both see it as a problem. If I mustn't be bold, what then? It's OK for Odder to revert without so much as an edit summary or other follow-up? I say no. What do you say? We did discuss the need for a change, if not the actual change that I made, on this talk page, and the WMF took no action, for over a week, and I referred to this talk page in my edit summary. Please suggest or make a change that's better than the one I made. --Elvey (talk) 20:53, 27 May 2014 (UTC)[reply]
I think your edit summary here is a good example. PiRSquared17 (talk) 21:02, 27 May 2014 (UTC)[reply]
PiRSquared17: Of? --Elvey (talk) 20:53, 27 May 2014 (UTC)[reply]
The community consultation is over, according to the notice on the privacy policy and the access to nonpublic information policy, so I'm not sure. Has anyone from the WMF (perhaps RPatel) replied since? PiRSquared17 (talk) 22:07, 3 June 2014 (UTC)[reply]

Hi all. Sorry for the delay in response and for any confusion caused by my earlier response that referred to “administrative volunteers” — different types of volunteers should not have been lumped together with that phrase.

Correct me if I'm wrong, but you seem to be concerned that regular administrators (sysops) are not subject to the Access to Nonpublic Information Policy, but have access to material that has been removed from general public view (which may contain sensitive information, like email addresses, that was posted publicly).

Information posted publicly online, even if it is later removed from general public view, falls outside the scope of the Privacy Policy. The Privacy Policy covers "personal information", which is defined as "[i]nformation you provide us or information we collect from you that could be used to personally identify you" "if it is otherwise nonpublic.” Because sysops do not handle "personal information" within the scope of the Privacy Policy, we did not apply the Access Policy to sysops. Rules regarding sensitive information that has been removed from general view but is still viewable by sysops are addressed in other policies, such as the oversight policy. Under the oversight policy, if a user is uncomfortable with sysops being able to view sensitive information in a particular situation, the user can ask for that information to be hidden. Oversighters, who would handle these types of requests, are subject to the Access Policy.

It is also worth noting that the Access Policy is meant to set minimum requirements for community members that do handle “personal information” as defined by the Privacy Policy. It does not limit a particular project’s community from imposing additional requirements or obligations upon community members, such as sysops who handle sensitive information. Each community must decide what is right for them and create policies accordingly. RPatel (WMF) (talk) 00:04, 4 June 2014 (UTC)[reply]

@RPatel (WMF): - That isn't entirely true, and let me give you a couple of examples why. Personal information that would normally not be available or visible online is frequently passed around the backchannels, through mailing lists and IRC, while discussing issues or just in idle chitchat. That information is not generally allowed on Wikimedia projects and would generally be oversighted or at least revdelled, but it cannot be oversighted in the emails and IRC channels, and these are frequently logged and retained. I think we have all seen cases where these were used or leaked in inappropriate ways. The UTRS system is another good example. Lots of personal info is available there and any admin can have access; in fact, there is a warning message stating as much when the UTRS system is used. Many non-admins have access to it as well, making the problem even worse, but that's a separate issue. Exempting admins from the privacy policy as it is currently worded is asking for trouble. IMO, if it ever went to court, any decent lawyer would have a good argument for any number of exceptions to why the privacy policy violated users' rights/reasonable expectation of privacy. I'm fairly surprised it hasn't already happened.
This privacy policy doesn't just cover Wikipedia or a couple of projects; it is an umbrella policy designed to cover them all. Now, if the WMF wants to restrict adminship to those who are willing to provide personal info to the WMF to verify their identity, or to do that for those who wish to operate in the backchannels of IRC or UTRS, then maybe I could agree it's fine. Another good step forward would be for the WMF to perform some oversight of the functionaries and admins of the Wikipedia site, which is sorely lacking. But I don't think doing that is going to happen.
I for one already have serious concerns about the collegiality and civility problems of the English Wikipedia and the severe lack of leadership and oversight of the admins and functionaries of the project. If the site continues down its current path without some oversight or intervention by the WMF HQ team, no one is going to want to edit except some bullies and POV pushers (it's almost to that point now). Exempting them is the last thing we should be doing to curb the rampant abuses that are already occurring. Reguyla (talk) 17:51, 4 June 2014 (UTC)[reply]

Definitions, simplification, reopening discussion

RPatel (WMF) [edit:revised] Can you add a definition of nonpublic information based on the one from Confidentiality_agreement_for_nonpublic_information to the definition section, or remove the need for one? SMcCandlish, we could fork/edit Privacy_policy/Proposed_Revisions --Elvey (talk) 10:25, 24 May 2014 (UTC)[reply]
{{editrequest}}
So, I don't think we should still have a notice that "Our Privacy Policy is changing on 6 June 2014". But since we do, to which version will we be switching? The one in place a month ago, or the one with the fix RPatel just made? I don't think we can do the latter. So I think we should fix the outstanding policy issues and then repost notice that "Our Privacy Policy is changing on x xxx 2014". --Elvey (talk) 18:25, 27 May 2014 (UTC)[reply]
Hi Elvey, thanks for the question and suggestion. The privacy policy that will go into effect is the one that was approved by the Board, only changed since the Board's approval to correct typos, like the one pointed out above. To respond to your suggestion to add a definition of nonpublic information to the privacy policy, I wanted to point you to the definition of "personal information" in the definition section, which covers information that "is otherwise nonpublic and can be used to identify" users. The definition from the confidentiality agreement was not included in the privacy policy because that definition is geared towards information that volunteers would have and that is governed by the access to nonpublic information policy. For example, the confidentiality agreement definition specifies information users "receive either from tools provided to you as an authorized Wikimedia community member or from other such members." --unsigned comment by RPatel (WMF).
RPatel (WMF), are you aware that the Privacy Policy itself uses the term nonpublic information multiple times? Some of those uses of the term are far from any reference to the confidentiality agreement. I find it hard to imagine an argument for why it is better to leave the definition (and its very existence) hidden away. What's the benefit? Elvey (talk) 27 May
Hi Elvey. First, sorry about the previous unsigned comment! I think my previous comment was unclear. I read your suggestion as to take the exact definition from the confidentiality agreement and add it to the privacy policy, and I was trying to explain that the confidentiality agreement definition would not make sense in the privacy policy context (because it talks about authorized community members getting information through tools). But if you are just suggesting that a definition of nonpublic information be included, not necessarily the same definition from the confidentiality agreement, I want to respond to that as well. The privacy policy defines personal information and delineates how the Foundation handles it. Nonpublic information is a broader term that does not necessarily include personal information. For example, anonymized data that contains no personal information is "nonpublic" until we release it, whereas non-anonymized data containing personal information that has not been released (and would not be except as permitted under the privacy policy) would be both "nonpublic" and "personal information". The privacy policy does use the term "nonpublic information", and in most cases it's in reference to certain users with admin rights, "who are supposed to agree to follow our Access to Nonpublic Information Policy", and nonpublic information is discussed in that policy. I don't think we're trying to hide its definition or existence but instead trying to be more specific by defining personal information. RPatel (WMF) (talk) 20:52, 28 May 2014 (UTC)[reply]
RPatel (WMF), Thank you for that explanation and for your patience. Indeed: nonpublic information, private information, private user information, personal information; a lot of terms, so perhaps a Venn diagram is called for. After having read the "Privacy-related pages", a user should know what is collected, know that WM employs it, and that access is restricted to approved projects and user groups only. How should we resolve the problem of "nonpublic information" not being defined where it is used? I have two ideas, A and B:
A) If we eliminate the term 'nonpublic information' from the Privacy Policy like this, is it a better policy? The Privacy Policy stops committing to protect the anonymized data you mention; is changing the status of the data in that section of the Venn diagram a significant negative? I don't see it. We simplify the document, eliminating an undefined term.
B) Include a definition of nonpublic information. I propose this one, which I derived from the extant one: "Nonpublic information. Nonpublic information is private information, including private user information, disclosure of which is covered by the Confidentiality agreement for nonpublic information. Nonpublic information includes personal information. It does not include information about a user that that user otherwise makes public on the Wikimedia projects."
Thoughts on these or other solutions, or the other changes I'm discussing with LVilla? --Elvey (talk) 20:21, 3 June 2014 (UTC)[reply]
Hi Elvey. Sorry for the delay in responding. We added a definition of nonpublic information here. Thank you for the suggestion! RPatel (WMF) (talk) 18:24, 2 July 2014 (UTC)[reply]
Wahoo! Thank you for taking it. --Elvey (talk) 07:39, 4 July 2014 (UTC)[reply]

Typo

The phrase "such a merger" should read "such as a merger". If this is a community-developed privacy policy draft, why isn't it editable? I shouldn't have to post notices like this just to get a typographical error fixed. Semi-protection from IP vandals ought to be sufficient. If a page as contentious as en:w:Wikipedia:Manual of Style can be editable, so can this.  — SMcCandlish ¢ ≽ʌⱷ҅ʌ≼  08:09, 17 May 2014 (UTC)[reply]

Because, SMcCandlish, this Policy is approved by the Board, and the Board can only approve a particular version. People can't just add whatever they think "improves" the document afterwards, just as administrators can't just "improve" passed legislation. — Pajz (talk) 08:40, 17 May 2014 (UTC) (That said, I'm very sure both Legal and the Board welcome pointers to such errors, I'm just saying that this is unlike something like the Wikipedia Manual of Style.)[reply]
Somewhere in there it still says it's a draft being worked on, not an approved final policy. That's why I thought it should be editable.  — SMcCandlish ¢ ≽ʌⱷ҅ʌ≼  09:57, 17 May 2014 (UTC)[reply]
Thank you, SMcCandlish. We will fix the typo. RPatel (WMF) (talk) 03:05, 20 May 2014 (UTC)[reply]
Fixed! Thanks. RPatel (WMF) (talk) 20:16, 20 May 2014 (UTC)[reply]

Edit request (minor) - sectionlink to What This Privacy Policy Doesn't Cover

A minor edit request: the table in the Definitions section contains the words listed in the "What This Privacy Policy Doesn't Cover" section below. A sectionlink would seem more natural and user-friendly: listed in the What This Privacy Policy Doesn't Cover section below.

I appreciate that edits to this document can be costly. If this is more than a trivial change, please feel free to ignore. - TB (talk) 10:24, 25 May 2014 (UTC)[reply]

ERq 8592464 - phrase

"We believe that information-gathering and use <s>should</s> must go hand-in-hand with transparency."

to show strong commitment to the principle. Ivan Pozdeev (talk) 14:12, 31 May 2014 (UTC)[reply]

Important Consideration: I hope it's not too late, but fear it is.

It may have been discussed already; I realized the time and didn't have time to read the content of this page. If it has, then forgive and disregard this section.

It pertains to the fact that every user, whether or not a "functionary", exposes himself to the possibility of civil or criminal prosecution for defamatory remarks made about another user. I've been doing research on WMF's concerns about attrition and plateaued new user registrations. One thing led to another, resulting in a domino effect landing me as the subject in a discussion group titled with my own username. Though the guidelines when opening a new topic state not to discuss anything defamatory or libelous, the fact of the matter is that everyone who comments about me in the room is in there to say not-nice things... much of it defamatory and libelous. The admins are the worst.

Now, no one there seems to get it. I was protecting that room because I was protecting WMF, WP and other users from implicating themselves. Does anyone here know what I am talking about? It's everywhere on WP... plenty of notice about it.

You should begin with the actual section of the article which is, itself, a violation of criminal law insofar as it begs for critique: PRESIDENTISTVB. Before I could do what is necessary to clarify the issue, I was blocked. All I could then do was edit my own talk page, so I created a section in answer to it: 60 Hours a Slave. I sent an email to Oversight to explain it a little better. You can read the letter I wrote to the Oversight Committee and then view these other two docs: [ONE] [TWO] (PW is the username of the admin who blocked me.)

The bottom line, as the three external references on my talk page reveal, is that every user risks his personal, private information being revealed via a court order accompanying a lawsuit, and I firmly believe all users should be made aware of it in a more prominent way than we have been. I've linked some graphics in the content on my talk page. Make sure you read the three linked articles/items.

Again, if I'm visiting an area already fully discussed, then all I can say is, THANK YOU.

Best regards,

PresidentistVB (talk) 03:37, 3 June 2014 (UTC) PresidentistVB[reply]

Good luck, but I don't expect the WMF to make any changes here. It's become pretty clear to me that the WMF doesn't have any interest in protecting editors' rights or the rights of the readers. They only seem interested in further insulating the admins, thus expanding the us-and-them mentality of adminship on Wikipedia. Unfortunately there is a separate discussion about this on the English Wikipedia that has much more active discussion than here, and I cannot edit there because I was banned to shut me up for criticising abusive admins. Reguyla (talk) 11:21, 3 June 2014 (UTC)[reply]

Is the application of the privacy policy retroactive within WP, when wikis have not previously respected it internally with regard to other wikis?

A past problem (2012) involving a denunciation: a Wikipedian disclosed my real name, associating it with my pseudonym, on an article talk page. At present, this denunciation linking my surname to my pseudonym continues to be repeated on the pages returned when my real name is searched for on the Internet. The denouncing Wikipedian was never sanctioned and is still among the contributors of the Wikipedia community. --Bruinek (talk) 12:03, 3 June 2014 (UTC)[reply]

A follow-up in 2014 to the same problem of disguised harassment of my private person, supposedly in the name of the "principles of WP": on Wikipedia.fr, the same Wikipedian reoffended by addressing me by my real first name instead of using my username in the "history" of the article on the writer Jean Sulivan on 5 June 2014 at 21:25. See also the observation I made on this subject on the talk page of the Jean Sulivan article: Violation de la politique de confidentialité de Wikipedia par.... So what should one think when a Wikipedian himself starts by violating WP's privacy policy in order to publicly denounce another Wikipedian he criticizes, whatever the stated motive, by using that person's real name linked to their username? Does this Wikipedian, who disclosed private information about another Wikipedian, even have the right to remain part of the Wikipedia community? Bear in mind that the defamatory information in question continues to appear on the pages of the Google search engine (for example) in the "results" returned for my real name as an author, alongside my work as a researcher, my book, and my published articles! And bear in mind that I alerted an administrator to the problem in 2012 and again in June 2014. Does this Wikipedian not put Wikipedia, in my case, in flagrant legal contradiction with my author's rights on the Internet? I alone, as the author, have the right to the disclosure of my name, including under a pseudonym (as on Wikipedia), and to its withdrawal! See Droit d'auteur et internet 2.2 Droit moral et internet: "The moral right of the author corresponds to the right of attribution (or right to one's name), the right to the integrity of the work, the right of disclosure, and the right of reconsideration or withdrawal. These rights are inalienable, perpetual, unseizable, and imprescriptible."--Bruinek (talk) 11:57, 15 June 2014 (UTC)[reply]
For the 2012 incident, it is hard to go back on it now (the best thing then would have been to let it be forgotten in the archives, and you can always ask an admin to remove information from the public history of the pages concerned); it's a bit late, but for the June 2014 incident the policy was applicable. Talk to a Wikipédia admin about it. If you can show harassment, the person doing it should be sanctioned. One cannot publish private information about someone without their permission, even if the author of the edit has that information. Now, if it is limited to your first name, you are hardly identifiable. If he mentions your surname and it is not extremely common (like Martin, the most used surname in France...), it is difficult to locate you. verdy_p
The trouble is that, if he has identified you, he may well continue to publish everything he finds about you, and pour out all your other activities on the web (especially if you have an account on a well-known social network, or if you have published a photo of yourself there and also on other much more sensitive networks such as dating sites) that you would not want linked to Wikipédia. It is not acceptable on Wikipédia to use information gleaned from other sites (and a fortiori from private social networks; that is a violation of their own copyright, which gives access to their content only on their site for private use, or for commercial solicitation via certain filters and payment for limited access rights). If he obtained information by cross-referencing with a social network, he committed a copyright violation against the site concerned. verdy_p (talk) 12:08, 15 June 2014 (UTC)[reply]
One note all the same: look at [[2]] and you will see that there is a redirect that mentions an explicit name. It is public, and if you do not want this redirect, ask an admin to delete the redirect page. When you requested the renaming of your account, the redirect should not have been created, or the admin who performed it should have deleted it immediately and hidden the page from the public history and the public deletion log. This renaming took place on 30 November 2007 at 21:34 and was visible in 2012, so there was no manifest identity violation. You should have noticed sooner!‎ verdy_p (talk) 12:13, 15 June 2014 (UTC)[reply]

vandal

I've been here for more than 10 years, but I do not seem to be able to revert a vandalism here. See https://meta.wikimedia.org/w/index.php?title=Privacy_policy/de&action=history and the edits of the IP just now. I cannot revert them. It is a shitty system when you have to study how to do it. A revert of a vandalism should be simple. -jkb- 22:50, 22 August 2014 (UTC) - - - P.S. My feeling is that more and more users are excluded from editing here. -jkb- 22:52, 22 August 2014 (UTC)[reply]

I've reverted those edits. For pages translated using the Translate extension, you have to revert the edits to the translation units separately: see Special:Contributions/198.228.200.168 and revert the edits to the pages in the Translations: namespace. --Glaisher (talk) 08:40, 23 August 2014 (UTC)[reply]