On 2020-07-30, the Tor Project tweeted a response to my previous two public disclosures. While I expected a response, I thought it would have been addressed directly to me, or possibly delivered through private channels since they know how to contact me. I did not expect them to omit me from the recipient list. I wrote this reply to clarify items that the Tor Project found confusing, make corrections to misstatements, and re-raise unanswered questions.
Below is the letter I sent back to the Tor Project, as well as copies of the text from their tweet. I hope this re-opens the way for friendlier and more receptive communications with the Tor Project.
Aug 2, 2020
Dear Tor Project,
I noticed that you posted a reply on Twitter to my latest blog series. However, I was disappointed that you never replied directly to me. In particular:
- On 30-July-2020, you posted your reply to Twitter. ( https://twitter.com/torproject/status/1288955073322602496 ) Your reply consisted of one tweet with 4 pictures of text. By using pictures of text, you prevent text-based searches from discovering your response.
- You did not include my Twitter handle, @hackerfactor, on your tweet. Had other people not included me on their Twitter replies to you, I could have missed your response.
- In your "picture of text" reply, you never named me or mentioned who you were referring to, but you did directly respond to some of my criticisms and citations.
Because of the way you formulated your reply, it did not trigger any mentions in my Twitter feed. It's as if you didn't want me to read or respond to your reply. Then again, for over two years you have received repeated requests from me for direct discussions. Each time, you either ignored the request or would not agree to a meeting.
I have included your pictures of text and transcribed them below. This will allow search engines and future researchers to easily find your public reply. In addition, I have positioned them in this letter so that my responses are taken in context.
Tor Project reply, tweet introduction (textual): People have asked us about a series of bugs that are being publicized and incorrectly labeled as 0-days. Whenever we are notified of high-risk security bugs, we will, as always, address these issues and release formal responses so you know what's happening.
Your Twitter response referenced issues that I raised in my blog series "Tor 0day":
- Tor 0day: Stopping Tor Connections. ( https://hackerfactor.com/blog/index.php?/archives/888-Tor-0day-Stopping-Tor-Connections.html ) This shows how to detect direct Tor connections.
- Tor 0day: Burning Bridges. ( https://hackerfactor.com/blog/index.php?/archives/889-Tor-0day-Burning-Bridges.html ) This shows how to detect Tor's indirect (bridge) connections.
Unfortunately, you only addressed three topics, while these blog entries point out substantially more issues. In addition, your reply includes many comments that imply confusion and misunderstanding on your part. In this letter, I am detailing these specific issues. I also noticed that you sent a variation of your Twitter response to reporter Catalin Cimpanu at ZDNet ( https://www.zdnet.com/article/multiple-tor-security-issues-disclosed-more-to-come/ ).
Item 1: Arguing the definition of 0day
Tor Project reply, picture #1:
We're happy to get bug reports in whatever way the reporter is willing to provide them. The two reports from last week are not new, and the reports from today are worth investigating but are presented with little evidence that they work at scale.
We also encourage people to look into the definition of "0-day" that is in a report. The computer security community typically thinks of a 0-day as a concrete security bug that has not been made public yet -- so by that definition, we haven't seen any 0-days in this blog post series so far, and it can be confusing to the public to call them 0-days.
So far, three issues have been raised:
In your reply, you tried to debate the definition of 0day. However, you ignored the fact that there are different definitions for 0day, and not one standardized definition. The folks at CMU have a list of different, common definitions for 0day that are used within the security and software development communities. ( https://insights.sei.cmu.edu/cert/2015/07/like-nailing-jelly-to-the-wall-difficulties-in-defining-zero-day-exploit.html ) My usage is consistent with definitions 3, 4, 6, and 10; a 0day remains a 0day until the vendor issues a patch, workaround, or some other kind of viable solution. I explicitly included my definition in the first blog entry.
Debating the definition of 0day is like debating whether Tor is spelled "Tor" or "TOR". It is nothing more than a sidebar that detracts from the central discussion. Personally, I would prefer to focus on the vulnerabilities and solutions rather than arguing about whether an exploit without a vendor-provided solution is considered a zero-day exploit.
Item 2: Ignoring the scrollbar vulnerability
Tor Project reply, picture #2:
(1) the claim that the scrollbar width of a user's Tor Browser can be used to distinguish which operating system they are using. We have been working on this issue in ticket #22137. This information leak is known and publicly documented, and there are other ways a Tor Browser user's operating system can be discovered. When Tor Browser does not communicate the operating system of its user, usability decreases. Commonly used websites cease to function (i.e., Google Docs). The security downside of operating system detection is mild (you can still blend with everybody else who uses that operating system), while the usability tradeoff is quite extreme.
Tor Browser has an end goal of eliminating these privacy leaks without breaking web pages, but it is a slow process (especially in a web browser like Firefox) and leaking the same information in multiple ways is not worse than leaking it once. So, while we appreciate (and need) bug reports like this, we are slowly chipping away at the various leaks without further breaking the web, and that takes time.
In your reply, you claimed that changing the scrollbar's width is an "extreme" usability issue and difficult to resolve. However, this contradicts your own Tor developers. As one Tor developer commented three years ago in the open bug discussion: "17px look good." ( https://trac.torproject.org/projects/tor/ticket/22137 ) Another developer wrote 18 months ago, "I have been using the solution outlined in https://bugzilla.mozilla.org/show_bug.cgi?id=1397996#c2 for over [a] year, flawlessly."
While you say that you are working on this problem, you referenced a bug that is 3 years old and has been unmodified in 10 months. ( https://trac.torproject.org/projects/tor/ticket/22137 ) Rather than fixing the problem, the Tor Project pushed the issue to Mozilla. Unfortunately, Mozilla does not have anyone assigned to it, and has not had anyone actively working on it since it was reported three years ago. ( https://bugzilla.mozilla.org/show_bug.cgi?id=1397996 ; the Mozilla community did not re-engage discussing this issue until after it was brought up on Twitter two months ago. https://twitter.com/__sanketh/status/1269081014430765057 ) I can understand why Mozilla would be slow to address this issue. Unlike the Tor Browser, Firefox does not make a promise to "Protect yourself against tracking, surveillance, and censorship." ( https://www.torproject.org/download/ .)
You also did not mention that the Tor Project closed this issue out as "resolved" on HackerOne. Exactly what do you think was resolved?
With regards to the Tor Browser scrollbar profiling issue, I am aware that there are plenty of ways to use JavaScript to identify the underlying platform (Windows, Mac, or Linux). There are even ways to make this determination without using JavaScript.
However, the scrollbar profiling issue is more substantial. The other methods can distinguish Windows from Mac from Linux. Scrollbar profiling can distinguish Windows 8 from Windows 10, and the Linux Gnome desktop from KDE, Unity, and other desktops. Aside from a handful of the most popular web sites, the Tor users visiting any particular site are often just a handful of repeat visitors. Segmenting their systems by desktop version makes them extremely distinct. Often they are effectively unique.
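To make the segmentation concrete, here is a minimal sketch of how a tracking server could bucket visitors by a client-reported scrollbar width (measured in the browser, for example, as the difference between an element's offset width and client width). The width values and platform labels are hypothetical examples for illustration, not a verified fingerprint table.

```python
# Minimal sketch: bucketing visitors by a client-reported scrollbar
# width. The widths and platform labels below are HYPOTHETICAL
# examples to illustrate the segmentation idea, not measured values.

SCROLLBAR_PROFILES = {
    17: ["Windows (classic desktop)"],
    15: ["macOS", "some Linux GTK themes"],
    16: ["Linux KDE (default theme)"],
}

def candidate_platforms(width_px):
    """Return the candidate desktop environments for an observed width."""
    return SCROLLBAR_PROFILES.get(width_px, ["unknown"])
```

Even this toy lookup shows the problem: the rarer a width value is among a site's visitors, the closer it comes to uniquely identifying a returning user.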
Item 3: Ignoring the TLS fingerprinting issue
Tor Project reply, picture #3:
(2) That it's possible to recognize vanilla Tor traffic based on how it uses TLS with firewall rules. Fingerprinting Tor traffic is a well-known and documented issue. It's been discussed for more than a decade. (See Tor Proposal 106). Fixing the way Tor traffic can be fingerprinted by its TLS use is a very small step in the censorship arms race. We decided that we should not try to imitate normal SSL certs because that's a fight we can't win. Our goal is to help people connect to Tor from censored networks. Research has shown that making your traffic look like some other form of traffic usually leads to failure. See Houmansadr et al. IEEE S&P'13 paper: The Parrot is Dead: Observing Unobservable Network Communications. The strategy Tor has decided to take is better and more widely applicable, and that strategy is developing better pluggable transports. Tor has an entire anti-censorship team tackling this problem and has funding earmarked for this specific purpose.
In your reply, you point out that TLS fingerprinting is not new. And I agree: in my first "0day" blog entry, under the heading "Dropping 0Days", I explicitly mentioned that I found these independently but that they may not be new. Just because it's old, that doesn't mean there is a vendor patch or widely distributed solution (even though it is really easy for a software developer, like the Tor Project, to patch).
In the reply, you mentioned that the Tor Project has been debating this for over a decade. During that time, you have rolled out Tor v3 onion services, hundreds of updates (including some really minor issues), and other changes. And yet, you can't address this "very high" security risk? (On HackerOne, the Tor Project referred my TLS report to Tor bug #7349. This eight-year-old bug makes obfs4 bridges vulnerable to TLS scanners. https://trac.torproject.org/projects/tor/ticket/7349 )
Your reply references Tor Proposal 106. (The Twitter reply was different from your response to the reporter. In your response to ZDNet, you included a link.) This is a 13-year-old "closed" issue. ( https://gitlab.torproject.org/tpo/core/torspec/-/blob/master/proposals/106-less-tls-constraint.txt ) As far as I can tell, the Tor Project has not been publicly debating this for a decade. Instead, it appears that you discussed it 13 years ago, decided to do nothing, and have not looked back. During that time, TLS v1.2 was released and there have been significant advancements in terms of networking, attack methods, and defense options. Technology moved on, but the Tor Project has not.
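For illustration only: one long-documented tell (part of the discussion around Proposal 106) is Tor's self-signed handshake certificate with a randomized "www.&lt;random&gt;.com"-style subject name. A censor-side check might look like the following sketch; the regex character class and length bounds are my assumptions, not an exact deployed signature.

```python
import re

# Sketch of a certificate-subject check for Tor-style randomized
# hostnames ("www.<random>.com" / ".net"). The character class and
# length bounds are illustrative assumptions, not a precise signature.
TOR_LIKE_SUBJECT = re.compile(r"^www\.[a-z2-7]{8,20}\.(?:com|net)$")

def looks_like_tor_cert(subject_cn):
    """True if the certificate CN matches the randomized-hostname pattern."""
    return TOR_LIKE_SUBJECT.match(subject_cn) is not None
```

A real deployment would combine such a check with other handshake features, but the point stands: a static, well-known pattern is trivially cheap for a firewall to match.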
Item 4: Limiting pluggable transports
Your response (picture #3) points out that the Tor Project wants to develop better pluggable transports (PTs). You wrote, "The strategy Tor has decided to take is better and more widely applicable, and that strategy is developing better pluggable transports." While that is a good strategy, it does not explain why, over the last few years, you have normalized to a homogeneous solution using a single pluggable transport: obfs4. (In my second "0day" blog entry, I described this as putting all of your eggs in one basket.)
If you have an "entire anti-censorship team tackling this problem", then what exactly is that team doing? I ask because the only new pluggable transport is Snowflake, and it has been in testing since 2016. ( https://lists.torproject.org/pipermail/tor-dev/2016-January/010310.html and the changelog for the alpha-version of the Tor Browser: https://gitweb.torproject.org/builders/tor-browser-build.git/plain/projects/tor-browser/Bundle-Data/Docs/ChangeLog.txt?h=maint-10.0a4 )
Item 5: Focusing on scalability rather than the threat
Tor Project reply, picture #4, paragraph #1:
(3) That obfs4 can be detected and blocked. The blog post is correct in suggesting that a finely-calibrated decision tree can be highly effective in detecting obfs4; this is a weakness of obfs4. However, what works in someone's living room doesn't necessarily work at nation-scale: running a decision tree on many TCP flows is expensive (but not impossible) and it takes work to calibrate it. When considering the efficacy of this, one also has to take into account the base rate fallacy: the proportion between circumvention traffic and non-circumvention traffic is not 1:1, meaning that a false positive/negative rate of 1% (which seems low!) can still result in false positives significantly outweighing true positives. That said, obfs4 is certainly vulnerable to this class of attack.
Your tweet introduction, first picture reply, and 4th picture reply question whether my proof-of-concept detection system will scale for exploitation by a nation-state. You also made a disparaging remark about testing in a living room.
My working code is currently deployed on a few high-volume networks and takes almost no resources. Considering the impact of the exploit, I'm surprised that you focused on theoretical scaling factors and not on the severity of the exploit itself. The exploit demonstrates how to detect all of Tor's obfs4 network connections.
In your reply's 4th picture, you also mentioned that 1% false positives for detecting obfs4 connections is a large amount. However, my system errs toward false negatives (missing some obfs4 streams), not significant false positives (incorrectly claiming that something is obfs4).
In any case, you seemed to miss the point that it's a functional proof-of-concept. As I mentioned, I have implemented it and it works well on corporate gigabit networks. With more fine tuning, I'm certain that the false detections can be further reduced. Again, the Tor Project seems to confuse the quality of a working and deployed proof-of-concept with the severity of the demonstrated threat.
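The base-rate arithmetic invoked here is simple to make concrete. With illustrative numbers (one obfs4 flow per 10,000, which is an assumption for the example), a 1% false-positive rate really does swamp the true positives, which is exactly why a detector's false-positive rate matters far more than its detection rate:

```python
# Worked base-rate example with ILLUSTRATIVE numbers: even a 1%
# false-positive rate overwhelms true detections when obfs4 traffic
# is a tiny fraction of all flows.
total_flows = 1_000_000
obfs4_flows = 100            # assumed base rate: 1 in 10,000 flows
fp_rate = 0.01               # 1% of benign flows wrongly flagged
tp_rate = 0.99               # 99% of obfs4 flows correctly flagged

false_positives = (total_flows - obfs4_flows) * fp_rate  # 9,999 benign flows flagged
true_positives = obfs4_flows * tp_rate                   # 99 obfs4 flows flagged
precision = true_positives / (true_positives + false_positives)
print(round(precision, 3))   # roughly 0.01: only ~1 in 100 flags is real obfs4
```

This is also why the error profile claimed above matters: a detector that trades false positives for false negatives keeps its precision high even against a skewed base rate.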
Item 6: Confusing prior-art with current detection methodology
Tor Project reply, picture #4, paragraphs #2 through #5:
The post says "However, I know of no public disclosure for detecting and blocking obfs4." There's work in the academic literature. See Wang et al.'s CCS'15 paper: Seeing through Network-Protocol Obfuscation. See also Frolov et al.'s NDSS'20 paper: Detecting Probe-resistant Proxies.
The blog post cites Dunna's FOCI'18 paper to support his claim that the GFW can detect obfs4. This citation must be the result of a misunderstanding. On page 2, the paper says: "We find that the two most popular pluggable transports (Meek [7] and Obfs4 [18]) are still effective in evading GFW's blocking of Tor (Section 5.1)."
The blog post also cites another post on Medium titled 'Using Tor in China' by Phoebe Cross to support the same claim. This blog post correctly points out that obfs4 bridges that are distributed over BridgeDB are blocked whereas private obfs4 bridges work. This means that censors are not blocking the obfs4 protocol, but are able to intercept bridge information from our distributors. One has to distinguish the protocol from the way one distributes endpoints.
The findings published today (7/30) are variants of existing attacks (which is great!) but not 0-days. They are worth investigating but are presented with little evidence that they work at scale.
In your reply's third point (4th picture), you referenced my citations of prior-art that discuss detecting obfs4 traffic. You also include another citation that shows how to actively scan for obfs4 servers.
Unfortunately, I think you missed the point: those papers discuss aggressive port scanning, deep packet inspection, and statistical analysis of encrypted data. Each is slow and computationally expensive. My solution has no port scanning and no statistical analysis. My solution focuses on TCP headers and is completely passive. (For complexity, I only needed 2 counters for the finite state machine. It tracks a total of 4 active states, and one of the states isn't essential.)
In your reply, you stated that this is a variant of an existing detection method because other people have other methods to detect obfs4. However, I disclosed a very different exploitation approach that is faster, requires fewer resources, and is more efficient. This is an entirely different class of vulnerability.
Your reply also noted that the cited papers cannot detect private obfs4 bridges. Again, you seemed to miss an important element that was detailed in my blog: My solution detects obfs4, regardless of whether it is public or private. My solution can detect private obfs4 bridges. You confused the findings in cited historical references with the current state-of-the-art.
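Since the decision tree itself was not published, the following is only a structural sketch of a passive, header-only classifier of the kind described: a per-flow finite state machine with four states and two counters. Every transition condition, payload-length set, and threshold below is a placeholder of my own invention, not the disclosed rules.

```python
from enum import Enum, auto

class State(Enum):
    START = auto()      # flow just observed
    EARLY = auto()      # watching the first few segments
    BENIGN = auto()     # terminal: does not look like obfs4
    FLAGGED = auto()    # terminal: flagged for blocking or logging

class FlowFSM:
    """Structural sketch only: the real transition rules were not
    published, so the length set and thresholds are placeholders."""

    COMMON_LENS = {0, 517, 1460}   # placeholder "ordinary-looking" payload sizes

    def __init__(self):
        self.state = State.START
        self.pkts = 0    # counter 1: segments observed
        self.odd = 0     # counter 2: segments with unusual payload length

    def observe(self, payload_len):
        """Advance the machine on one TCP segment, using only a
        header-derived payload length. Entirely passive."""
        if self.state in (State.BENIGN, State.FLAGGED):
            return self.state
        self.pkts += 1
        if self.state is State.START:
            self.state = State.EARLY
        elif payload_len not in self.COMMON_LENS:
            self.odd += 1
        if self.pkts >= 4:
            self.state = State.FLAGGED if self.odd >= 2 else State.BENIGN
        return self.state
```

The design point this illustrates is cost: per flow, the classifier holds one enum and two small integers and touches only header fields, which is why such an approach is cheap enough to run inline on high-volume links.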
Item 7: Failing to address the detection of meek
Your reply omitted some of my key disclosures. For example, you totally ignored the method for detecting meek-azure. This is an even easier exploit than detecting obfs4 traffic since it just requires tracking the TCP payload size and the time between request and reply.
The meek pluggable transport (including meek-lite) is used by the Tor Browser's launcher to acquire new bridges, Snowflake to acquire WebRTC servers, and by users who select the meek-azure bridge. Distinguishing meek's domain fronting usage from non-domain fronting requests negatively impacts all of these services.
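A sketch of the size-and-timing check described above: flag flows whose requests are small (meek polls carry little data) and whose replies arrive after the extra latency of a fronted round trip. The numeric thresholds are invented placeholders to show the shape of the test, not the disclosed values.

```python
# Sketch of the payload-size plus request-to-reply-timing check
# described above. The numeric thresholds are INVENTED placeholders,
# not the disclosed values.

def looks_like_meek_poll(request_len, reply_delay_s):
    """Flag small polling requests whose replies arrive after the
    extra latency of a fronted (CDN -> bridge) round trip."""
    small_poll = request_len < 600               # polls carry little payload
    fronted_delay = 0.2 <= reply_delay_s <= 5.0  # extra hop adds latency
    return small_poll and fronted_delay
```

As with the obfs4 sketch, the observation needed here (one length and one timestamp delta per request/reply pair) is cheap enough that scale is not a meaningful obstacle.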
Item 8: Failing to address censorship
The Tor Project didn't respond to the concern that detecting (1) direct connections, (2) obfs4 bridges, and (3) the meek-azure bridge means that all methods for users to connect to the Tor network are detectable. If the connection is detected, then the adversary can choose to block the connection or closely monitor the end user. In countries with oppressive regimes and extreme censorship, detectability can be the difference between freedom, prison, and potentially death. Currently, the Tor Project provides no dependable means for users in high-risk areas to safely express free speech.
Conclusion
I found the Tor Project's response to be disappointing, but not surprising. Why are you debating the definition of "0day", criticizing the scalability of a proof-of-concept, and continuing to self-promote flawed solutions instead of discussing security risks, fixing security issues, and addressing critical vulnerabilities? Letting known, high-severity vulnerabilities sit for years, claiming that open issues are resolved, and ignoring serious problems is not a way to develop a tool for privacy, anti-censorship, and free speech.
As I mentioned in my blog, I have spent years trying to report serious security vulnerabilities to the Tor Project. The responses have ranged from complete silence to dismissive or patronizing replies. A few years ago, I was invited to disclose security concerns on your public mailing list. That type of public disclosure is no different than making them public on my blog.
I still believe that Tor fills a critical need, but it requires serious improvements that cannot wait years. I am still willing to discuss these issues with the Tor Project, if you are willing to listen.
Neal Krawetz, Ph.D.
Hacker Factor Solutions
https://www.hackerfactor.com/
I had originally planned to release three blog entries on the "Tor 0day" theme before briefly switching to another topic. (I don't want to be "all Tor all the time"; a little variety is nice.) This third blog entry on this theme was unexpected, so I apologize to anyone who was hoping for a specific release order.