Tor 0day: Stopping Tor Connections


When coming across a security vulnerability, I have a basic philosophy: Try your best to report it to the right people. Sometimes the reporting is painless. Usually it's a little challenging. Over a decade ago, I tried to report an issue to Verisign. It took weeks of constantly pestering a security staff member before he passed the vulnerability up the chain. His manager saw the issue, thanked me on the phone, and shipped me a box of swag to show that my effort was appreciated. (I was happy with them fixing the issue. To me, the phone call and swag were above and beyond.)

Unfortunately, sometimes companies are non-responsive. At that point, I have a few options. I can sell the vulnerability to someone else who will certainly exploit it. I can just let it sit -- maybe the bug will be fixed by coincidence or become obsolete, or maybe I'll find another use for it later. (I have a large collection of sitting vulnerabilities, some dating back decades.) However, sometimes I have reasons for needing a specific issue fixed soon. If the company doesn't respond to security reports, then maybe they will react to public shaming.

If you follow my blog, then you know that I've literally spent years trying to report security vulnerabilities to the Tor Project. Just finding who to report bugs to was like a masochistic scavenger hunt. After my public shaming of the Tor Project (in 2017), they changed their web site design to make it easier to report vulnerabilities. They also opened up their bug bounty program at HackerOne.

Unfortunately, while it is easier now to report vulnerabilities to the Tor Project, they are still unlikely to fix anything. I've had some reports closed out by the Tor Project as 'known issue' and 'won't fix'. For an organization that prides itself on their secure solution, it is unclear why they won't fix known serious issues.

The Penultimate Straw

Two events really set me off this year. The first issue was related to a massive DDoS attack over the Tor network last February. Lots of onion services went offline, and many relays crashed. My own onion service was hard hit but managed to stay up after I identified the root cause and patched my Tor daemon. I reported this vulnerability to the Tor Project (HackerOne bug #789065). The outcome was less than stellar:

  • First, the Tor Project asked for a proof of concept. I responded with source code and log files.
  • Then they asked for more details about how it worked. I provided an extremely detailed description. This resulted in a lot of bidirectional communication with descriptions, explanations, and examples. (At this point, I thought things were going well.)
  • After a lot of back-and-forth technical discussions, the Tor Project's representative wrote, "I'm a bit lost with all this info in this ticket. I feel like lots of the discussion here is fruitful but they are more brainstormy and researchy and less fitting to a bug bounty ticket." They concluded with: "Is there a particular bug you want to submit for bug bounty?" In my opinion, describing a vulnerability and mitigation options is not "brainstormy and researchy". To me, it sounds like they were either not competent enough to fix the bug, or they were not interested. In any case, they were just wasting time.

At that point, I decided that it wasn't worth trying to report any new bugs to the Tor Project.

The Final Straw

The second issue, the one that made me decide to go public with Tor 0days, happened last month. That's when, after three years of waiting, I gave up on the Tor Project.

Over three years ago, I tried to report a vulnerability in the Tor Browser to the Tor Project. The bug is simple enough: using JavaScript, you can identify the scrollbar width. Each operating system has a different default scrollbar size, so an attacker can identify the underlying operating system. This is a distinct attribute that can be used to help uniquely track Tor users. (Many users think that Tor makes them anonymous. But Tor users can be tracked online; they are not anonymous.)

I couldn't find a direct way to report the bug to the Tor Project. Eventually, I gave up on their reporting scavenger hunt and blogged about the vulnerability. I included details and a working example.

A lot of people in the Tor community wrote to me, effectively saying "so what?" However, the initial response from the Tor Project confirmed the significance. They entered the vulnerability into their system (defect #22137) and gave it a "high" priority. When they opened up their bug bounty program on HackerOne, they even paid me a bounty for this issue. The issue was reported on 22-July-2017 via HackerOne and assigned bug #252580; a bounty was paid, and the issue was closed as 'Resolved'. The HackerOne bug was publicly disclosed three months later (20-Oct-2017).


But that's where the positive progress stopped. Although it was marked as 'resolved', the issue was never fixed. Rather, the Tor Project pushed it upstream, to Mozilla. (The Tor Browser is based on Mozilla's Firefox web browser.) Firefox Bug 1397996 sat unassigned for two years. A year after that, the person assigned to the bug removed himself and wrote, "Not actively working on this, unassign myself." So that's three years that a high priority bug at the Tor Project has sat unaddressed, even though they claim to have resolved the issue.

It isn't like the Tor Project doesn't have options for fixing this issue without Mozilla's help. They just need to define a default scrollbar width rather than inherit the one from the operating system. With all of the other customizations that they add to make the Tor Browser, this is an easy one to fix -- but they have decided to not fix it.

Dropping 0Days

An " 0day " is any exploit that has no known patch or wide-spread solution. An 0day doesn't need to be unique or novel; it just needs to have no solution. I'm currently sitting on dozens of 0days for the Tor Browser and Tor network. Since the Tor Project does not respond to security vulnerabilities, I'm just going to start making them public. While I found each of these on my own, I know that I'm not the first person to find many of them.

The scrollbar profiling vulnerability is an example of an 0day in the Tor Browser. But there are also 0days for the Tor network. One 0day for the Tor network was reported by me to the Tor Project on 27-Dec-2017 (about 2.5 years ago). The Tor Project closed it out as a known issue, won't fix, and "informative".

Let's start with a basic premise: let's say you're like some of my clients -- you're a big corporation with an explicit "no Tor on the corporate network" rule. This is usually done to mitigate the risks from malware. For example, most corporations have a scanning proxy for internet traffic that tries to flag and stop malware before it gets downloaded to a computer in the company. Since Tor prevents the proxy from decoding network traffic and detecting malware, Tor isn't permitted. Similarly, Tor is often used for illegal activities (child porn, drugs, etc.); blocking Tor reduces the risk from employees using Tor for illegal purposes. Although denying Tor can also mitigate the risk from corporate espionage, that's usually a lesser risk than malware infections and legal concerns. (Keep in mind, these same block and filtering requirements apply to nation-states, like China and Syria, that want to control and censor all network traffic. But I'm going to focus on the corporate environment.)

It's one thing to have a written policy that says "Don't use Tor." However, it's much better to have a technical solution that enforces the policy. So how do you stop users from connecting to the Tor network? The easy way is to download the list of Tor relays. A network administrator can add a firewall rule blocking access to each Tor node.

0Day #1: Blocking Tor Connections the Smart Way

There are two problems with the "block them all" approach. First, there are thousands of Tor nodes. Checking every network connection against every possible Tor node takes time. This is fine if you have a slow network or low traffic volume, but it doesn't scale well for high-volume networks. Second, the list of nodes changes often. This creates a race condition, where there may be a new Tor node that is seen by Tor users but isn't in your block list yet.
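To make the scaling problem concrete, here is a minimal sketch of the naive approach -- my own illustration, not code from any product -- that loads relay addresses into a sorted array and checks each new connection against it. The sample addresses are placeholders; a real deployment loads thousands of entries and must constantly refresh them.

#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Compare IPv4 addresses (host byte order) for qsort/bsearch. */
static int cmp_u32(const void *a, const void *b)
{
  uint32_t x = *(const uint32_t *)a, y = *(const uint32_t *)b;
  return (x > y) - (x < y);
}

int main(void)
{
  /* Placeholder entries; the real list comes from the published Tor relay list. */
  const char *relay_strings[] = { "86.59.21.38", "128.31.0.34", "194.109.206.212" };
  size_t n = sizeof(relay_strings) / sizeof(relay_strings[0]);
  uint32_t relays[3];
  for (size_t i = 0; i < n; i++)
    relays[i] = ntohl(inet_addr(relay_strings[i]));
  qsort(relays, n, sizeof(uint32_t), cmp_u32);

  /* Every new connection's destination must be checked against the whole list. */
  uint32_t dst = ntohl(inet_addr("128.31.0.34"));
  if (bsearch(&dst, relays, n, sizeof(uint32_t), cmp_u32))
    printf("block: destination is a listed Tor node\n");
  else
    printf("allow\n");
  return 0;
}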

However, what if there was a distinct packet signature provided by every Tor node that can be used to detect a Tor network connection? Then you could set the filter to look for the signature and stop all Tor connections. As it turns out, this packet signature is not theoretical.

Tor uses TLS for negotiating network security. However, Tor is built on zero-trust; each TLS certificate is randomly generated when the daemon starts since it never needs client validation. Each connection from a Tor client to a Tor server looks like:

  1. Client begins the TCP three-way handshake by sending a TCP SYN packet to the Server.
  2. Server responds with a SYN-ACK.
  3. Client sends an ACK to complete the three-way TCP handshake.
  4. Client sends a TLS Client-Hello request. This is the first data packet from the client.
  5. Server responds with a TLS Server-Hello and includes the certificate that was randomly generated when the server first started.

The server's TLS certificate is relatively small -- the entire server certificate fits in one packet -- and always uses the following format:

  • Self-signed. Typically, TLS includes a chain of x509 certificates for authentication. With the Tor daemon, the chain only contains one certificate, meaning it is self-signed.
  • Specific ordering. For the certificate, there are a variety of fields that can be in any order, but the Tor daemon always uses the same fields in the same order: signature, issuer, validity, subject, and then public key info. There is no other information in the server's certificate. In contrast, a typical certificate usually has multiple extensions and additional data fields.
  • One issuer. This record only contains a common name (CN) that starts with "www." and ends with ".com". In between are 8-20 random letters and numbers. This is unusual since the issuer CN is usually the proper name of the issuing authority. With Tor, there are no country (C), state (ST), organization (O), or other issuer fields that are typically seen with both authenticated and self-signed certificates.
  • One subject. The subject common name (CN) starts with "www." and ends with ".net". In between are 8-20 random letters and numbers that are not the same as the issuer's. Like the issuer record, there are no other fields (C, ST, O, etc.) that are commonly found with real certificates.
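For illustration, the common-name pattern alone already gives a cheap check. This is my own sketch of the idea, not the Tor Project's or any scanner's actual code:

#include <string.h>

/* Return 1 if a certificate common name matches the Tor daemon's pattern:
 * "www." + 8-20 Base32 characters [a-z2-7] + tld (".com" for the issuer,
 * ".net" for the subject). A fuller check should also confirm that the two
 * random labels differ and that no other DN fields are present. */
static int cn_is_tor_like(const char *cn, const char *tld)
{
  size_t len = strlen(cn), tldlen = strlen(tld);
  if (len < 4 + 8 + tldlen || len > 4 + 20 + tldlen) return 0;
  if (strncmp(cn, "www.", 4) != 0) return 0;
  if (strcmp(cn + len - tldlen, tld) != 0) return 0;
  for (size_t i = 4; i < len - tldlen; i++) {
    char c = cn[i];
    if (!((c >= 'a' && c <= 'z') || (c >= '2' && c <= '7'))) return 0;
  }
  return 1;
}

For example, cn_is_tor_like("www.xds4wpy6r7uq.com", ".com") returns 1, while a typical real-world CN with an organization's name, uppercase letters, or a different structure returns 0.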


On a very technical note:

For my own packet scanner, I wrote a function that walks the x509 certificate's ASN.1 structure and generates a packed signature that shows data and scope. The Tor server's signature looks like:

{{[2],#,{1.2.840.113549.1.1.#,NULL},{{{2.5.4.3,"www.X.com"}}},{"#Z","#Z"},{{{2.5.4.3,"www.X.net"}}},{{1.2.840.113549.1.1.1,NULL},D}},{1.2.840.113549.1.1.#,NULL},D}

where:

  • "X" is 8-20 characters in the range [a-z2-7]. This character range is because Tor uses Base32 encoding .
  • "D" is variable data.
  • "#" is a number (can be multiple digits).
  • All other characters are literals that must match in the same order.

This example is from a Tor server:

{{[2],10893829876978619801,{1.2.840.113549.1.1.11,NULL},{{{2.5.4.3,"www.xds4wpy6r7uq.com"}}},{"171228000000Z","180517000000Z"},{{{2.5.4.3,"www.ph4l62eo3zyqq.net"}}},{{1.2.840.113549.1.1.1,NULL},Data[271]}},{1.2.840.113549.1.1.11,NULL},Data[129]}

ASN.1 uses dotted number sequences to define specific codes. For example, 2.5.4.3 identifies a common name. It appears twice in the signature: once for the issuer and once for the subject. The ASN.1 code 1.2.840.113549.1.1.11 identifies sha256 with RSA encryption. My signature uses "1.2.840.113549.1.1.#" since the specific encryption can vary based on the version of the server's SSL library. (Oh yeah! Profile the server's library! Another 0day!)
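To show how cheap the check is, here is a minimal matcher for that signature. It is my own sketch, assuming signatures render exactly as shown above (numbers for '#', Base32 labels for 'X', and variable data printed like Data[271] for 'D'):

#include <ctype.h>
#include <stdio.h>

/* Match a rendered certificate signature against the Tor template.
 * '#' matches one or more digits, 'X' matches an 8-20 character Base32
 * label [a-z2-7], 'D' matches variable data up to the next template
 * literal, and every other byte must match exactly. */
static int match_tor_signature(const char *tpl, const char *sig)
{
  while (*tpl) {
    if (*tpl == '#') {                     /* one or more digits */
      if (!isdigit((unsigned char)*sig)) return 0;
      while (isdigit((unsigned char)*sig)) sig++;
      tpl++;
    } else if (*tpl == 'X') {              /* 8-20 Base32 characters */
      int n = 0;
      while ((*sig >= 'a' && *sig <= 'z') || (*sig >= '2' && *sig <= '7')) { sig++; n++; }
      if (n < 8 || n > 20) return 0;
      tpl++;
    } else if (*tpl == 'D') {              /* variable data */
      tpl++;
      while (*sig && *sig != *tpl) sig++;  /* skip to next template literal */
    } else {                               /* literal byte */
      if (*sig != *tpl) return 0;
      sig++; tpl++;
    }
  }
  return *sig == '\0';
}

int main(void)
{
  const char *tpl =
    "{{[2],#,{1.2.840.113549.1.1.#,NULL},{{{2.5.4.3,\"www.X.com\"}}},"
    "{\"#Z\",\"#Z\"},{{{2.5.4.3,\"www.X.net\"}}},"
    "{{1.2.840.113549.1.1.1,NULL},D}},{1.2.840.113549.1.1.#,NULL},D}";
  const char *sig =
    "{{[2],10893829876978619801,{1.2.840.113549.1.1.11,NULL},"
    "{{{2.5.4.3,\"www.xds4wpy6r7uq.com\"}}},"
    "{\"171228000000Z\",\"180517000000Z\"},"
    "{{{2.5.4.3,\"www.ph4l62eo3zyqq.net\"}}},"
    "{{1.2.840.113549.1.1.1,NULL},Data[271]}},"
    "{1.2.840.113549.1.1.11,NULL},Data[129]}";
  printf("Tor node: %s\n", match_tor_signature(tpl, sig) ? "yes" : "no");
  return 0;
}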

When the packet sniffer sees a TLS server-side certificate, it generates a signature. If the signature matches the pattern for a Tor server, the scanner flags the connection as a Tor connection. (This is really fast.)

Validating the Vulnerability

Back in 2017, I used a scanner and Shodan to search for TLS certificates. In theory, it is possible for there to be some server with a server-side TLS certificate that matches this signature but that isn't a Tor node. In practice, every match was a Tor node. I even found servers running the Tor daemon and with open onion routing (OR) ports that were not in the list of known Tor nodes. (Some were non-public bridges. Others were private Tor nodes.)

Similarly, I scanned every known Tor node. Each matched this Tor-specific certificate profile. That makes the detection 100% accurate: no false positives and no false negatives. (Although now that I've made this public, someone could intentionally generate false-positive or false-negative certificates. The false positives are relatively easy to construct. The false negatives would require editing the Tor daemon's source code.)

While a scanner could be used to identify and document every Tor server, corporations don't need to do that. Corporations already use stateful packet inspection on their network perimeters to scan for potential malware. With a single rule, they can also check every new connection for this Tor signature. Without using large lists of network addresses, you can spot every connection to a Tor node and shut it down before the session layer (TLS) finishes initializing, and before any data is transferred out of the network.

Tor Project's Reply

I reported this simple way to detect Tor traffic to the Tor Project on 27-Dec-2017 (HackerOne bug #300826). The response that I got back was disappointing.

Hello and thanks for reporting this issue!

This is a known issue affecting public bridges (the ones distributed via bridgedb); see ticket #7349 for more details. This issue does not affect private bridges (the ones that are distributed in a P2P adhoc way). As indicated in the ticket, to fix this problem, we are aiming to make it possible to shutdown the ORPort of Tor relays. In our opinion, we should not to try to imitate normal SSL certs because that's a fight we can't win (they will always look differently or have distinguishers, as has been the case in the pluggable transports arms race).

Unfortunately, ticket #7349 is not straightforward to implement and has various engineering complexities; please see the ticket for more information

Due to the issue being known and planned to be fixed, I'm marking this issue as Informative.

Let's see:

  1. They say it is a known bug and not fixed.
  2. They referred me to another bug (#7349, "Very High" priority) that had already been open for five years. (It has now been open for eight years.)
  3. They only viewed it as a risk to bridges, not as a risk to all Tor traffic, even though it impacts all Tor users, including users who do not use bridges.
  4. They gave a vague opinion with an unjustifiable explanation. ("In our opinion, we should not to try to imitate normal SSL certs because that's a fight we can't win", "not straightforward to implement", and "has various engineering complexities.")
  5. They referred me to the technical discussion in the related (unfixed) bug, but I didn't see any reason that they couldn't add more variety in order to prevent packet profiling and filtering. As a test, I changed the random certificate's profile on one of my Tor daemons and it continued to work without a problem.

This particular issue really isn't too complicated to fix. It's in the Tor daemon source code, file src/lib/tls/tortls.c, function tor_tls_context_init_certificates(). The first few lines generate the random 8-20 character domain names, and the rest generates the certificates without any other settings.

  nickname = crypto_random_hostname(8, 20, "www.", ".net");
#ifdef DISABLE_V3_LINKPROTO_SERVERSIDE
  nn2 = crypto_random_hostname(8, 20, "www.", ".net");
#else
  nn2 = crypto_random_hostname(8, 20, "www.", ".com");
#endif

  /* Generate short-term RSA key for use with TLS. */
  if (!(rsa = crypto_pk_new()))
    goto error;
  if (crypto_pk_generate_key_with_bits(rsa, RSA_LINK_KEY_BITS)<0)
    goto error;

  /* Generate short-term RSA key for use in the in-protocol ("v3")
   * authentication handshake. */
  if (!(rsa_auth = crypto_pk_new()))
    goto error;
  if (crypto_pk_generate_key(rsa_auth)<0)
    goto error;

  /* Create a link certificate signed by identity key. */
  cert = tor_tls_create_certificate(rsa, identity, nickname, nn2,
                                    key_lifetime);
  /* Create self-signed certificate for identity key. */
  idcert = tor_tls_create_certificate(identity, identity, nn2, nn2,
                                      IDENTITY_CERT_LIFETIME);
  /* Create an authentication certificate signed by identity key. */
  authcert = tor_tls_create_certificate(rsa_auth, identity, nickname, nn2,
                                        key_lifetime);

There are lots of options for fixing this problem. Here are just a few:

  • Use the same random characters for the .com and .net names. Very few domains use completely different names in their TLS certificates. (The ones that do use alternate names usually have a long list of names, not just two.) Also, include "O", "ST", and other x509 attributes in the issuer and subject records. (A sketch of this option follows this list.)
  • Allow the torrc file to specify the common names and TLS attributes. E.g., if my Tor node resides at Digital Ocean, then I'd select information that looks like some other Digital Ocean customer. Better yet: let me supply the TLS certificate. I can supply a real one using Let's Encrypt and nobody will know that it's Tor.
  • Since the certificate isn't verified anyway, include 1-2 additional certificates in the chain so it does not look like it is self-signed.
  • Randomize the parameter ordering and add in some TLS extensions. Make them look less normalized.
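As a minimal sketch of the first option, a tiny (hypothetical) patch to the code quoted above could derive both names from one random label; tor_asprintf and tor_free are Tor's own utility helpers:

  /* Hypothetical change to tor_tls_context_init_certificates():
   * generate one random Base32 label and reuse it for both names,
   * so the issuer and subject differ only in the TLD. */
  char *base = crypto_random_hostname(8, 20, "", "");
  tor_asprintf(&nickname, "www.%s.net", base);
  tor_asprintf(&nn2, "www.%s.com", base);
  tor_free(base);

The rest of the certificate generation would stay the same; the point is simply to remove the "two unrelated random names" distinguisher.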

If every Tor certificate looks uniquely like a Tor node, then traffic can be filtered. If every Tor node looks different, then it is much more difficult to spot and block Tor traffic from the session or application layers.

More Soon

If you have ever worked with bug bounties, then you are certain to recognize the name Katie Moussouris. She created the first bug bounty programs at Microsoft and the Department of Defense. She was the Chief Policy Officer at HackerOne (the bug bounty service), and she spearheaded the NTIA Awareness and Adoption Group's effort to standardize vulnerability disclosure and reporting. (Full disclosure: I was part of the same NTIA working group for a year. I found Katie to be a positive and upbeat person. She is very sharp, fair-minded, and realistic.)

Earlier this month, Katie was interviewed on the Vergecast podcast. I had expected her to praise the benefits of vulnerability disclosure and bug bounty programs. However, she surprised me. She has become disenchanted by how corporations are using bug bounties. She noted that corporate bug bounties have mostly been failures: companies often prefer to outsource liability rather than solve problems, and they view bug bounties as a way to pay for the bug and keep it quiet rather than fix the issue.

Every problem that Katie brought up about the vulnerability disclosure process echoed my experience with the Tor Project. The Tor Project made it hard to report vulnerabilities. They failed to fix vulnerabilities. They marked issues as 'resolved' when they were never fixed. They outsourced simple issues, like passing the scrollbar bug upstream to Firefox, where it was never fixed. And they made excuses for not addressing serious security issues.

During the interview, she mentioned that researchers and people reporting vulnerabilities only have a few options: try to report it, sell it, or go public. I've tried reporting and repeatedly failed. I've sold working exploits, but I also know that they can be used against me and my systems if the core issues are not fixed. (And even the people who buy exploits from me would rather have these vulnerabilities fixed.) That leaves public disclosure.

In future blog posts, I will be disclosing more Tor 0day vulnerabilities. Most (but probably not all) are already known to the Tor Project. It won't be every blog entry (I also have non-Tor topics that I want to write about), but I've got a list of vulnerabilities that are ready to drop. (And for the Tor fanboys who think "use bridges" will get around this certificate profiling exploit: don't worry, I'll burn bridges next.)

