Skipfish is an active web application security reconnaissance tool. It prepares an interactive sitemap for the targeted site by carrying out a recursive crawl and dictionary-based probes. The resulting map is then annotated with the output of a number of active (but hopefully non-disruptive) security checks. The final report generated by the tool is meant to serve as a foundation for professional web application security assessments.
https://github.com/spinkham/skipfish http://code.google.com/p/skipfish/
Install
Install the required libraries:
sudo apt-get install libssl0.9.8
sudo apt-get install libssl-dev
sudo apt-get install openssl
sudo apt-get install libidn11-dev
Install skipfish:
wget http://skipfish.googlecode.com/files/skipfish-1.69b.tgz
tar zxvf skipfish-1.69b.tgz
mv skipfish-1.69b skipfish
cd skipfish
make                                      # builds the skipfish executable in this directory
cp dictionaries/default.wl skipfish.wl    # copy one of the dictionaries to scan with
./skipfish -o data http://mall.midea.com/detail/index
# "data" is the output directory; when the scan finishes, open data/index.html to view the results
Some Params
skipfish web application scanner - version 2.10b
Usage: /home/admin/workspace/skipfish/skipfish [ options ... ] -W wordlist -o output_dir start_url [ start_url2 ... ]

Authentication and access options:
  -A user:pass      - use specified HTTP authentication credentials
  -F host=IP        - pretend that 'host' resolves to 'IP'
  -C name=val       - append a custom cookie to all requests
  -H name=val       - append a custom HTTP header to all requests
  -b (i|f|p)        - use headers consistent with MSIE / Firefox / iPhone
  -N                - do not accept any new cookies
  --auth-form url   - form authentication URL
  --auth-user user  - form authentication user
  --auth-pass pass  - form authentication password
  --auth-verify-url - URL for in-session detection

Crawl scope options:
  -d max_depth      - maximum crawl tree depth (16)
  -c max_child      - maximum children to index per node (512)
  -x max_desc       - maximum descendants to index per branch (8192)
  -r r_limit        - max total number of requests to send (100000000)
  -p crawl%         - node and link crawl probability (100%)
  -q hex            - repeat probabilistic scan with given seed
  -I string         - only follow URLs matching 'string'
  -X string         - exclude URLs matching 'string'
  -K string         - do not fuzz parameters named 'string'
  -D domain         - crawl cross-site links to another domain
  -B domain         - trust, but do not crawl, another domain
  -Z                - do not descend into 5xx locations
  -O                - do not submit any forms
  -P                - do not parse HTML, etc, to find new links

Reporting options:
  -o dir            - write output to specified directory (required)
  -M                - log warnings about mixed content / non-SSL passwords
  -E                - log all HTTP/1.0 / HTTP/1.1 caching intent mismatches
  -U                - log all external URLs and e-mails seen
  -Q                - completely suppress duplicate nodes in reports
  -u                - be quiet, disable realtime progress stats
  -v                - enable runtime logging (to stderr)

Dictionary management options:
  -W wordlist       - use a specified read-write wordlist (required)
  -S wordlist       - load a supplemental read-only wordlist
  -L                - do not auto-learn new keywords for the site
  -Y                - do not fuzz extensions in directory brute-force
  -R age            - purge words hit more than 'age' scans ago
  -T name=val       - add new form auto-fill rule
  -G max_guess      - maximum number of keyword guesses to keep (256)
  -z sigfile        - load signatures from this file

Performance settings:
  -g max_conn       - max simultaneous TCP connections, global (40)
  -m host_conn      - max simultaneous connections, per target IP (10)
  -f max_fail       - max number of consecutive HTTP errors (100)
  -t req_tmout      - total request response timeout (20 s)
  -w rw_tmout       - individual network I/O timeout (10 s)
  -i idle_tmout     - timeout on idle HTTP connections (10 s)
  -s s_limit        - response size limit (400000 B)
  -e                - do not keep binary responses for reporting

Other settings:
  -l max_req        - max requests per second (0.000000)
  -k duration       - stop scanning after the given duration h:m:s
  --config file     - load the specified configuration file
How to run the scanner?
To compile it, simply unpack the archive and try make. Chances are, you will need to install libidn (http://ftp.gnu.org/gnu/libidn/libidn-1.18.tar.gz) or libpcre3 (http://www.pcre.org/) first.
Next, you need to read the instructions provided in doc/dictionaries.txt to select the right dictionary file and configure it correctly. This step has a profound impact on the quality of scan results later on, so don’t skip it.
Once you have the dictionary selected, you can use -S to load that dictionary,
and -W to specify an initially empty file for any newly learned site-specific
keywords (which will come handy in future assessments):
$ touch new_dict.wl
$ ./skipfish -o output_dir -S existing_dictionary.wl -W new_dict.wl \
http://www.example.com/some/starting/path.txt
You can use -W- if you don’t want to store auto-learned keywords anywhere.
Note that you can provide more than one starting URL if so desired; all of them will be crawled. You can also read a list of URLs from a file using this syntax:
$ ./skipfish …other options… -o output_dir @/path/to/url_list.txt
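The @file syntax expects one start URL per line. A quick way to build such a list before launching the scan (the file name and URLs below are only illustrative):

```shell
# Build a URL list for skipfish's @file syntax: one start URL per line.
# File name and URLs here are hypothetical examples.
cat > url_list.txt <<'EOF'
http://www.example.com/app1/
http://www.example.com/app2/
EOF

# The scan would then be launched as (not executed here):
#   ./skipfish ...other options... -o output_dir @url_list.txt
grep -c '^http' url_list.txt
```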
The tool will display some helpful stats while the scan is in progress. You
can also switch to a list of in-flight HTTP requests by pressing return.
In the example above, skipfish will scan the entire www.example.com (including services on other ports, if linked to from the main page), and write a report to output_dir/index.html. You can then view this report with your favorite browser (JavaScript must be enabled; and because of recent file:/// security improvements in certain browsers, you might need to access results over HTTP). The index.html file is static; actual results are stored as a hierarchy of JSON files, suitable for machine processing or different presentation frontends if needs be. A text-based list of all the visited URLs, plus some useful metadata, is stored to a file named pivots.txt, too.
A simple companion script, sfscandiff, can be used to compute a delta for two scans executed against the same target with the same flags. The newer report will be non-destructively annotated by adding red background to all new or changed nodes; and blue background to all new or changed issues found.
Some sites may require authentication; for simple HTTP credentials, you can try:
$ ./skipfish -A user:pass …other parameters…
Alternatively, if the site relies on HTTP cookies instead, log in in your browser or using
a simple curl script, and then provide skipfish with a session cookie:
$ ./skipfish -C name=val …other parameters…
Other session cookies may be passed the same way, one per each -C option.
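One way to capture such a cookie is to log in with curl using a cookie jar (`curl -c cookies.txt ...`), then pull the value out of the Netscape-format jar file. The sketch below fabricates the jar contents (the cookie name, value, and login form fields are made up) so the extraction step can be shown on its own:

```shell
# After something like:
#   curl -c cookies.txt -d 'user=me&pass=secret' http://www.example.com/login
# the cookie jar holds tab-separated lines: domain, flag, path, secure,
# expiry, name, value. We fabricate one such line here for illustration.
printf 'www.example.com\tFALSE\t/\tFALSE\t0\tAuthCookie\tabc123\n' > cookies.txt

# Extract the value of the (hypothetical) session cookie by name.
VAL=$(awk -F'\t' '$6 == "AuthCookie" { print $7 }' cookies.txt)
echo "AuthCookie=$VAL"

# The scan would then use:
#   ./skipfish -C "AuthCookie=$VAL" -N ...other parameters...
```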
Certain URLs on the site may log out your session; you can combat this in two ways: by using the -N option, which causes the scanner to reject attempts to set or delete cookies; or with the -X parameter, which prevents matching URLs from being fetched:
$ ./skipfish -X /logout/logout.aspx …other parameters…
The -X option is also useful for speeding up your scans by excluding /icons/, /doc/,
/manuals/, and other standard, mundane locations along these lines. In general, you can
use -X and -I (only spider URLs matching a substring) to limit the scope of a scan any way you like - including restricting it only to a specific protocol and port:
$ ./skipfish -I http://example.com:1234/ …other parameters…
A related function, -K, allows you to specify parameter names not to fuzz
(useful for applications that put session IDs in the URL, to minimize noise).
Another useful scoping option is -D - allowing you to specify additional hosts or domains to consider in-scope for the test. By default, all hosts appearing in the command-line URLs are added to the list - but you can use -D to broaden these rules, for example:
$ ./skipfish -D test2.example.com -o output-dir http://test1.example.com/
…or, for a domain wildcard match, use:
$ ./skipfish -D .example.com -o output-dir http://test1.example.com/
In some cases, you do not want to actually crawl a third-party domain, but you trust the owner of that domain enough not to worry about cross-domain content inclusion from that location. To suppress warnings, you can use the -B option, for example:
$ ./skipfish -B .google-analytics.com -B .googleapis.com …other parameters…
By default, skipfish sends minimalistic HTTP headers to reduce the amount of data exchanged over the wire; some sites examine User-Agent strings or header ordering to reject unsupported clients, however. In such a case, you can use -b ie or -b ffox to mimic one of the two popular browsers; and -b phone to mimic iPhone.
When it comes to customizing your HTTP requests, you can also use the -H option to insert any additional, non-standard headers (including an arbitrary User-Agent value); or -F to define a custom mapping between a host and an IP (bypassing the resolver). The latter feature is particularly useful for not-yet-launched or legacy services.
Some sites may be too big to scan in a reasonable timeframe. If the site features well-defined tarpits - for example, 100,000 nearly identical user profiles as a part of a social network - these specific locations can be excluded with -X or -S. In other cases, you may need to resort to other settings: -d limits crawl depth to a specified number of subdirectories; -c limits the number of children per directory, -x limits the total number of descendants per crawl tree branch; and -r limits the total number of requests to send in a scan.
An interesting option is available for repeated assessments: -p. By specifying a percentage between 1 and 100%, it is possible to tell the crawler to follow fewer than 100% of all links, and try fewer than 100% of all dictionary entries. This - naturally - limits the completeness of a scan, but unlike most other settings, it does so in a balanced, non-deterministic manner. It is extremely useful when you are setting up time-bound, but periodic assessments of your infrastructure. Another related option is -q, which sets the initial random seed for the crawler to a specified value. This can be used to exactly reproduce a previous scan to compare results. Randomness is relied upon most heavily in the -p mode, but also for making a couple of other scan management decisions elsewhere.
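The idea behind combining -p with -q can be sketched outside skipfish with GNU shuf: sampling a subset of links is reproducible as long as the randomness source is fixed (the file names and the 30%-of-100 sample here are arbitrary):

```shell
# Simulate 100 crawlable links and a fixed random seed file.
seq 1 100 > links.txt
yes 42 | head -c 16384 > seed.bin

# Two independent 30-link samples drawn with the same seed are identical,
# just as two skipfish runs with the same -p and -q revisit the same subset.
A=$(shuf --random-source=seed.bin -n 30 links.txt)
B=$(shuf --random-source=seed.bin -n 30 links.txt)
[ "$A" = "$B" ] && echo "same 30-link sample both runs"
```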
Some particularly complex (or broken) services may involve a very high number of identical or nearly identical pages. Although these occurrences are by default grayed out in the report, they still use up some screen estate and take a while to process on JavaScript level. In such extreme cases, you may use the -Q option to suppress reporting of duplicate nodes altogether, before the report is written. This may give you a less comprehensive understanding of how the site is organized, but has no impact on test coverage.
In certain quick assessments, you might also have no interest in paying any particular attention to the desired functionality of the site - hoping to explore non-linked secrets only. In such a case, you may specify -P to inhibit all HTML parsing. This limits the coverage and takes away the ability for the scanner to learn new keywords by looking at the HTML, but speeds up the test dramatically. Another similarly crippling option that reduces the risk of persistent effects of a scan is -O, which inhibits all form parsing
and submission steps.
Some sites that handle sensitive user data care about SSL - and about getting it right. Skipfish may optionally assist you in figuring out problematic mixed content or password submission scenarios - use the -M option to enable this. The scanner will complain about situations such as http:// scripts being loaded on https:// pages - but will disregard non-risk scenarios such as images.
Likewise, certain pedantic sites may care about cases where caching is restricted on the HTTP/1.1 level, but no explicit HTTP/1.0 caching directive is given - specifying -E in the command-line causes skipfish to log all such cases carefully.
On some occasions, you may want to limit the number of requests per second to reduce the load on the target's server (or possibly to stay under DoS protection thresholds). The -l flag sets this limit; the value given is the maximum number of requests per second you want skipfish to perform.
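When picking an -l value, a back-of-the-envelope estimate of total scan time helps (the request budget and rate below are arbitrary illustration numbers):

```shell
# Rough scan-time estimate: an assumed budget of 100,000 requests
# throttled to 50 requests/second with -l 50.
REQS=100000
RPS=50
SECS=$(( REQS / RPS ))
echo "$SECS seconds (~$(( SECS / 60 )) minutes)"
```

At 50 requests per second, a 100,000-request scan needs on the order of 2000 seconds, i.e. roughly half an hour; scale accordingly before committing to a tight -l value.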
Scans typically should not take weeks. In many cases, you probably want to limit the scan duration so that it fits within a certain time window. This can be done with the -k flag, which accepts a duration in hours, minutes, and seconds in H:M:S format. Use of this flag can affect scan coverage if the timeout occurs before all pages have been tested.
Lastly, in some assessments that involve self-contained sites without extensive user content, the auditor may care about any external e-mails or HTTP links seen, even if they have no immediate security impact. Use the -U option to have these logged.
Dictionary management is a special topic, and - as mentioned - is covered in more detail in dictionaries/README-FIRST. Please read that file before proceeding. Some of the relevant options include -S and -W (covered earlier), -L to suppress auto-learning, -G to limit the keyword guess jar size, -R to drop old dictionary entries, and -Y to inhibit expensive extension fuzzing.
Skipfish also features a form auto-completion mechanism in order to maximize scan coverage. The values should be non-malicious, as they are not meant to implement security checks - but rather, to get past input validation logic. You can define additional rules, or override existing ones, with the -T option (-T form_field_name=field_value, e.g. -T login=test123 -T password=test321 - although note that -C and -A are a much better method of logging in).
There is also a handful of performance-related options. Use -g to set the maximum number of connections to maintain, globally, to all targets (it is sensible to keep this under 50 or so to avoid overwhelming the TCP/IP stack on your system or on the nearby NAT / firewall devices); and -m to set the per-IP limit (experiment a bit: 2-4 is usually good for localhost, 4-8 for local networks, 10-20 for external targets, 30+ for really lagged or non-keep-alive hosts). You can also use -w to set the I/O timeout (i.e., skipfish will wait only so long for an individual read or write), and -t to set the total request timeout, to account for really slow or really fast sites.
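The global and per-IP ceilings interact: with the defaults quoted in the help output above (-g 40, -m 10), only a few hosts can run at their full per-IP limit at once. A quick sanity check of whatever values you choose:

```shell
# With the default global connection cap (-g 40) and per-IP cap (-m 10),
# at most G/M targets can each saturate their per-host connection limit.
G=40
M=10
echo "at most $(( G / M )) hosts at the full per-IP limit"
```

If you scan many targets from one URL list, raise -g (within reason) or lower -m so the global budget is not exhausted by the first few hosts.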
Lastly, -f controls the maximum number of consecutive HTTP errors you are willing to see before aborting the scan; and -s sets the maximum length of a response to fetch and parse (longer responses will be truncated).
When scanning large, multimedia-heavy sites, you may also want to specify -e. This prevents binary documents from being kept in memory for reporting purposes, freeing up a lot of RAM.
Further rate-limiting is available through third-party user mode tools such as http://monkey.org/~marius/trickle/ ‘>trickle, or kernel-level traffic shaping.
Oh, and real-time scan statistics can be suppressed with -u.
But seriously, how to run it?
A standard, authenticated scan of a well-designed and self-contained site (warns about all external links, e-mails, mixed content, and caching header issues), including gentle brute-force:
$ touch new_dict.wl
$ ./skipfish -MEU -S dictionaries/minimal.wl -W new_dict.wl \
  -C "AuthCookie=value" -X /logout.aspx -o output_dir http://www.example.com/
Five-connection crawl, but no brute-force; pretending to be MSIE and caring less about ambiguous MIME or character set mismatches, and trusting example.com links:
$ ./skipfish -m 5 -L -W- -o output_dir -b ie -B example.com http://www.example.com/
Heavy brute force only (no HTML link extraction), limited to a single directory and timing out after 5 seconds:
$ touch new_dict.wl
$ ./skipfish -S dictionaries/complete.wl -W new_dict.wl -P -t 5 \
  -o output_dir -I http://www.example.com/dir1/ http://www.example.com/dir1/
For a short list of all command-line options, try ./skipfish -h. A quick primer on some of the particularly useful options is also given here: http://lcamtuf.blogspot.com/2010/11/understanding-and-using-skipfish.html