projects . screamingCobra v1.00
Introduction
Any CGI that doesn't validate the arguments passed to it over the web is potentially vulnerable to attacks that give a malicious user read access to almost any file on the system, if not the ability to execute programs. screamingCobra is almost always able to find those bugs REMOTELY, thanks to the common mistakes programmers make.
screamingCobra is an application for remote vulnerability discovery in ANY UNKNOWN web application, such as CGIs and PHP pages. Simply put, it attempts to find vulnerabilities in all web applications on a host without knowing anything about them beforehand. Modern CGI scanners scan a host for CGIs with known vulnerabilities; screamingCobra is able to 'find' the actual vulnerabilities in ANY CGI, whether they have been discovered before or not.
Background
Administrators of some very well-known sites have told me that they've been able to use screamingCobra (originally called crawl5b, before this release) to find at least one bug that allows anyone to get read access to almost any file on the system, if not the ability to execute applications. When you launch screamingCobra, it crawls the specified host over the web and attempts to find all the CGIs, or any other applications, where parameters can be passed. It then uses a few techniques to try to read files on that machine. By default it attempts to read /etc/passwd, and if successful it displays the URL it used to access the file.
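The probing step described above can be sketched roughly as follows. This is a minimal Python illustration of the idea only, not the actual Perl source; the payload list, function names, and the detection hint are my own assumptions:

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

# Traversal payloads of increasing depth; a real scanner would try many
# more variants. The slashes get URL-encoded in the query string below,
# which the target CGI decodes before using the value.
PAYLOADS = ["../" * depth + "etc/passwd" for depth in range(1, 4)]

def probe_urls(cgi_url):
    """For each parameter in a CGI URL, yield variants where that one
    parameter is replaced by a directory-traversal payload."""
    parts = urlparse(cgi_url)
    params = parse_qsl(parts.query)
    urls = []
    for i, (name, _) in enumerate(params):
        for payload in PAYLOADS:
            mutated = list(params)
            mutated[i] = (name, payload)
            urls.append(urlunparse(parts._replace(query=urlencode(mutated))))
    return urls

# A hit would then be detected by fetching each URL and looking for
# tell-tale /etc/passwd content such as "root:x:0:".
```

For example, `probe_urls("http://host/cgi-bin/view.cgi?page=index.html&lang=en")` yields six candidate URLs, mutating `page` and `lang` in turn while leaving the other parameter intact.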
The core of screamingCobra was originally written at DefCon 9, specifically at Caezar's Challenge V, for Challenge B:
"Identify hypothetical cases of common bugs in server-side programs and then to describe algorithms that could detect those problems from a special version of the client software"
I did just that, and wrote a program to go along with it.
Configuration
There's not much, if any, configuring to be done, although there are a few things you may want to change. I'll go over those now. Open up screamingCobra.pl in a text editor and check out these lines:
Line 24: This is the file it will attempt to access. Change 'etc/passwd' to, say, 'bin/ls' to attempt to read /bin/ls. I recommend KEEPING /etc/passwd as the default.
Line 25: This is the additional technique for finding vulnerabilities. Leave it alone if you don't know what it's doing :)
Line 27: These are the HTML tags to look for that contain URLs. By default, the array includes 'a' (for <a href...), 'img' (<img src...), 'body', 'area', 'frame', and 'meta'. You may add more; just follow the defaults.
Line 31: These are the tag attributes to look for inside a tag, such as 'href' (for <a href...) and 'src' (<img src...). This will NOT just look at the 2nd word in the tag, but at any word following whitespace, so it WILL catch something like <a blah="" href="...">.
Line 35: Extensions of files not to do a GET on, because they usually don't contain HTML and are a waste of bandwidth.
Line 40: This is the basic header sent to the server when requesting a page or CGI. screamingCobra randomly chooses one for each GET it does; you can add more, following the two defaults.
That's it! You probably didn't have to change or add anything, but it's good to know how.
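The tag and attribute lists described above drive the crawler's link extraction. Here is a rough Python sketch of the idea (the real script is Perl; the class and variable names here are illustrative), using the default tags and attributes from the configuration:

```python
from html.parser import HTMLParser

# Mirrors the defaults described above (lines 27 and 31 of the script).
TAGS = {"a", "img", "body", "area", "frame", "meta"}
ATTRS = {"href", "src"}

class LinkExtractor(HTMLParser):
    """Collect URLs from any configured attribute of any configured tag.
    Attributes are matched by name rather than position, so a tag like
    <a blah="" href="..."> is still caught."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag in TAGS:
            for name, value in attrs:
                if name in ATTRS and value:
                    self.links.append(value)

parser = LinkExtractor()
parser.feed('<a blah="" href="/cgi-bin/test.cgi?x=1"><img src="logo.gif">')
# parser.links now holds both URLs, ready to be queued for crawling.
```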
Running
usage: screamingCobra.pl [-e] [-i] [-s|-v] <http://host.name>[:port][/start/page]
-e: EXTRA TECHNIQUES
Uses multiple techniques to find bugs. This will take more than twice as long to complete a scan, and the bugs these extra techniques look for are not commonly found in applications, but if you need to do a very thorough pen test, you may want to use this option.
-i: DON'T IGNORE ANY FILES
In the program there is a user-configurable array of extensions to ignore (not to GET), which includes images, compressed files, etc. Those files are usually not HTML pages, so there's no useful data in them, and they may take up a lot of bandwidth as well. This option disables that list, so screamingCobra will not ignore any files.
-s: STATUS BAR
This displays a status bar with constantly updated counts of pages accessed, bugs found, and attempted vulnerability scans. Cannot be used with verbose, although the status bar is ALWAYS displayed when the user kills the application (^C) or when it finishes crawling.
-v: VERBOSE
This displays every file being accessed, and also notes when CGIs are found and when attempts are made to break them (to find vulnerabilities). Cannot be used with the status bar, although a status bar is ALWAYS displayed when the user kills the application (^C) or when it finishes crawling.
<http://host.name>: Hostname or IP of host to scan. [REQUIRED]
For example, http://cobra.lucidx.com
[:port]: Port to connect to, default is 80.
For example, http://cobra.lucidx.com:80
[/start/page]: Page to start on.
For example, http://cobra.lucidx.com/screamingCobra-1.00/ or http://cobra.lucidx.com:80/index.html
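The ignore list behind the -i option can be thought of as a simple extension check before each GET. A Python illustration of the idea (the extension list here is a guess at typical entries; the real, user-configurable list lives around line 35 of screamingCobra.pl):

```python
from urllib.parse import urlparse

# Illustrative ignore list; edit to taste, as with the Perl array.
IGNORE_EXTS = {".gif", ".jpg", ".png", ".zip", ".gz", ".tar"}

def should_fetch(url, ignore=IGNORE_EXTS):
    """Return False for URLs whose path ends in an ignored extension,
    since those rarely contain HTML and waste bandwidth. Running with
    -i would correspond to passing an empty ignore set."""
    path = urlparse(url).path.lower()
    return not any(path.endswith(ext) for ext in ignore)
```

For example, `should_fetch("http://host/logo.gif")` is False by default but True with `ignore=set()`, which is exactly the behavior -i enables.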
Download
Use one of the links to download screamingCobra from our site or one of our mirrors.
screamingCobra v1.00 @ dachb0den.com
MD5: 13f513b87520c2c23e1102c17b659360
Results
Some administrators and security consultants have told me that they've used previous versions of screamingCobra to find some, if not many, bugs in CGIs, and were able to patch them quickly and easily, thanks to screamingCobra :)
I've been TOLD that vulnerabilities have been found in:
WebCT - A popular application for online classes in universities and colleges.
The day after I released the original crawl5b, it was RUMOURED that vulnerabilities (the /etc/passwd read) were found with screamingCobra on these sites:
CNN.com or a sub-host of it (?.cnn.com)
ATT.net
ZDnet.com
washingtonpost.com
Fact or fiction? You be the judge :)
/etc/passwd is the file that screamingCobra attempts to access by default (with the extended techniques it also attempts to execute perl).
Contact
That's all for now. Send me any questions or comments, hope to hear from you all! :)