Appsec Testing Tips: Edge Cases & Tool Chaining


At BruCon 2011 I gave a talk called The Web Application Hackers Toolchain. In this talk I outlined several non-standard additions and aids for web pentesters. One section in particular covered leveraging tool chaining for better application mapping.

In the appsec space, the biggest challenges to testing come from edge cases in heavily dynamic pages, non-standard payload types, or pages using special authentication routines: things like REST, AJAX, NTLMv2, Kerberos, Flash, Silverlight, ActiveX, Java serialized objects, encoded JSON, innerHTML, ViewState, CSRF tokens, etc.

It's sad to say, but most standalone tools do not test these correctly, or require heavy customization to deal with them. In these cases it becomes necessary to utilize a toolset that includes upstream proxy options. The advantages of tool chaining are several, but I'll start with some easy examples and then reference others. In these scenarios I'm assuming you're using some type of scanning tool (open source or enterprise level). Without going into which tools are better for each type of edge case, I'll instead focus only on the ones we'll use to solve an issue.

Let's start with the elephant in the room, AJAX. Almost every site uses it these days, in one fashion or another.

Issue: As an appsec tester you need visibility into every parameter for fuzzing coverage. Crawler engines in these tools work on link-based logic and tend not to fire the DOM events that might be crucial to the application. Some crawling engines can parse these AJAX calls, but they don't map them as completely as a browser actually interacting with the page/DOM would. You could manually walk the site and interact with it (which should always be done), but in enterprise or large applications, ensuring you hit all of the site's functionality is difficult. This becomes even harder if you don't have a good set of test data (I'll go into this more later), like application-specific "stuff" (credit card numbers, transaction IDs, any non-standard data used to drive application logic).
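To make that blind spot concrete, here's a minimal sketch of what link-based crawl logic actually sees (the target URL is hypothetical): static href targets only. Anything reached through an onclick handler or an XHR call simply isn't in the list.

```python
import requests
from html.parser import HTMLParser

# Minimal stand-in for link-based crawl logic: collect href targets from
# the static HTML. Endpoints hit only via onclick handlers or XHR never
# show up here; that's the AJAX blind spot.
class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href" and v)

html = requests.get("https://target.example/app").text  # hypothetical target
collector = LinkCollector()
collector.feed(html)
print(collector.links)  # DOM-event-driven endpoints are absent from this list
```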

Solution(s): Like I said, using the browser and manually walking the app is your best bet on smaller sites. The next option involves tool chaining. Most scanners have the ability to act as an inline proxy (Burp, WebInspect, etc.). If they don't, like some command-line scanning tools, they can take a flat file of links and crawl/spider from those. To a lesser extent, this can capture application functionality. So right now we have:

Browser > Scanner in proxy mode

At this point you are browsing the site, filling up the site map and executing/mapping all the main functions. Later, after you finish walking the site, you can have your scanner fuzz from that site map.
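As a quick illustration of that chain: if all you have is a flat file of links (from an earlier crawl, a sitemap export, wherever), you can replay it through the scanner's proxy listener yourself to populate the site map. A minimal sketch; the listener address 127.0.0.1:8080 and the urls.txt file are both assumptions:

```python
import requests

# Assumed: the scanner's proxy listener is on 127.0.0.1:8080 and urls.txt
# holds one captured URL per line. Both are placeholders for this sketch.
proxies = {"http": "http://127.0.0.1:8080",
           "https": "http://127.0.0.1:8080"}

with open("urls.txt") as f:
    for url in (line.strip() for line in f):
        if not url:
            continue
        # verify=False because the proxy re-signs TLS with its own CA
        resp = requests.get(url, proxies=proxies, verify=False, timeout=10)
        print(resp.status_code, url)
```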

Now let's take a new tool called ACT (Ajax Crawling Tool) and add it to the mix. ACT uses Selenium to actually open a browser and spider the site, so DOM events actually get executed.

(Everything above, already done) > ACT browser > Scanner in proxy mode

This setup should be used in addition to, and after, walking the site manually to populate a site tree in your scanner. Now ACT does its thing and hopefully finds some functions you didn't (this is common in large apps). When you launch the fuzzing tests in your scanner, you can sleep better knowing you probably have executed everything.
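If you're curious what ACT is doing under the hood, here's a minimal stand-in sketch (not ACT itself) using Selenium's current Python bindings: drive a real browser through the proxy and click anything clickable, so DOM events fire and the resulting requests land in the site tree. The proxy address and target URL are assumptions.

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By

# Route the browser through the scanner/Burp listener (assumed 127.0.0.1:8080)
opts = Options()
opts.add_argument("--proxy-server=http://127.0.0.1:8080")
opts.add_argument("--ignore-certificate-errors")  # the proxy's CA isn't trusted

driver = webdriver.Chrome(options=opts)
driver.get("https://target.example/app")  # hypothetical target

# Fire the DOM events a link-based crawler skips: click every obvious handler.
for el in driver.find_elements(By.CSS_SELECTOR, "button, [onclick]"):
    try:
        el.click()
    except Exception:
        pass  # hidden/stale elements are routine on dynamic pages

driver.quit()
```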

Now, good application security testers don't just stop there. We use some sort of inline proxy tool to tamper with individual requests, or to perform custom checks on functions that look interesting. In this case we are going to use Burp. Don't lie to yourselves: if you're not using Burp (ZAP is damn close to catching up, though), then you are not doing in-depth analysis. Let's try this:

Step 1:
Browser (walk the app manually) > Burp > target

Step 2:
ACT Browser > Burp > target

Step 3:
Burp Spider > Scanner in proxy mode > target

So, some advantages here: Burp now has a good site tree with parameters parsed well from AJAX calls (thanks to a manual walk and ACT). We can then execute Burp's spider, which will use our already filled-in site tree and possibly improve upon it. When we hit "scan" in our scanning tool, it will also spider/crawl the site before fuzzing it. Quadruple crawl coverage. Also, Burp now has a full site tree for your manual Intruder/Repeater fuzzing tests (we love fuzzdb).
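As a taste of that last step, here's a minimal sketch that replays fuzzdb payloads against a single parameter through Burp, so every probe also lands in the proxy history for follow-up. The endpoint, parameter name, and payload file path are all assumptions:

```python
import requests

# Send everything through Burp (assumed on 127.0.0.1:8080) so each probe is
# logged in the proxy history alongside the rest of the site tree.
proxies = {"http": "http://127.0.0.1:8080",
           "https": "http://127.0.0.1:8080"}

# Hypothetical local path into a fuzzdb checkout; any payload list works.
with open("fuzzdb/attack/xss/payloads.txt") as f:
    payloads = [line.strip() for line in f if line.strip()]

for payload in payloads:
    resp = requests.get("https://target.example/search",  # hypothetical endpoint
                        params={"q": payload},            # hypothetical parameter
                        proxies=proxies, verify=False)
    if payload in resp.text:
        print("possible reflection:", payload)
```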

Caveats: Chaining a scanner to a scanner can be done, but keep it to crawling only; fuzzing/scanning early in the chain will pollute the site trees of everything upstream. If you keep your Burp session clean, you can use it as a jump-off point for a different scanner.

The biggest pushback against this idea is: "Shouldn't X tool do this for me?" In a perfect world, yes, but we're just not there yet. Some areas of appsec testing (both dynamic and static) will always require a sharp mind and a dynamic toolset to test around edge cases. Be very wary of vendors telling you they do this; make them show you. Dinis Cruz posted a while back on how awesome this type of thing is… edge case testing is what separates good appsec testers from great appsec testers. I just thought we knew this already.

Other examples of tool chaining:

Using Burp to rewrite requests for your scanner: scanner > Burp rewrite rules > target (sketch below)
Using your scanner to deal with authentication Burp can't: scanner that handles Kerberos > Burp > target (sketch below)
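Burp's rewrite rules live in its GUI (match-and-replace under the proxy options), so there's nothing to script there; but if you want the rewrite hop itself scriptable, a mitmproxy addon is a reasonable stand-in for Burp in that first chain. A minimal sketch; the header name and value are assumptions:

```python
# Stand-in for the "Burp rewrite rules" hop using a mitmproxy addon.
# Run with: mitmdump -s rewrite.py   (point the scanner's proxy at mitmdump)
from mitmproxy import http

def request(flow: http.HTTPFlow) -> None:
    # Example rewrite: inject a token header the scanner can't manage itself
    flow.request.headers["X-Session-Token"] = "REPLACE-ME"  # hypothetical header
```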

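And for the second chain, if the "scanner that handles Kerberos" end is your own script, the requests-kerberos package can do the negotiation while Burp sits in the middle and records the authenticated traffic. A minimal sketch, assuming Burp on 127.0.0.1:8080 and a hypothetical intranet host:

```python
import requests
from requests_kerberos import HTTPKerberosAuth, OPTIONAL

# The client side does the Kerberos handshake; Burp (assumed 127.0.0.1:8080)
# just proxies and logs the authenticated requests for tampering.
session = requests.Session()
session.auth = HTTPKerberosAuth(mutual_authentication=OPTIONAL)
session.proxies = {"http": "http://127.0.0.1:8080",
                   "https": "http://127.0.0.1:8080"}
session.verify = False  # Burp re-signs TLS with its own CA

resp = session.get("https://intranet.example/secure/")  # hypothetical host
print(resp.status_code)
```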
TL;DR: Feed tools with tools for fun and profit, and a new Web Application Hackers Toolchain 2.0 talk is coming šŸ™‚
