This week Tenable released a new “plugin” (what they call a vulnerability detection) named “Web Server HTTP Header Information Disclosure”, plugin ID 88099. In spite of even the title saying it is only an information disclosure, they rate it a medium. In my environment that means we need to address it. I think it's a little crazy for an information disclosure vulnerability to be rated that high. It turns out Tenable has ceded vulnerability severity ratings to the CVSS system, so because this has a CVSS score of 5, it has to be rated medium.
Now with SecurityCenter, I'd be able to change the severity of this detection; I'm not sure that's possible in Nessus. Even so, when scanning servers for other people, you can't just change the results of the scan. And there's the problem: the other party's security people don't have the ability to make rational security decisions. They just want all the detections gone.
Hiding the web server banner is one of those vulnerability detections from 15 years ago, so it's kind of weird that Tenable is only writing this detection now. Having a server banner visible isn't some vulnerability in the server software; it's part of the standard. Who is removing this information supposed to stop? It might stop a script that checks server versions and then applies a specific exploit or test (perhaps it would stop a naive vulnerability scanner). That's about it.
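You can see exactly what a scanner sees with a quick header check; a sketch using curl (the hostname is a placeholder, not a real target):

```shell
# Fetch only the response headers and pull out the ones that disclose
# server software and version. The hostname is a placeholder.
curl -sI https://www.example.com | grep -iE '^(server|x-powered-by|x-aspnet-version):'
```

On a stock IIS box you'd typically see a `Server:` line with the version and an `X-Powered-By: ASP.NET` line, which is all the detection is keying on.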
It would be one thing if it were easy to change. For example, the “X-Powered-By: ASP.NET” header is easy to remove. Removing the IIS version, though, is probably going to require URLScan, as if this were IIS 4.
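For the easy part, IIS 7 and later can strip the ASP.NET header with a small web.config change (a sketch; the IIS `Server:` header itself is the part that still needs URLScan or similar):

```xml
<!-- web.config: stop sending the X-Powered-By response header (IIS 7+) -->
<configuration>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <remove name="X-Powered-By" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>
```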
I had some Windows 2008 R2 servers in Amazon AWS EC2. To save some money, they were turned off when they weren't needed. I noticed when I did boot them that they had time issues, apparently jumping from US Eastern time to UTC for a while before switching back.
It seems that when you search for time issues, specifically a *nix host operating system set to UTC and a Windows guest OS set to a local timezone, people will link you to the “RealTimeIsUniversal” registry key.
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\TimeZoneInformation\RealTimeIsUniversal = 1 REG_DWORD
The problem is, that registry key was already set.
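For reference, the key can be checked (and set) from an elevated command prompt; a sketch:

```
rem Confirm the value is present and set to 1
reg query "HKLM\SYSTEM\CurrentControlSet\Control\TimeZoneInformation" /v RealTimeIsUniversal

rem Set it if it is missing
reg add "HKLM\SYSTEM\CurrentControlSet\Control\TimeZoneInformation" /v RealTimeIsUniversal /t REG_DWORD /d 1 /f
```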
Further searching brought me to Amazon’s article about setting the time for a Windows OS.
This had a couple of suggestions: make sure that KB2800213 and KB2922223 are installed. After some searching it turned out that KB2800213 was superseded by KB2922223, and KB2922223 was already installed.
Checking the Windows Event Log showed the time was changed by the Citrix Tools for Virtual Machines service (“C:\Program Files (x86)\Citrix\XenTools\XenGuestAgent.exe”).
I verified that this service was causing the issue by restarting just the service. Sure enough, the time changed to UTC. Then when I opened up time in Windows and had it check against the NTP server, it changed back to local time.
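The reproduction amounts to restarting the service and then forcing an NTP resync; a sketch from an elevated prompt (the exact service display name may differ by XenTools version, so confirm it in services.msc first):

```
rem Restart the Citrix guest agent -- watch the clock jump to UTC
net stop "Citrix Tools for Virtual Machines Service"
net start "Citrix Tools for Virtual Machines Service"

rem Force a sync against the configured time source -- the clock jumps back
w32tm /resync
```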
To resolve the problem, I upgraded the Amazon EC2 paravirtual drivers, which had a prerequisite of updating EC2Config.
With a solution found and tested on one server, I turned over the resolution on the other servers to the System Administrators.
Incorrect time impacts security logs and any subsequent troubleshooting or investigation. According to Amazon, issues like this can cause problems with DHCP leases, and there can be any number of unknown application problems. I expect Kerberos wouldn't be very happy either.
I moved this site over to Cloudflare.
The previous CDN doesn't give SSL to free accounts. I've wanted to get SSL on here for many reasons, such as protecting my logins. Additionally, SSL is necessary for the SPDY protocol, which should speed up the site. I'm expecting Cloudflare to migrate to HTTP/2 as that becomes the new standard. Google reportedly also gives a tiny page-ranking boost to encourage the adoption of SSL.
I also like that using Cloudflare means I can have DNSSEC (which I haven't turned on yet). By hosting my DNS with Cloudflare, I no longer need to pay for a Dyn account, which was necessary with the last CDN I was using (due to the way my webhost does DNS).
If you notice anything not working let me know in the comments.
Management types are always trying to push BitLocker rather than third-party encryption because it's free. “Free” as in “included in Windows Professional/Enterprise”. They never consider the less obvious costs in usability and to the helpdesk. The Windows guys would even team up with the management types, complaining that non-Microsoft full disk encryption products made system deployment difficult. (There are of course ways to make McAfee encryption work with things like MDT. I don't know about the other products.)
For me it always came down to two main things.
- It's not acceptable security, to me, to use BitLocker without pre-boot authentication.
- Using BitLocker with pre-boot authentication is kind of annoying.
  a. BitLocker pre-boot authentication requires a per-machine password. Users would need to know this additional password, rather than the single sign-on used by non-Microsoft alternatives.
  b. The password recovery options available are kind of cumbersome.
This month, Microsoft released security bulletin MS15-122 to patch a vulnerability in BitLocker when used without pre-boot authentication. The attack involves spoofing a domain controller.
Dell Secureworks has identified targeted attacks occurring through LinkedIn.
In this attack, a fake user with a network of connections is created. Under the guise of a recruiter, they contact targets of opportunity (think sysadmins at a target company). The victim is enticed to visit a purported resume-submission website. And then you have malware.
- As always, on LinkedIn be aware that people may not be who they claim to be.
- If you are going to apply for a job, go to the known, established website of the company and click on something like “Careers” to find a link to their job site. Where it's an external recruiter contacting you, take care in establishing their bona fides.
- Don't be part of the problem. Only accept connections from people you know and trust. Your connection is an implicit endorsement to other people.
I was just recommending LastPass on a corporate Chatter. Then I read that LogMeIn has bought LastPass.
LogMeIn isn't one of my favorite companies. IIRC, it was quite impossible to block LogMeIn's enterprise-security-circumventing product without also blocking remote support sessions, because they used the same servers for both. GoToMyPC, on the other hand, provided a way specifically to block its use in an enterprise, and kept their GoToAssist/GoToMyPC servers separate. This was a few years ago, so perhaps things have changed.
Since that time, LogMeIn has annoyed people by doing away with their free product (making my need to block it much less) and by engaging in rampant price hikes for those foolish enough to pay for the service.
Even if none of the above were true, our passwords make a huge target. LastPass was believed to be a security company, one that realized it would lose everything if it failed to protect our encrypted passwords. Even then, twice now we've all had to change our master passwords out of an abundance of caution. Now they're being bought by a company that doesn't seem to have the same drive for security.
It is very disappointing.
An old one, but new to me.
The FBI is investigating the St Louis Cardinals for a hack of the Houston Astros.
The Cardinals reviewed a “master list of passwords” to gain access to the Houston prospect database. A former employee of the Cardinals had gone on to work for Houston, setting up this system. The FBI tracked the unauthorized logins to the home of Cardinals team officials.
Source: NY Times. (If the link is paywalled, do a Google search to find the article, or add a Google referer to your request.)
This illustrates why password reuse is a problem. Additionally, if passwords were routinely changed, then even an admin who used the same password initially would be forced to change it to something else. One does wonder about this “master list of passwords”. I'm guessing these were service or admin account passwords, rather than the organization knowing individual user passwords. At least I hope so.
I’m going through RackSpace’s free CloudU training.
They use the analogy of infrastructure a couple of times.
Just as it would be bizarre, given the widespread availability of electricity on tap, for an organization to build its own power plant to run its factory, so too is it becoming more bizarre to host one's own software or buy one's own hardware.
It sounds great. I suppose we shouldn’t look at it too much though.
What business of any size doesn’t have battery backups on the data center and network gear? Who doesn’t have a generator to keep the servers running if the outage is more than a flicker?
How many data centers have the windfarm and solar panels storing energy in batteries for use later?
It's rare, but people do go off-grid (on homes, not data centers, as far as I know). The Tesla battery announcement may make this even more common.
Why would they choose to undertake this capital expense?
Infrastructure may free you to focus on your prime business, but it also makes you dependent. I'm not arguing against the cloud; I was just thinking about whether their analogy is already breaking apart as battery and green technology improve.