Phone phishing: just one way to social engineer information from end users

Social engineering is used to obtain or compromise information about an organization or its computer systems. An attacker may seem unassuming and respectable, possibly claiming to be a new employee, repair person, or researcher and even offering credentials to support that identity.

The following is a recent real-life example that, at first glance, seems quite innocuous.

An associate’s phone rang.  The caller identified herself as working for the accounts receivable department.  She told him that his phone extension was noted as sitting near an HP color printer and asked if he could provide the model and serial number for her records.  (Before we go any further, how many of you reading this sit “near” an HP printer?)

The user was keen enough to ask for the caller’s name.  She responded with only a first name: “Kathy.”  Fortunately, this set off a red flag that something may not be completely legitimate about her request.  He indicated it wasn’t a good time for him and asked if he could gather the information and send it to her in an email.  Still suspicious, but now afraid the caller might simply hang up, the user stalled and answered, “oh yes, there is an HP printer right here,” and gave the model number, but nothing specific to the device or the company he works for (no serial number or IP address).

After he said this, the caller seemed more interested again and went on to ask how they administer and maintain the printers.  The end user indicated he wasn’t sure and would have to ask.  He then asked for her last name, to which she responded “White.”  Being resourceful, the user quickly checked the company’s Active Directory.  No users matched that name.

He then offered to get the rest of the information and call her back.  The caller claimed that the phone she was using could only make outbound calls and that she wasn’t sure what number would reach her area (does this sound like any phone in your company?).  When he insisted he’d need to call her back, she quickly hung up on him.

By asking specific and probing questions, a caller may be able to piece together enough information to infiltrate an organization’s network. If an attacker is not able to gather enough information from one source, he or she may contact another source within the same organization and rely on the information from the first source to add to his or her credibility.  While each piece of information may seem insignificant by itself, taken together they may give a hacker exactly what is needed to footprint a company or network and run a targeted attack against the environment.

iOS 4.2 is out! Update your iDevice!

While many people (me included) are happy to update their devices to iOS 4.2 for the new features it enables, most are not aware of the security fixes it also includes.  iOS 4.2, like many prior iOS updates, addresses multiple vulnerabilities. Exploitation of these vulnerabilities may allow an attacker to execute arbitrary code, initiate a call, cause a denial-of-service condition, gain system privileges, or obtain sensitive information on your iPhone, iPad, or iPod touch.  (While there is also an update for Apple TV, I’m not aware of what, if any, vulnerabilities were addressed with that update.)

A quick overview of these fixes: an issue with the new iAd service where ads could send you to malicious sites, a Mail issue where specially crafted HTML emails could send information back to the sender, and a networking issue where specially crafted PIM messages could cause a denial-of-service condition or shut the device down completely.

For a full list of the vulnerabilities addressed, please see Apple’s security page here:  http://support.apple.com/kb/HT4456

The Broken Window Theory applied to Information Security

The Broken Window Theory has two widely accepted interpretations.

The original was an economic theory proposed in the 1850s.  Essentially, it stated that even something bad that happens (e.g., the breaking of a window) has a positive effect on the economics of a society (another window must be made, and someone employed to install it).

The more contemporary theory is focused on criminology and was originally proposed in the March 1982 edition of The Atlantic Monthly.  It states that if a few broken windows go unrepaired, there is a higher propensity for other windows to be broken, and from there an even greater chance that other nefarious activities will become prevalent in that location.

I’m going to take a leap here and apply the second theory to Information Security and risk reduction.

According to the theory, three factors explain why the condition of the environment affects crime (and the opportunity for crime):

  • social norms and conformity
  • the presence or lack of monitoring, and
  • social signalling and signal crime

In this first part, I’ll use Information Security examples to explain these factors:

Whether intentionally or not, the policies we create and enforce will shape the social norms of our computing environments.  If you do not enforce proper patch management or secure coding, you create a social norm where it is implicitly acceptable to ignore those policies.  Social norms tell us that people will do as the group does and will monitor others to make sure they act in the same manner.  If this holds true, then herein lies the answer to many departments’ problems: make sure you have a policy that is well enforced and communicated to your end users, and the end users will help expand your monitoring capabilities to ensure it is being followed.  Seems too simple, right?

The second factor is the presence of monitoring.  Because of the nature of our environments, it’s not always possible for people to get feedback from those around them, and you cannot rely on (or even expect) norms to be transmitted by others.  In this case, you turn to your tools.  Even though you may have created and communicated the appropriate policies, you now need technical controls in place to enforce them.

These technical controls are the third factor: signals that indicate to end users whether or not their activities are compliant.  So add accurate, timely, and visible monitoring to your list.
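To make that concrete, here is a minimal, hypothetical sketch of one such signal: a small check that flags hosts falling outside an assumed 30-day patch window.  The hostnames, dates, and threshold are all made up for illustration; in practice the data would come from your patch-management or inventory tooling.

  # Hypothetical sketch: a technical control that "signals" compliance by
  # flagging hosts whose last patch date falls outside the policy window.
  # The inventory and the 30-day window are assumptions for illustration.
  from datetime import date, timedelta

  PATCH_WINDOW_DAYS = 30  # assumed policy: patch within 30 days

  # Hypothetical inventory: hostname -> date of last successful patch run
  last_patched = {
      "ws-accounting-01": date(2010, 11, 20),
      "ws-engineering-07": date(2010, 9, 2),
      "printsrv-02": date(2010, 10, 15),
  }

  def overdue_hosts(inventory, today, window_days=PATCH_WINDOW_DAYS):
      """Return hosts that have not been patched inside the policy window."""
      cutoff = today - timedelta(days=window_days)
      return sorted(host for host, patched in inventory.items() if patched < cutoff)

  for host in overdue_hosts(last_patched, today=date(2010, 11, 24)):
      # The visible, timely output is the "signal" the theory calls for.
      print("NON-COMPLIANT: %s is outside the %d-day patch window" % (host, PATCH_WINDOW_DAYS))

The point isn’t the code itself but the pattern: the policy (the window), the monitoring (the inventory check), and the signal (the visible output) all line up.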

The other key takeaway from the Broken Windows theory is that addressing problems while they are small gives you the opportunity for easier, less expensive fixes.  A sound risk management methodology tells you that addressing issues like patch management, policy violations, and secure coding practices early is less costly and less difficult than addressing them after they have been exploited and you are dealing with a breach or data loss.

Sadly, the early economic theory of broken windows would state that all of these things are good: if a breach occurs, many people will be employed conducting the investigation and doing research.  I feel I can confidently say that the businesses we own or work for would not be satisfied with us following that theory.  It is far better to adopt the social/criminology theory and begin remediating many of our issues before they become larger problems.

Google dumps Windows? Are Google’s security issues really related to Microsoft?

According to an article in the Financial Times, Google is no longer deploying user systems with a Microsoft operating system without very high-level clearance.  The statement (seemingly not from an official spokesperson) is that they are tightening up security after the recent Aurora incident.

Let me make just a few observations:

  1. The IE vulnerability was a zero-day attack.  If Google were to drop any browser vendor that has been subject to a zero-day, I would assume they won’t be using Chrome either?
  2. Sure, the vulnerability was against IE, but it covered IE 6, 7, and 8.  From all accounts, it seems that Google was using IE 6 (at least that’s what the recounts of the exploit tell us).  Any reason Google isn’t on a more current browser?
  3. A key piece of what made the exploit possible was the social engineering aspect.  Is Google replacing the employees who were “fooled” by these social engineering attempts with ones that are “more secure”?
  4. Better handling of user and admin rights.  Since the exploit runs with the privileges of the user running IE, a non-privileged account would have limited what the attacker could do.  Sure, this is easily done in OS X or Linux, but it’s also possible in Windows.  This is an issue of proper system management, not the OS itself.  Has Google changed this?
  5. Monitor internal DNS queries.  Most command-and-control malware calls out to dynamic DNS hosts.  Your clients (if well configured) should not be making dynamic DNS queries; those requests should be aggregated and logged by your local DNS resolver.  Assuming you are properly monitoring your logs, that will offer more visibility (see the sketch after this list).  Google?
  6. Audit VPN user and config changes.  The last step, once admin accounts were compromised, was to add a VPN user so the attackers had easy access into the network.  A more robust account administration process with automated checks/controls and an audit process would also have brought the issue to light more quickly.  Google, what process changes have happened there?
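On point 5, here is a minimal sketch of what that kind of DNS visibility might look like.  The log format, the client addresses, and the short list of dynamic-DNS suffixes are all assumptions made for illustration; substitute your resolver’s real log layout and whatever domain list your organization maintains.

  # Hypothetical sketch: scan aggregated resolver logs for queries to
  # dynamic-DNS provider domains. Log format and suffix list are assumptions.
  import re
  from collections import Counter

  # Assumed (non-exhaustive) sample of dynamic-DNS provider suffixes.
  DYNDNS_SUFFIXES = (".no-ip.org", ".dyndns.org", ".3322.org")

  # Assumed log line format: "<timestamp> <client-ip> query: <name> IN A"
  LOG_LINE = re.compile(r"^(?P<ts>\S+)\s+(?P<client>\S+)\s+query:\s+(?P<name>\S+)")

  def suspicious_queries(lines):
      """Yield (client, queried_name) for queries to dynamic-DNS domains."""
      for line in lines:
          match = LOG_LINE.match(line)
          if not match:
              continue
          name = match.group("name").rstrip(".").lower()
          if name.endswith(DYNDNS_SUFFIXES):
              yield match.group("client"), name

  sample_log = [
      "2010-06-01T09:14:02 10.1.4.23 query: intranet.example.com IN A",
      "2010-06-01T09:14:05 10.1.4.23 query: c2-host.3322.org IN A",
  ]
  hits = Counter(client for client, _ in suspicious_queries(sample_log))
  for client, count in hits.items():
      print("REVIEW: %s made %d dynamic-DNS lookups" % (client, count))

Run daily against your resolver logs and fed into whatever alerting you already have, even something this simple provides the kind of visibility I’m asking Google about.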

Sure, Microsoft has a name for being not-so-secure.  However, it seems that Google may be using this to push the move to its own new operating system and give itself an excuse to do so.  If they were concerned about improving their security posture, there would be a lot of process changes internally (and I’m quite certain there are), but let’s not try to pin the blame on Microsoft here.