Phone phishing, just one way to social engineer information from end users

Social engineering is used to obtain or compromise information about an organization or its computer systems. An attacker may seem unassuming and respectable, possibly claiming to be a new employee, repair person, or researcher and even offering credentials to support that identity.

The following is a recent real-life example that seemed quite innocuous at first.

An associate's phone rang.  The caller identified herself as working for the accounts receivable department.  She told him that his phone extension was noted as sitting near an HP color printer, and asked if he could provide the model and serial number for her records.  (Before we go any further, how many of you reading this sit “near” an HP printer?)

The user was keen enough to ask for the caller's name.  She responded with only a first name: “Kathy.”  Fortunately, this set off a red flag that something might not be completely legitimate about her request.  He then indicated it wasn't a good time for him and asked if he could gather the information and send it to her in an email.  Still suspicious, but now afraid the caller might simply hang up, the user stalled and answered, “oh yes, there is an HP printer right here,” and gave the model number, but nothing specific to the device or the company he works for (serial number or IP address).

After this, the caller seemed more interested again and went on to ask how they administer and maintain the printers.  The end user indicated he wasn't sure and would have to ask.  He then asked for her last name, to which she responded “White.”  Being resourceful, the user quickly checked the company's Active Directory.  No users matched that name.

He then offered to get the rest of the information and call her back.  The caller indicated that the phone she was using could only make outbound calls, and that she wasn't sure what number would reach her area (does this sound like any phone in your company?).  When he insisted he'd need to call her back, she quickly hung up on him.

By asking specific and probing questions, a caller may be able to piece together enough information to infiltrate an organization's network. If an attacker is not able to gather enough information from one source, he or she may contact another source within the same organization and rely on the information from the first source to add to his or her credibility.  While each piece of information may seem insignificant by itself, in total they may give a hacker just the information needed to footprint a company or network and run a targeted attack on the environment.

iOS 4.2 is out! Update your iDevice!


While many people (me included) are happy to update their devices to iOS 4.2 for the new features, most are not aware of the necessary security fixes that are also included.  iOS 4.2 (like many iOS updates before it) includes fixes addressing multiple vulnerabilities. Exploitation of these vulnerabilities may allow an attacker to execute arbitrary code, initiate a call, cause a denial-of-service condition, gain system privileges, or obtain sensitive information on your iPhone, iPad, or iPod touch.  (While there is an update for Apple TV as well, I'm not aware of what vulnerabilities, if any, were addressed by that update.)

A quick overview of these fixes: an issue with the new iAd service where ads could send you to malicious sites, mail issues where specially crafted HTML emails could send information back to the sender, and a networking issue where specially crafted PIM messages could cause a denial-of-service condition or shut the device down completely.

To see a full list of the vulnerabilities addressed, please see Apple’s security page here:


Why do I do what I do?



I’ve taken a couple of weeks off from posting any new articles to enjoy the birth of our third child.  It has put me behind in my goal of at least one blog post per week on here for the year, but I feel I can make that up, and if I don’t, it was time off well spent.

An interesting thing about spending more time at home is that we get out of our daily ruts and routines.  With everything going on every day (school, extracurricular activities, board meetings for various groups, church, family events, etc.), I find that my wife and I end up having more status updates than time to talk.  It isn’t anything against her, nor a statement about our marriage, but more a matter of necessity and efficiency to make certain we’re keeping up with the responsibilities of day-to-day life and raising three children.

Conversely, at work, I tend to take for granted that my motivations and goals are clear.  Not because they are always explicitly communicated, but because we have a good team of people working for the same outcomes (at least, that’s my assumption).  What becomes interesting is how to relate those motivations and goals in my professional life, to my wife.

In the past two weeks, we’ve had one such conversation which has had me up at night thinking quite a bit.  Okay, truth be told, a 14 day old is keeping me up, but my mind is still using the time wisely… 🙂

I’m going to go with the assumption that not everyone reading this is aware of what my job specifically entails.  Technically, I work in Risk and Security, with my main focus on Governance, Risk, and Compliance (which we call GRC in “the biz”).  Rather than give you the standard business definition of how we interact with various lines of business and governmental regulatory bodies, I’m going to explain it from the perspective of being a parent.


As a parent, my main goal is to care for and protect my family.  This is focused on the here and now as well as on long-term support and stability, and it is done on a daily basis by providing food, shelter, and clothing.  In my professional life, this is similar to the support provided to different lines of business.  I’m not going to make marketing decisions, but we help provide direction and authority on certain regulatory matters which we communicate to our customers.  Our main purpose isn’t to help R&D come up with the ideas, but to help make them better.  Think of it in terms of the BASF brand statement: “We don’t make a lot of the products you buy. We make a lot of the products you buy better.”

I don’t think that this is terribly groundbreaking at this point.  Many people in the security, governance, or risk management area would consider this a given.  However, here’s something I think should be of interest:

Many of the things we do (projects, policies, etc.) may not be done with the appropriate objective.  I won’t say the intentions are incorrect; they aren’t.  But diffuse goals can make these efforts seem interruptive and can hurt their use and adoption because they aren’t properly focused.

Let’s focus again on parenting.  As a parent, it’s imperative that we teach our children.  Not necessarily in terms of the three R’s (of which only Reading actually starts with R, by the way), but in terms of personal responsibility, manners, morals (yes, I went there), and general behavior.  It’s fortunate that raising a child works the way it does.  When they are very young, they spend almost 100% of their life in your presence.  As time goes on, they spend less and less of that time around you.  As they become teenagers and adults, that shifts dramatically.  So if you are going to be effective at forming these appropriate behaviors, you had best start early and make sure you’ve instilled them well before they spend more time away from you than with you.

In psychiatry this is generally referred to as behavior modification.  As anyone who has learned a new skill or taught one knows, an effective feedback mechanism is necessary to speed and reinforce the learned behavior.  As parents, we don’t want our child to have to touch a hot stove to learn that they shouldn’t.  Yes, it is effective and immediate feedback, but we would much rather tell them 50 times not to touch it and have them learn through that reinforcement than have them go through the painful experience of the outcome.  A key point to being able to tell them 50 times is that you need to be in their presence all 50 times, and around a stove.  Over time, their behavior will be shaped by your teaching.


As children get older, you are not always in their presence to be able to teach them.  So you fill them with all of the information you can to help them make the best judgment they can in the situations they are given.

Let’s bring this back to risk management.  As I stated earlier, most things we do may not have the best stated objectives when they are done.  Most will say they do things like encrypt a user’s hard drive for the safety of the end user (or the data, hopefully), that they install products like data loss prevention (DLP) to stop users from losing or stealing data, and that we use complex passwords because users need to understand security better.  None of these are entirely accurate goals.

Whether stated in these words or not, our goal is (or should usually be) behavior modification.  The same process used to teach children can be highly effective in shaping the behaviors of your employees.  Yes, it is effective to have a developer lose his hard drive, have it sold on eBay, end up in your competitor’s hands, and have most of his life’s work (or at least intellectual work) given away.  Much like touching the hot stove, it could be an effective way for the developer to learn to protect information.  But it’s not the way you want him to learn, and certainly not at that expense.


So we put in place policies, processes, and often a lot of technology.  Unfortunately, these are often full of “don’t do’s”: don’t do this, don’t do that, etc.  If you try to do something unauthorized, a big red X appears on your screen.  This may seem well intentioned, but I think this approach to employee awareness is similar to TV teaching.  (If you don’t know what that is, I consider it letting Sesame Street teach your children social skills vs. you as a parent doing it.)  You leave too much judgment up to the employee, and often the result is a misinterpreted policy or procedure.  When they then try to do what they need to for their jobs, punitive action is taken against their efforts (they are blocked from what they are trying to do).  Because they have misunderstood the original intent, their next step is to figure out a way around the roadblock rather than understand why their actions brought them there.

This is particularly difficult in the implementation of a new piece of technology.  Often the approach is: there are a lot of them (employees) and only a few of us (the security team), so let’s put this in place and it will keep them from being able to… (whatever you’re looking to modify in their behavior).  Again, I would assert that this will only teach people to figure out ways around what you are trying to put in place.  Instead, the most effective technology implementation can be measured by one assessment: “Did you properly educate your employees such that, when something was implemented, there was little effect on how they did their work?”  Ultimately, if you can put in a control that is not disruptive at all, you have done an effective job of informing your user base and helped modify their behavior in advance.  This takes away the punitive approach to teaching behavior and should lead to a much higher adoption rate.

So, in simplest terms, the goal, as a parent or a manager, should very often be education and awareness.  If we always try to communicate in terms a 5-year-old can understand, with the clear intention of providing assistance (again, a non-punitive approach), we can learn to be more effective in our work.  If you’d like a nice physics analogy, remember that reduced friction (more education) means less force (effort) is needed to achieve a certain amount of work.  And aren’t we all looking for a way to get more done with the same or less effort (efficiency)?
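To push the physics analogy a little further, here is a minimal sketch of the friction arithmetic; the masses and friction coefficients are purely illustrative numbers, not measurements of anything:

```python
# Energy needed to drag a load a fixed distance at constant speed:
# friction force F = mu * m * g, so work W = F * d.
G = 9.81  # gravitational acceleration, m/s^2

def work_against_friction(mu: float, mass_kg: float, distance_m: float) -> float:
    """Energy in joules spent overcoming friction over the given distance."""
    return mu * mass_kg * G * distance_m

# Same load, same distance; only the "friction" (education level) changes.
high_friction = work_against_friction(mu=0.6, mass_kg=50, distance_m=10)  # little education
low_friction = work_against_friction(mu=0.2, mass_kg=50, distance_m=10)   # well-educated users

print(f"high friction: {high_friction:.0f} J")  # 2943 J
print(f"low friction:  {low_friction:.0f} J")   # 981 J
```

Cutting the coefficient of friction to a third cuts the required effort to a third for the same amount of work done, which is the point of the analogy: education lowers the resistance, so the same business outcome takes less force.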

How to follow me, well my car at least…

Conspiracy theorist ready your tin hats!

I’ve taken to listening to podcasts instead of music while running, and I heard some interesting news that encouraged me to rush back to my computer this morning and do some research.

History: Most of you will remember the Firestone tire recall of 2000, when more than 100 deaths were attributed to tread separation, linked in part to under-inflated tires.  In response, the TREAD Act was passed and signed during the Clinton Administration.  One of its key provisions was that all cars sold after September 1, 2007 have TPMS (Tire Pressure Monitoring Systems) installed, giving the driver near-real-time information on the status of tire pressure.  The information is fed back to your car’s ECU (the “computer”), which presumably knows the optimum pressure for your factory tires and warns you of over- or under-inflation.

If you don’t know how these work: they are small devices stuck to the inside of your rim, each with a small RF sensor run by a small watch battery.  The information is not truly real time; it is sent periodically (at 60-90 second intervals) to your car’s computer.  However, your computer is always “listening” for input from these devices.

The news is that researchers from Rutgers University have published a press release saying they will discuss the dangers of spoofing these devices in order to gain access to the computer, possibly causing issues for the driver or the vehicle’s control systems.  The crux of the issue is that these devices have relatively short 32-bit IDs, with no encryption between the tag (sensor) and the control unit.  According to the researchers, the protocol is also quite simple and easy to spoof.  They will (presumably) demonstrate this week how they can send and receive signals from these units up to 40 meters away.

So let’s put a privacy spin on this (ready your tin hats!).

  1. The sensors have a broadcast range of roughly 40 meters
  2. The IDs are easily spoofable (and easily identified)
  3. There isn’t any encryption
  4. The protocol is simple
  5. Broadcasts occur in timed increments (60-90 seconds)

So, do you want to follow me?  You could.  Building a single sensor that reads the ID from one (or all) of my TPMS units would be quite simple.  Place it in a location where I’m going about 1.5 MPH or less (rough math using 40-meter coverage and a 60-second broadcast window) and you have a reasonable chance of confirming my presence, or at least my car’s presence, at that location.  Granted, you or I have a small problem here: the ability to do this at any scale that would be effective.  If you wanted to cover a large area or a large number of people, this would be quite an undertaking.  But if you are a government and control the local infrastructure of a municipality, you have quite an opportunity here.
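The rough math above can be sketched out.  This is a simplified model, using the figures reported by the researchers (40-meter range, 60-second minimum broadcast interval) and the assumption that a car is only "caught" if it stays inside the sensor's range for one full broadcast interval:

```python
# Back-of-the-envelope feasibility math for passively tracking a car
# by its TPMS broadcasts.
RANGE_M = 40.0      # reported read range of a TPMS broadcast, in meters
INTERVAL_S = 60.0   # shortest reported broadcast interval, in seconds

def max_trackable_speed_mph(range_m: float, interval_s: float) -> float:
    """Fastest speed at which a passing car is still guaranteed to spend
    one full broadcast interval within range of a stationary receiver."""
    meters_per_second = range_m / interval_s
    return meters_per_second * 2.23694  # convert m/s to mph

speed = max_trackable_speed_mph(RANGE_M, INTERVAL_S)
print(f"Guaranteed to catch a broadcast below ~{speed:.1f} MPH")
```

With a 90-second interval the threshold drops below 1 MPH, which is why choke points such as toll booths, parking garage gates, and stop signs are the natural places for such a receiver.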

Android and iPhone exploits revealed in past week

Over the weekend, a new Web-based jailbreak became available for iOS devices, offering users a simple method to open their devices to installation of unauthorized third-party applications.  An error in the processing of Compact Font Format (CFF) data within PDF files can be exploited to execute arbitrary code e.g. when a user visits a specially crafted web page using Mobile Safari.

This is applicable to any iOS 4 device (all new iPhone 4s, iPads, and any upgraded iPhone 3G or 3GS).  One of the main features of iOS 4 was its sandboxing approach to applications.  This exploit bypasses the sandboxing by exploiting the handling of a third-party format.  I have to say this doesn’t help Adobe’s popularity in Cupertino.

Time will tell if Apple will release a patch to iOS to resolve the issue or if Adobe will have to update their code.  For the time being, the best advice is to browse “safely” (if that’s really possible anymore) or just not browse at all.

The Android exploit has a completely different twist.  Spider Labs released a DVD at Defcon last week that provided a method to root the device.  Once the exploit is applied, the Android device acts as a bot for the attacker, who has full remote control over the device, providing access to all the user information on it.  What makes this more interesting is that Spider Labs is an ethical hacking team using this approach to incentivize the manufacturer to fix the issue more quickly.

“It wasn’t difficult to build,” said Nicholas Percoco, head of Spider Labs, who along with a colleague, released the tool at the Defcon hacker’s conference in Las Vegas on Friday.  Percoco said it took the team about two weeks to build the malicious software.

CNET reported that ten companies had data compromised.  The list included Pepsi, Coca-Cola, Apple, and Google, among others.  All information was solicited through one phone call to an employee of each company.

************** UPDATE Aug 5th **********************

CNET has posted that Apple has acknowledged the issue and already has a fix.  They did not mention when it would be released, but a software update is imminent.

************** UPDATE Aug 11th *********************

Apple has released iOS 4.0.2 for the iPhone and iPod touch, as well as iOS 3.2.2 for the iPad, to address this vulnerability.  Of course, a side effect of addressing this vulnerability is that it breaks the functionality of JailbreakMe 2.0.  Not that this should be a surprise.

The Broken Window Theory applied to Information Security


The Broken Window Theory has two popularly accepted applications.

The original was an economic parable, proposed by Frédéric Bastiat in 1850.  It examines the claim that even something bad that happens (e.g., the breaking of a window) has a positive effect on the economics of a society (another window must be made, and someone employed to install it); Bastiat’s own point was that this claim is a fallacy, since it ignores what the money would otherwise have been spent on.

There is a more contemporary theory, focused on criminology, originally proposed in the March 1982 issue of The Atlantic Monthly.  It states that if a few broken windows go unrepaired, there is a higher propensity for other windows to be broken, and from there, an even greater chance that other nefarious activities will become prevalent in that location.

I’m going to take a leap here and compare the second theory with Information Security and reducing risk.

According to the theory, there are three factors that support why the condition of the environment affects crime (and the opportunity for crime):

  • social norms and conformity
  • the presence or lack of monitoring, and
  • social signalling and signal crime

In this first part, I’ll use Information Security examples to explain these factors:

Whether intentionally or not, the policies we create and enforce will affect the social norms of our computing environments.  If you do not enforce proper patch management or secure coding, you create a social norm where not following those policies is implicitly acceptable.  Social norms tell us that people will do as the group does and will monitor others to make sure they act in the same manner.  If this holds true, then herein lies the answer to many departments’ problems: make sure you have a policy, that it is well enforced and communicated to your end users, and the end users will help expand your monitoring capabilities to ensure it is being followed.  Seems too simple, right?

The second factor is the presence of monitoring.  Because of the nature of our environments, it is not always possible for people to get feedback from those around them, and you cannot rely on (or even expect) norms being transmitted by others.  In this case, you turn to your tools.  Even though you may have created and communicated the appropriate policies, you now need technical controls in place to enforce them.

These technical controls are the third factor (signals) that indicate to the end users that they are (or are not) compliant with their activities.  So add accurate, timely, and visible monitoring to your list.

The other key takeaway from the Broken Windows theory is that addressing problems while they are small gives you the opportunity for easy, less expensive fixes.  A sound risk management methodology will tell you that addressing issues like patch management, policy violations, and secure coding practices early is less costly and less difficult than addressing them after they have been exploited and you are dealing with a breach or data loss.

Sadly, the early economic theory of broken windows would state that all these things are good: if a breach occurs, many people will be employed conducting the investigation and doing research.  I feel I can confidently say that the businesses we own or work for would not be satisfied with us following that theory.  It is far more sensible to accept the social/criminology theory and begin remediating many of our issues before they become larger problems.

Russian spies are just like your average end user?

Funny as this may sound, it seems to be the case with the recently arrested Russian spies.

This article from Network World points out some of the issues the users had and how those issues helped get them caught.

As an IT or Security Professional, how likely are these scenarios in your workplace:

  • A 27-character password was enforced.  So the password ended up written down on a Post-it.
  • Frustrated with trying to get a program to work, you turn to a complete stranger for help.  If that stranger happens to be an undercover FBI agent, handing him your laptop just made his day.
  • Waiting 2 months to get a new laptop configured, then being told it can be fixed in 6 months if it doesn’t work.  Then telling your co-worker (or co-spy), “they don’t understand what we go through over here.”  Sound familiar?
  • Users/spies turn to off-the-shelf programs so they don’t have to wait for their IT department to install anything.
  • Having all-new systems but not being able to run the necessary programs, because they crashed or timed out before the application could finish.
  • Users/spies set up peer-to-peer wireless networks (without encryption) so they could transfer files more easily.  That made it a lot easier to intercept those files in transit, too.

These seem so comical that it’s almost hard to believe they aren’t plot lines for Steve Carell’s next Get Smart movie.