Our take on social engineering

dnet 2019-04-04    

Like many other offensive IT security companies, we also offer social engineering assessments. And as in other areas of our portfolio, we try to steer clients toward ordering assessments that actually matter. This blog post summarizes what we have experienced and how we see things in this field. While many things work the same way around the globe, our starting point is Hungary, where many people in the local IT security scene think social engineering means walking into buildings dressed as a pizza delivery guy and calling targets on the phone.

We find The Art of Deception by Kevin Mitnick a great book, kind of a bible for anyone who’s into social engineering. This is especially so since it stands in stark contrast with the cons and social engineering you can see in movies and YouTube videos. Calling targets on the phone and getting into their premises looks cool, but it’s a nightmare if you weigh the risks against the potential rewards. If you read The Art of Deception carefully, you’ll see that most stories involve the attacker getting just as close to the target as necessary, and not an inch closer.

One great example is chapter 5 (“Let Me Help You”), where Mitnick presents the analysis of Craig’s con:

Craig avoided the risk of physically entering the building simply by having the fax sent to the receptionist, knowing she was likely to be helpful.

Another good one is Robert from chapter 13 (“Clever Cons”), who asks the lobby receptionist to cooperate:

Best of all, he never had to show up physically at the location of the fax machine.

He sums it up best in chapter 14 (“Industrial Espionage”), at the beginning of “The New Business Partner”:

Social engineers have a big advantage over con men and grifters, and the advantage is distance. A grifter can only cheat you by being in your presence, allowing you to give a good description of him afterward or even call the cops if you catch on to the ruse early enough.

Social engineers ordinarily avoid that risk like the plague. Sometimes, though, the risk is necessary, and justified by the potential reward.

As mentioned before, we try to nudge clients so that they spend their budget on assessments that matter instead of doing things that just sound cool. Of course, if someone insists on a specific task, we’ll perform it, so we have experience in that subset as well.

For instance, we’ve been tasked on numerous occasions with getting into the premises by violating the entry protocol. Sometimes it worked, sometimes it didn’t – especially when personnel had been trained based on previous years’ experience. However, if you look at the big picture, what kind of real-world attacker would try this? Just compare the risk of getting caught in two scenarios.

  1. Getting into the building by violating protocol: if you fail to fool security personnel, you get caught. Posing as a pizza delivery guy looks good on TV, not so much when they won’t let you in anyway. Try stealing physical goods or sitting down at workstations, and you’ll likely be caught.
  2. Sending an email to just a handful of employees. As soon as someone clicks on the wrong button, you can behave as if you were sitting at their workstation, or even do more. You can hide behind proxies and hacked jump hosts, so even if IT catches on, you’re probably safe.

On one occasion, we visited a busy HQ building just to capture some Wi-Fi packets passively (thus triggering no automated alarms), and merely sitting in the public lobby with a notebook for a few minutes got CCTV footage sent to the security team immediately. Then again, this was still a better experience for us than what happened to a lady carrying a thumb drive loaded with malware last week.

In our experience, email campaigns are still frighteningly successful. Given the right message, people open links and enter their corporate credentials. Given the right context, people open attachments and click through any warnings.

It’s just that many people find it hard to analyze such results after an assessment. If we send 3,000 emails and 1,000 people fall for them, some interpret that as a success, since only 33% could be attacked. This ignores the fundamental asymmetry of IT defense: the defenders must succeed every time, while attackers need to succeed only once. If I send an email to 6 random people in the same company, around 2 of them should fall for it, which is more than enough.
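The attacker’s odds behind those numbers can be put into a few lines – a minimal sketch, assuming the roughly 33% per-recipient rate from the campaign above holds and recipients act independently:

```python
# Minimal sketch of the attacker's odds, assuming the roughly 33%
# per-recipient rate seen above and independent recipients.
p = 1 / 3  # probability that a single recipient falls for the email
n = 6      # recipients targeted

expected_victims = n * p            # about 2 victims on average
p_at_least_one = 1 - (1 - p) ** n   # the attacker only needs one success

print(f"expected victims: {expected_victims:.1f}")         # 2.0
print(f"P(at least one victim): {p_at_least_one:.1%}")     # 91.2%
```

With just six emails, the chance of at least one victim is already above 90%, which is why per-campaign percentages understate the risk.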

And while the operators can claim that they got calls within X minutes thanks to the sheer number of emails,

  • they will probably act long after some victims have already given away their credentials or run malicious code on their workstations,
  • no sane attacker would have sent so many emails to a single target, and
  • most people process their inbox as a linear timeline from past to present, getting to the malicious mail well before reading any warnings sent after the fact – we’ve had recipients become victims days after the campaign: they were on vacation and fell for our trap upon returning to the office.

What surprised us at first was that an attacker doesn’t even need a polished message to win. In one case, several versions of the same email were sent,

  • some had all the bells and whistles of a regular benign campaign from any marketing/comms department,
  • some had no fancy graphics, contained spelling errors, and included no specific phrases that would’ve indicated a targeted message, and
  • some were a mixture of these two.

In some cases, the last one caught the largest number of victims, gathering more pwned accounts and machines than the polished version. In a somewhat related case, we had a client who noted after the attack that our fake corporate page had used their then-new corporate branding well before some of their own systems started using it.

Also, in some cases, victims even make the attacker’s life easier: we had a case where a director forwarded our malicious attachment to the whole department, asking them to run it as well. Although this was a rare exception, we also had a number of people actually replying to the email. Some of them realized the trick (either before or after the fact) and joked or spewed vulgarities. However, we got a number of honest replies as well, which could’ve been a great way to drill down and see what else we could’ve made them do, had it not been outside the scope of the assessment.

If anyone was skeptical about whether something as simple as email could serve as a major entry point, the ransomware campaigns of the last few years proved them wrong. Ransomware just hit the sweet spot where attackers were motivated and the results were loud – but that doesn’t mean it was the first or the last time organizations have been targeted this way.

For example, here in Hungary, a clever trick was used against people in the Ministry of Defense using OWA, as part of Operation Pawn Storm. Targets were sent an email with a link to a conference; since the link opened in a new tab, its scripts had access to the opener tab. While the victim browsed the new tab, the original webmail tab was redirected from mail.hm.gov.hu to mail.hm.qov.hu (note GOV vs QOV), which hosted a fake OWA login screen. (See page 12 of the Trend Micro report on Operation Pawn Storm for the above story.)

Another well-known example is RSA, where four employees were targeted; one of them retrieved the mail from the spam folder and opened the attachment called 2011 Recruitment Plan.xls, which opened the door for the attackers: a Flash object embedded in the spreadsheet exploited CVE-2011-0609. However, we don’t even need to go back that far in time; during the 2016 US presidential elections, a single phishing email led to the leak of the emails of John Podesta, who chaired Hillary Clinton’s campaign.

Since email goes deep into the bowels of corporate infrastructure, it’s also a great way to attack nodes that are poorly patched and maintained. Just a few years back, we even got a hit on our phishing server from an instance of Internet Explorer 5 running on Windows 98.

Of course, people don’t like to admit being fooled by a simple email. In one case, a sysadmin had second thoughts after submitting his credentials, so he started investigating. We intentionally don’t mask ourselves beyond a certain point – our company can be found just by querying WHOIS – so he found out pretty quickly what he was up against. After that, he called us and tried to talk his way out of being included in the statistics as a victim.

And employees on the lower half of the food chain are not the only ones who worry about such tests. In most cases, C-level executives are explicitly excluded from the target list – even though targeting such people is not only common but, in case of success, even more severe in impact. And this is not the only way to weaken such an assessment. Another example is the common practice (which applies to technical tests as well) of doing the assessment and then putting the report on the shelf – instead of acting on it, for instance by training employees for awareness. Of course, not all training is created equal: we’ve seen cases where the ratio of people falling for phishing emails didn’t improve significantly year over year. Also, training in itself is not necessarily enough; employees need to sift through numerous emails every day just to do their jobs, and many things are easier to catch by automated means. Think about it: is it easier for a human to inspect each domain to spot the difference between rn and m, or for the incoming SMTP server to tag all messages to clearly state their origin?
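As a rough illustration of why the automated side of that question is the easier one, here is a minimal sketch of a lookalike-domain check that normalizes common homoglyph substitutions (rn for m, q for g as in the qov trick above) before comparing against trusted domains. The trusted-domain list and the substitution table are illustrative assumptions, not a real product:

```python
# Minimal sketch: flag sender domains that match a trusted domain only
# after common homoglyph tricks are normalized away. The trusted set and
# the substitution table below are illustrative, not exhaustive.
TRUSTED = {"example.com", "mail.hm.gov.hu"}

# character sequences attackers substitute because they render alike
HOMOGLYPHS = [("rn", "m"), ("vv", "w"), ("0", "o"), ("1", "l"), ("q", "g")]

def normalize(domain: str) -> str:
    d = domain.lower()
    for fake, real in HOMOGLYPHS:
        d = d.replace(fake, real)
    return d

def looks_spoofed(domain: str) -> bool:
    # exact trusted domains are fine; domains that only match a trusted
    # one after normalization are the suspicious lookalikes
    return domain not in TRUSTED and normalize(domain) in TRUSTED

print(looks_spoofed("exarnple.com"))    # True  (rn -> m)
print(looks_spoofed("mail.hm.qov.hu"))  # True  (q -> g, the trick above)
print(looks_spoofed("example.com"))     # False (genuine)
```

A mail gateway running a check like this on every message never gets tired or distracted, which is exactly what the human eye cannot promise.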

This is also a good reason to prefer email first: since it can be mass-tested, the results apply to a wide and diverse set of people. In an ideal world, all employees would receive such an email, since it costs just as much time and money to send it to 1% of employees as to send it to all of them.

Testing physical security, on the other hand, is time-consuming, resulting in few test cases that are easily distorted by the law of small numbers. We had a bank engagement where we visited three randomly selected branch offices. We succeeded at persuading the employees in all three, but one office had a special layout that required the help of local security, who actually followed protocol, thus stopping our attack. Is this a 66% success rate? What if we had only gone to the single branch office where we failed? Or the other way around? And it took half a day with all the traveling between locations…
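To put a number on that uncertainty, a Wilson score interval – one standard way to bound a binomial proportion from few observations – over a 2-out-of-3 result is enormous. A minimal sketch, with our branch-visit numbers plugged in:

```python
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for a binomial proportion."""
    phat = successes / n
    denom = 1 + z * z / n
    center = (phat + z * z / (2 * n)) / denom
    half = (z / denom) * sqrt(phat * (1 - phat) / n + z * z / (4 * n * n))
    return center - half, center + half

# 2 successful branch visits out of 3, as in the engagement above
lo, hi = wilson_interval(2, 3)
print(f"observed 67% success, 95% CI: {lo:.0%} .. {hi:.0%}")
```

Two out of three could plausibly mean anything from roughly a 21% to a 94% success rate, so drawing conclusions from three visits is mostly noise.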

In conclusion, we’d like to remind people within the security departments of various companies that social engineering is much more than waltzing into an office building pretending to be various personas. An attacker won’t spend resources and risk getting caught by paying a physical visit when her objective can be fulfilled by a single email. Social engineering budgets can be spent on recreating fantasies based on movies, but in many cases this hardly results in a significant improvement in the overall security of the organization. Think about how an economically rational attacker would approach your company and which weak spots she would choose, and spend your resources on testing and improving those.

Of course, this requires some forethought and clear motives. If someone is motivated to get a clean report stating “we couldn’t get into the premises physically”, that’s not hard to achieve. However, if your aim is to improve the overall resilience of your IT infrastructure, physical entry is usually one of your least worries.