Automatically stop data breaches and security threats caused by employees on email. Powered by machine learning, Tessian detects anomalies in real-time, integrating seamlessly with your email environment within minutes and starting protection in a day. Tessian also gives you unparalleled visibility into human security risks, so you can remediate threats and ensure compliance.
Tessian's Threat Intelligence team analyzed 2,000,000 malicious emails to identify the tactics bad actors are leveraging in today's advanced spear phishing attacks.
Download the report now to learn more, including how to protect your organization.
Over a 12-month period, Tessian detected nearly 2 million malicious emails that slipped past legacy phishing solutions. Learn more about bad actors’ tactics to understand the risk and how to combat it.
Native tools do a good job protecting users against bulk phishing attacks and spam, but can’t detect more sophisticated spear phishing and social engineering attacks. Phishing awareness programs help, but still leave people as the last line of defense and – as we all know – to err is human.
That’s why, despite cybersecurity spending being at an all-time high, threats continue to land in employees’ inboxes, and, year-on-year, incidents are doubling and even tripling in frequency.
To help you understand the risk, we analyzed nearly 2 million emails flagged by Tessian Defender as malicious to identify the what, how, who, why, and when of today’s spear phishing landscape.
In ten out of twelve months, we saw an increase in the number of attacks, with the biggest spike (+45% QoQ) in Q3, immediately before and after Black Friday. There was also a somewhat surprising nose-dive just in time for Christmas and New Year's.
But still, 2 million emails slipped right past customers’ SEGs and native tools, leaving employees as the last line of defense against bad actors who make a sport out of staying a step ahead.
Even with training, is it fair to expect employees to spot every malicious email that lands in their inbox? What would be the cost of just one mistake?
Bad actors will research their target using OSINT, pretend to be a trusted person or brand, exploit times of uncertainty or transition, use language that pressures the target to act fast, and do everything they can to ensure the email doesn’t look like the phishing attacks we see in training sessions or simulations.
They look like the real deal – not like the Nigerian Prince scams of the 1990s.
And 2% of the time, these malicious emails won't just appear to have come from a trusted vendor or supplier's legitimate email address – they actually will have. This is called Account Takeover (ATO).
That means if targets are only on alert for emails from suspicious domains that are riddled with grammatical errors and contain dubious attachments… they’ll never spot the phish.
Bad actors don't discriminate by industry, but they do seem to have an affinity for Retail, Manufacturing, F&B, R&D, and Tech. Still, across all industries, Tessian flagged an average of 14 malicious emails per employee, per year.
Wondering why they don’t focus exclusively on the “big fish” (i.e. enterprise)? Because smaller companies – which generally have less money to spend on cybersecurity – are often easier to infiltrate. A compromised smaller company can then serve as a foothold for lateral movement, especially within large supply chains.
Take the 2020 SolarWinds hack, for example. After breaching the SolarWinds Orion system (a network management system that helps organizations manage their IT resources), nation-state hacking group Nobelium was able to gain access to the networks, systems, and data of thousands of SolarWinds customers, including government departments such as Homeland Security, State, Commerce, and Treasury. Affected private companies include FireEye, Microsoft, Intel, Cisco, and Deloitte.
Our analysis suggests that bad actors create phishing campaigns with one (or more) of the following end goals in mind: deploying malware, harvesting credentials, or diverting funds.
Our payload and keyword analysis corroborate this.
Of course, when we talk about the correlation between payloads and attack goals, there is certainly a grey area, and we won’t claim to know the specific intention of every email we’ve flagged.
A link could lead your employee to a perfectly safe website, which could direct them to another website that deploys malware. A link could also lead someone to a lookalike site that harvests their credentials, without the word “credentials” ever appearing in the body or subject of the email.
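To illustrate why lookalike domains are so effective, here is a minimal, hypothetical sketch that flags sender domains sitting within a small edit distance of domains an organization already trusts. Tessian’s actual detection is ML-based and far more sophisticated; the domain names and threshold below are invented for illustration only.

```python
# Hypothetical sketch: flag "lookalike" sender domains by edit distance
# to a list of trusted domains. This is NOT Tessian's method - just an
# illustration of why near-miss domains fool the human eye.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def is_lookalike(sender_domain: str, trusted_domains: list[str]) -> bool:
    """A domain close to - but not exactly - a trusted domain is suspicious."""
    for trusted in trusted_domains:
        d = edit_distance(sender_domain.lower(), trusted.lower())
        if 0 < d <= 2:  # near-miss: e.g. one swapped or doubled letter
            return True
    return False

# Invented example domains: "rn" visually mimics "m" in many fonts.
trusted = ["acme-corp.com", "suppliers-inc.com"]
print(is_lookalike("acrne-corp.com", trusted))  # near-miss -> flagged
print(is_lookalike("acme-corp.com", trusted))   # exact match -> not flagged
```

A human scanning an inbox at 5PM will rarely notice a one- or two-character difference, which is exactly what this kind of check makes explicit.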
Likewise, an email that doesn’t contain a payload and doesn’t contain the keyword “wire” could contain directions to update bank account details, resulting in diverted funds. An email without a payload could also be the first in a series of correspondence, designed purely to build rapport between the attacker and the target, before making a request days or weeks later.
We’re often told that bad actors borrow best practice from marketers. If that’s the case, most phishing attacks would land in employees’ inboxes around 10 AM on Wednesdays.
The highest volume of malicious emails is delivered between 2PM and 6PM, with very little day-to-day fluctuation (except over the weekend). This isn’t an accident. Since employees are more likely to make mistakes when they’re stressed, tired, and distracted, the second half of the work day is a bad actor’s best bet.
This is reinforced by the fact that employees are most likely to mark an email as malicious between 9AM and 1PM, before the afternoon slump. We then see a steady decline starting at 2PM, right when the bad guys are ramping up.