Request a Demo of Tessian Today.

Automatically stop data breaches and security threats caused by employees on email. Powered by machine learning, Tessian detects anomalies in real time, integrates seamlessly with your email environment within minutes, and starts protecting in a day. It gives you unparalleled visibility into human security risks, so you can remediate threats and ensure compliance.


Integrated Cloud Email Security, Insider Risks, Email DLP
11 Examples of Data Breaches Caused By Misdirected Emails
Wednesday, March 17th, 2021
While phishing, ransomware, and brute force attacks tend to make headlines, misdirected emails (emails sent to the wrong person) are actually a much bigger problem. In fact, in organizations with 1,000 employees, at least 800 emails are sent to the wrong person every year. That’s two a day. You can find more insights in The Psychology of Human Error and The State of Data Loss Prevention 2020.

Are you surprised? Most people are. That’s why we’ve rounded up this list of 11 real-world (recent) examples of data breaches caused by misdirected emails. And, if you skip down to the bottom, you’ll see how you can prevent misdirected emails (and breaches!) in your organization.

11 examples of data breaches caused by misdirected emails

1. University support service mass emails sensitive student information

University and college wellbeing services deal with sensitive personal information, including details of the health, beliefs, and disabilities of students and their families. Most privacy laws impose stricter obligations on organizations handling such sensitive personal information, and there are harsher penalties for losing control of such data.

So imagine how awful the Wellbeing Adviser at the University of Liverpool must have felt when they emailed an entire school’s worth of undergraduates with details about a student’s recent wellbeing appointment. The email revealed that the student had visited the Adviser earlier that day, that he had been experiencing ongoing personal difficulties, and that the Adviser had advised him to attend therapy.

A follow-up email urged all the recipients to delete the message “immediately” and appeared to blame the student for providing the wrong email address. One recipient of the email reportedly said: “How much harder are people going to find it actually going to get help when something so personal could wind up in the inbox of a few hundred people?”

2.
Trump White House emails Ukraine ‘talking points’ to Democrats

Remember in 2019, when then-President Donald Trump faced accusations of pressuring Ukraine into investigating corruption allegations against now-President Joe Biden? Once this story hit the press, the White House wrote an email, intended for Trump’s political allies, setting out some “talking points” to be used when answering questions about the incident (including blaming the “Deep State media”).

Unfortunately for the White House, they sent the email directly to political opponents in the Democratic Party. White House staff then attempted to “recall” the email. If you’ve ever tried recalling an email, you’ll know that it doesn’t normally work.

Recalling an email only works if the recipient is on the same Exchange server as you, and only if they haven’t read the email. Looking for information on this? Check out this article: You Sent an Email to the Wrong Person. Now What? Unsurprisingly, this was not the case for the Democrats who received the White House email, who subsequently leaked it on Twitter.

I would like to thank @WhiteHouse for sending me their talking points on how best to spin the disastrous Trump/Zelensky call in Trump’s favor. However, I will not be using their spin and will instead stick with the truth. But thanks though. — US Rep Brendan Boyle (@RepBrendanBoyle) September 25, 2019

3. Australia’s Department of Foreign Affairs and Trade leaked 1,000 citizens’ email addresses

On September 30, 2020, Australia’s Department of Foreign Affairs and Trade (DFAT) announced that the personal details of over 1,000 citizens were exposed after an employee failed to use BCC. So, who were the citizens? Australians who have been stuck in other countries, as inbound flights have been limited (even rationed) since the outbreak of COVID-19.

The plan was to increase entry quotas and start an emergency loans scheme for those in dire need.
Those who had their email addresses exposed were among the potential recipients of the loan. Immediately after the email was sent, employees at DFAT tried to recall it, and even requested that recipients delete the email from their IT system and “refrain from any further forwarding of the email to protect the privacy of the individuals concerned.”

4. Serco exposes contact tracers’ data in email error

In May 2020, an employee at Serco, a business services and outsourcing company, accidentally cc’d instead of bcc’ing almost 300 email addresses. Harmless, right? Unfortunately not.

The email addresses, which are considered personal data, belonged to newly recruited COVID-19 contact tracers. While a Serco spokesperson has apologized and announced that the company would review and update its processes, the incident nonetheless put confidentiality at risk and could leave the firm under investigation by the ICO.

5. Sonos accidentally exposes the email addresses of hundreds of customers in email blunder

In January 2020, 450+ email addresses were exposed after they were (similar to the example above) cc’d rather than bcc’d. Here’s what happened: a Sonos employee was replying to customers’ complaints. Instead of putting all the addresses in BCC, they were CC’d, meaning that every customer who received the email could see the personal email addresses of everyone else on the list. The incident was reported to the ICO and is subject to potential fines.
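Several of these incidents come down to the mechanical difference between Cc and Bcc: Cc addresses are written into a header that every recipient can read, while Bcc recipients are supplied only in the delivery envelope. As a minimal sketch (addresses are made up for illustration), here is how a mass email can be composed in Python so that recipients’ addresses never appear in the headers:

```python
from email.message import EmailMessage

def bulk_notice(sender, subject, body):
    """Compose a mass email that keeps recipients' addresses private.

    The hidden recipients never appear in the message headers. At send
    time they'd be passed as the envelope recipients, e.g.
    smtplib.SMTP(...).send_message(msg, to_addrs=hidden_recipients).
    Putting them in a "Cc" header instead is exactly the mistake in the
    Serco and Sonos incidents above.
    """
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = sender          # address the message to yourself
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

hidden_recipients = ["a@example.net", "b@example.org"]
msg = bulk_notice("support@example.com", "Service update", "Hello, ...")
# No recipient address leaks into the readable message:
print("a@example.net" in str(msg))  # False
```

The key point is that privacy comes from keeping addresses out of the message entirely, not from which header field they sit in; once an address is in Cc, every recipient’s mail client displays it.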
6. Gender identity clinic leaks patient email addresses

In September 2019, a gender identity clinic in London exposed the details of close to 2,000 people on its email list after an employee cc’d recipients instead of bcc’ing them. Two separate emails were sent, with about 900 people cc’d on each.

While email addresses on their own are considered personal information, it’s important to bear in mind the nature of the clinic. As one patient pointed out, “It could out someone, especially as this place treats people who are transgender.”

The incident was reported to the ICO, which is currently assessing the information provided. But a similar incident may offer a glimpse of what’s to come.

In 2016, the email addresses of 800 patients who attended HIV clinics were leaked because they were, again, cc’d instead of bcc’d. An NHS Trust was fined £180,000. Bear in mind, this fine was issued before the introduction of the GDPR.

7. University mistakenly emails 430 acceptance letters, blames “human error”

In January 2019, the University of South Florida St. Petersburg sent nearly 700 acceptance emails to applicants. The problem? Only 250 of those students had actually been accepted. The other 400+ hadn’t. While this isn’t considered a breach (because no personal data was exposed), it does go to show that fat-fingering an email can have a number of consequences.

In this case, the university’s reputation was damaged, hundreds of students were left confused and disappointed, and the employees responsible for the mistake likely suffered red-faced embarrassment on top of other, more formal ramifications. The investigation and remediation of the incident will also have taken up plenty of time and resources.

8. Union watchdog accidentally leaked secret emails from confidential whistleblower

In January 2019, an official at Australia’s Registered Organisations Commission (ROC) accidentally leaked confidential information, including the identity of a whistleblower. How?
The employee entered an incorrect character when sending an email. It was then forwarded to someone with the same last name, but a different first initial, as the intended recipient.

The next day, the ROC notified the whistleblower whose identity was compromised and disclosed the mistake to the Office of the Australian Information Commissioner as a potential privacy breach.

9. Major health system accidentally shares patient information due to third-party software for the second time this year

In May 2018, Dignity Health, a major health system headquartered in San Francisco that operates 39 hospitals and 400 care centers around the west coast, reported a breach that affected 55,947 patients to the U.S. Department of Health and Human Services.

So, how did it happen? Dignity says the problem originated from a sorting error in an email list that had been formatted by one of its vendors. The error resulted in Dignity sending emails to the wrong patients, with the wrong names. Because Dignity is a health system, these emails also often contained the patient’s doctor’s name. That means PII and protected health information (PHI) were exposed.

10. Inquiry reveals the identity of child sexual abuse victims

This 2017 email blunder earned an organization a £200,000 ($278,552) fine from the ICO. The penalty would have been even higher if the GDPR had been in force at the time. When you look at the detail of this incident, it’s easy to see why the ICO wanted to impose a more severe fine.

The Independent Inquiry into Child Sexual Abuse (IICSA) sent a Bcc email to 90 recipients, all of whom were involved in a public hearing about child abuse. Sending a Bcc means none of the recipients can see each other’s details. But the sender then sent a follow-up email to correct an error, using the “To” field by mistake.
The organization made things even worse by sending three follow-up emails asking recipients to delete the original message, one of which generated 39 subsequent “Reply all” emails in response. The error revealed the email addresses of all 90 recipients and 54 people’s full names.

But is simply revealing someone’s name that big of a deal? Actually, a person’s name can be very sensitive data, depending on the context. In this case, IICSA’s error revealed that each of these 54 people might have been victims of child sexual abuse.

11. Boris Johnson’s dad’s email blunder nearly causes diplomatic incident

Many of us know what it’s like to be embarrassed by our dad. Remember when he interrogated your first love interest? Or that moment your friends overheard him singing in the shower? Or when he accidentally emailed confidential information about the Chinese ambassador to the BBC?

OK, maybe not that last one. That happened to the father of U.K. Prime Minister Boris Johnson in February 2020. Johnson’s dad, Stanley Johnson, was emailing British officials following a meeting with Chinese ambassador Liu Xiaoming. He wrote that Liu was “concerned” about a lack of contact from the Prime Minister to the Chinese state regarding the coronavirus outbreak.

The Prime Minister’s dad inexplicably copied the BBC into his email, providing some lucky journalists with a free scoop about the state of U.K.-China relations. It appears the incident didn’t cause any big diplomatic issues, but we can imagine how much worse it could have been if Johnson had revealed more sensitive details of the meeting.
Prevent misdirected emails (and breaches) with Tessian Guardian

Regardless of your region or industry, protecting customer, client, and company information is essential. But to err is human. So how do you prevent misdirected emails? With machine learning.

Tessian Cloud Email Security intelligently prevents advanced email threats and protects against data loss, to strengthen email security and build smarter security cultures in modern enterprises.

Interested in learning more about how Tessian can help prevent accidental data loss and data exfiltration in your organization? You can read some of our customer stories here or book a demo.
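To make the idea concrete: one signal a machine-learning system can use is whether a recipient’s domain nearly matches, but isn’t, a domain the sender usually writes to. This is not Tessian’s actual algorithm (which combines many learned signals); it is a toy, hypothetical sketch of that single check, with made-up domains:

```python
import difflib

# Hypothetical history of domains this sender regularly emails.
FREQUENT_DOMAINS = {"acmeclient.com", "partnerfirm.co.uk"}

def looks_misdirected(recipient: str, cutoff: float = 0.85) -> bool:
    """Warn if the recipient's domain is a near-miss for a domain the
    sender usually writes to -- a likely typo, like the misspelled
    addresses behind many misdirected-email breaches."""
    domain = recipient.rsplit("@", 1)[-1].lower()
    if domain in FREQUENT_DOMAINS:
        return False  # a known correspondent, nothing unusual
    near = difflib.get_close_matches(domain, FREQUENT_DOMAINS,
                                     n=1, cutoff=cutoff)
    return bool(near)  # near-miss => probably misdirected

print(looks_misdirected("jo@acmeclient.com"))   # False: exact known domain
print(looks_misdirected("jo@acmeclents.com"))   # True: one-typo near-miss
```

A simple rule like this would generate false positives and miss anomalies a learned model catches (new-but-legitimate contacts, unusual recipient combinations), which is why the post argues for machine learning rather than static rules.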
Read Blog Post
Integrated Cloud Email Security
Email is the #1 Threat Vector. Here’s Why.
Thursday, March 11th, 2021
Billions of people use email every day; it’s the backbone of online collaboration, administration, and customer service. But businesses lose billions to email-based cyberattacks every year. Workers use email to exfiltrate sensitive company data. And simple human errors, like sending an email to the wrong person, can be highly problematic. The bottom line: for all its benefits, email communication is risky and, according to research, it’s the threat vector security leaders are most concerned about protecting. This article will look at the main threats associated with using email, and consider what you can do to mitigate them.

The scope of the problem

Before we look at some of the risks of email communication, let’s consider the scope of the problem. After all, around 4 billion people worldwide use email regularly. 2020 estimates showed that people send and receive around 306.4 billion emails per day, up 4% from 2019. The Digital Marketing Association suggests that 90% of people check their email at least once per day. Adobe data shows that email is the preferred contact method for marketing communications, by a long shot.

So, with alternative platforms like Slack and Teams rising in popularity, why does email remain the world’s main artery of communication? Email is platform-independent, simple, and accessible. No company would consider cutting email out of its communication channels. But for every “pro” involved in using email, there’s a “con.” If you’re relying on email communication, you need to mitigate the risks.

Security risks involved in using email

A major risk of email communication is security. Because it’s so flexible and easy to use, email carries a unique set of security risks.

Phishing attacks

Phishing is a type of online “social engineering” attack. The attacker impersonates somebody that their target is likely to trust and manipulates them into providing sensitive information, transferring money, or revealing login credentials.
Around 90% of phishing occurs via email. Here are the main types:

Spear phishing: The attacker targets a specific individual (instead of sending bulk phishing emails indiscriminately).
Whaling: The attacker targets a CEO or other executive-level employee.
Business Email Compromise (BEC): A phishing attack in which the attacker appears to be using a legitimate corporate email address.
CEO fraud: The attacker impersonates a company’s CEO and targets a junior employee.
Wire transfer phishing: The attacker persuades a company employee to transfer money to a fraudulent bank account.
Credential phishing: The attacker steals login details, such as usernames or passwords.

While most people today are attuned to the problem of phishing, the problem is only getting worse. Don’t believe us? Check out these 50+ must-know phishing statistics. That means phishing protection is an essential part of using email. Looking for more information on inbound email protection? Click here.

Insider threats

As well as inbound email threats, like phishing, you must also consider the threats that can arise from inside your business. Tessian survey data suggests that 45% of employees download, save, send, or otherwise exfiltrate work-related documents before leaving their job. The most competitive industries, like tech, management consultancy, and finance, see the highest rates of this phenomenon.

Email is a quick and convenient way to send large amounts of data to external contacts, and can be a pipeline for disgruntled or corrupt employees to siphon off company assets.
If you want to learn more about insider threats, including real-world examples, check out these articles:

What is an Insider Threat?
Insider Threat Types and Real-World Examples
Insider Threat Statistics You Should Know
Insider Threat Indicators: 11 Ways to Recognize an Insider Threat

Remote working

Phishing is a booming criminal industry, and there’s evidence that the new patterns of remote working are making phishing more common than ever. Tessian research shows that 65% of US and UK employees received a phishing email when working remotely in 2020 due to the COVID-19 pandemic, and 82% of IT leaders think their company is at greater risk of phishing attacks when employees are working from home. If your company operates a hybrid or remote working model, email security is even more crucial.

Human error on email

Innocent mistakes can be just as harmful as cyberattacks. In fact, 88% of data breaches are caused by human error.

Misdirected emails

Research shows that most people have sent at least one email to the wrong person, with nearly one-fifth admitting to sending an email to someone outside of their organization. Our platform data also shows that there are, on average, 800 misdirected emails per year in companies with more than 1,000 employees. That’s two a day.

Sending an email to the wrong recipient is so common, you might not think it’s a big deal. But data from the UK’s Information Commissioner’s Office (ICO) consistently shows that misdirected emails are the number one cause of reportable data breaches. Misspelling, autocorrect, reply-all: these are all reasons you might send an email to the wrong recipient. It’s a serious risk of email communication, but you can prevent it.

Misattached files

Along with misdirected emails, “misattached files” are a major cause of data loss. New data shows some very worrying trends related to people sending emails with incorrect attachments.
First, here’s what’s inside the documents people are sending in error:

42% contained company research or data
39% contained security information, such as login credentials
38% contained financial information and client information
36% contained employee data

The survey also shows that, as a result of sending misattached files, one-third lost a customer or client, and 31% faced legal action.

Email communication: how to mitigate the risks

The risks we’ve described all depend on human vulnerabilities. Cyberattackers prey on people’s trust and deference to authority, and anyone can make a mistake when sending an email. That’s why email security is a must. Looking for help choosing a solution? We’ve put together this handy guide: 9 Questions That Will Help You Choose the Right Email Security Solution. If you want more tips, how-to guides, and checklists related to email security specifically and cybersecurity more broadly, sign up for our newsletter!

While you’re here… Tessian software mitigates all types of risks associated with email communication:

Tessian Defender: Automatically prevents spear phishing, account takeover, business email compromise, and other targeted email attacks.
Tessian Enforcer: Automatically prevents data exfiltration over email.
Tessian Guardian: Automatically prevents accidental data loss caused by misdirected emails and misattached files.
Read Blog Post
Integrated Cloud Email Security
5 Cybersecurity Stats You Didn’t Know (But Should)
by Tessian Monday, March 8th, 2021
When it comes to cybersecurity, specifically careers in cybersecurity, there are a few things (most) people know. There’s a skills gap, with 3.12 million unfilled positions. There’s also a gender gap, with a workforce that’s almost twice as likely to be male.

But we have good news. We surveyed 200 women working in cybersecurity and 1,000 recent grads (18-25 years old) for our latest research report, Opportunity in Cybersecurity Report 2021. The skills and gender gaps both seem to be closing, and women working in the field are happier than ever, despite a tumultuous year.

Here are five cybersecurity stats you didn’t know (but should). P.S. There are even more stats in the full report, and plenty of first-hand insights from women currently working in the field and recent grads considering a career in cybersecurity.
1. 94% of cybersecurity teams hired in 2020

As we all know, COVID-19 has had a profound impact on unemployment rates. But, as the global job market has contracted, cybersecurity appears to have expanded. According to our research, a whopping 94% of cybersecurity teams hired in 2020. Better still, this hiring trend isn’t isolated; it’s consistent across industries, from healthcare to finance. Want to know which industries were the most likely to hire in 2020? Download the full report.

2. Nearly half of women say COVID-19 POSITIVELY affected their career
This is one figure that we’re especially proud to report: 49% of women say COVID-19 positively affected their career in cybersecurity. In the midst of a global recession, this is truly incredible. Is it increased investment in IT that’s driving this contentment? The flexibility of working from home? An overwhelming sense of job security? We asked female cybersecurity professionals, and they answered. See what they had to say.

3. 76% of 18-25 year olds say cybersecurity is “interesting”

Last year, we asked women working in cybersecurity why others might not consider a job in the field. 42% said it’s because the industry isn’t considered “cool” or “exciting”. This year, we went directly to the source and asked recent grads (18-25 years old), and our data tells a different story: 76% of them said that cybersecurity is interesting. This is encouraging, especially since…

4. ⅓ of recent grads would consider a job in cybersecurity

While we don’t have any data to compare and contrast this number to, we feel confident saying that interest in the field is growing. Perhaps fueled by the fact that it is, actually, interesting? 31% of recent grads say they would consider a job in cybersecurity. But men are almost twice as likely as women to float the idea.

Want to know why? We pulled together dozens of open-ended responses from our survey respondents. Click here to see what they said.

5. There’s $43.1 billion up for grabs…
Today, the total value of the cybersecurity industry in the US is $107.7 billion. But, if the gender gap were closed, and the number of women working in the field equaled the number of men, the total value would jump to $138.1 billion. And, if women and men earned equal salaries, it’d increase even more.  The total (potential) value of the industry? $150.8 billion.
Read Blog Post
Integrated Cloud Email Security
7 Things We Learned at Tessian Human Layer Security Summit
by Tessian Tuesday, March 2nd, 2021
That’s a wrap! Thanks to our incredible line-up of speakers and panelists, the first Human Layer Security Summit of 2021 was jam-packed with insights and advice that will help you level up your security strategy, connect with your employees, and thrive in your role. Looking for a recap? We’ve rounded up the top seven things we learned.

1. CISOs can’t succeed without building cross-functional relationships

Today, security leaders are responsible for communicating risk, enabling individuals and teams, and influencing change at all levels of the organization. That’s easier said than done, though… especially when research shows less than 50% of employees (including executives) can identify their CISO.

The key is building relationships with the right people. But how? Patricia Patton, Human Capital Strategist and Executive Coach, Annick O’Brien, Data Protection Officer and Cyber Risk Officer, and Gaynor Rich, Global Director of Cybersecurity Strategy & Transformation at Unilever, tackled this topic head-on and introduced a new framework for security leaders to use: Relationship 15.
Find out more by watching the full session below or check out this blog to download a template for the Relationship 15 Framework.

Further reading:

Relationship 15: A Framework to Help Security Leaders Influence Change
CEO’s Guide to Data Protection and Compliance
16 Tips From Security Leaders: How to Get Buy-In For Cybersecurity
How to Communicate Cybersecurity ROI to Your CEO

2. Securing your own organization isn’t enough. You have to consider your supply chain’s attack surface and risk profile, too

We often talk about how cybersecurity is a team sport. And it is. But today your “team” needs to extend beyond your own network. Why? Because more and more often, bad actors are gaining access to the email accounts of trusted senders (suppliers, customers, and other third parties) to breach a target company in account takeover (ATO) attacks. The problem is, you’re only as strong as the weakest (cybersecurity) link in your supply chain, and these sophisticated attacks slip right past Secure Email Gateways (SEGs), legacy tools, and rule-based solutions.

Marie Measures, CTO at Sanne Group, and Joe Hancock, Head of Cyber at Mishcon de Reya, explain how firms in both the legal sector and financial services are preventing these threats by consulting enterprise risk management frameworks, partnering with customers, and leveraging technology.

Further reading:

What is Account Takeover?
How to Defend Against Account Takeover

3. If you want to understand and reduce risk, you need data (and smart tech)

Throughout the Human Layer Security Summit, one word was repeated over, and over, and over again: visibility. It makes sense. Clear visibility of threats is the first step in effectively reducing risk. But, because so many security solutions are black boxes that make investigation, remediation, and reporting admin-intensive, this can be a real challenge. We have a solution, though: Tessian Human Layer Risk Hub. This game-changing product (coming soon!)
enables security and risk management leaders to deeply understand their organization’s security posture by providing granular visibility and reporting into individual user risk levels. How? Each user is assigned a risk score based on dozens of factors and risk drivers, including email behavior, training track record, and access to sensitive information. This clearly shows administrators who needs help (on an individual level and a team level).

The tool also intelligently recommends actions to take within and outside the Tessian portal to mitigate risk. Finally, with industry benchmarking and dashboards that show how risk changes over time, you’ll be able to easily track and report progress. Want to learn more about Tessian Human Layer Risk Hub? Sign up for our newsletter to get an alert on launch day or book a demo.

Further reading:

Ultimate Guide to Human Layer Security
Worst Email Mistakes at Work (And How to Fix Them)

4. Rule-based solutions aren’t enough to prevent data exfiltration
If you’re interested in learning more about Human Layer Security, this is the session for you. David Aird, IT Director at DAC Beachcroft, and Elsa Ferreira, CISO at Evercore, take a deep dive into why people make mistakes, what the consequences of those mistakes are, and how they, as security leaders, can support their employees while protecting the organization. Spoiler alert: blunt rules, blocking by default, and one-and-done training sessions aren’t enough.

To learn how they’re using Tessian to automatically prevent data exfiltration and reinforce training and policies, and to hear what prompted Elsa to say, “They say security is a thankless job. But Tessian was the first security platform that we deployed across the organization where I personally received ‘thank you’s’ from employees…”, watch the full session.

Further reading:

Research Report: Why DLP Has Failed and What the Future Looks Like
12 Examples of Data Exfiltration

5. When it comes to security awareness training, one size doesn’t fit all

Security awareness training is an essential part of every cybersecurity strategy. But, when it comes to phishing prevention, are traditional simulation techniques effective? According to Joe Mancini, VP of Enterprise Risk at BankProv, and Ian Schneller, CISO at RealPage, they’re important… but not good enough on their own. Their advice:

Find ways to make training more engaging and tailored to your business initiatives and employees’ individual risk levels
Focus on education and awareness versus “catching” people
Make sure training is continuously reinforced (Tessian in-the-moment warnings can help with that)
Don’t just consider who clicks; pay attention to who reports the phish, too
Consider what happens if an employee fails a phishing test once, twice, or three times

Want more tips? Watch the full session.

Further reading:

Why The Threat of Phishing Can’t be Trained Away
Why Security Awareness Training is Dead
Phishing Statistics (Updated 2021)

6.
The future will be powered by AI

Nina Schick, deepfakes expert, Dan Raywood, former deputy editor at Infosec Magazine, and Samy Kamkar, privacy and security researcher and hacker, went back and forth, discussing the biggest moments in security over the last year, what’s top of mind today, and what we should prepare for in the next 5-10 years. Insider threats, state-sponsored threats, and human error made everyone’s lists… and so did AI.
Watch the full session to hear more expert insights.

Further reading:

2021 Cybersecurity Predictions
21 Cybersecurity Events to Attend in 2021

7. Hackers can – and do – use social media and OOO messages to help them craft targeted social engineering attacks against organizations

Spear phishing, Business Email Compromise (BEC), and other forms of social engineering attacks are top of mind for security leaders. And, while most organizations have a defense strategy in place, including training, policies, and technology, there’s one vulnerability most of us aren’t accounting for: our digital footprints. Every photo we post, status we update, person we tag, and place we check in to reveals valuable information about our personal and professional lives. With this information, hackers are able to craft more targeted, more believable, and, most importantly, more effective social engineering attacks.

So, what can you do to level up your defenses? Jenny Radcliffe, Host of The Human Factor, and James McQuiggan, CISSP, Security Awareness Advocate at KnowBe4, share personal anecdotes and actionable advice in the first session of the Human Layer Security Summit. Watch it now.

Further reading:

New Research: How to Hack a Human
6 Real-World Social Engineering Examples

Want to join us next time? Subscribe to our blog below to be the first to hear about events, product updates, and new research.
Read Blog Post
Integrated Cloud Email Security, Advanced Email Threats
Romance Fraud Scams Are On The Rise
by Tessian Thursday, February 11th, 2021
Cybercriminals are exploiting “lockdown loneliness” for financial gain, according to various reports this week, which reveal that the number of incidents of romance fraud and romance scams increased in 2020.    UK Finance, for example, reported that bank transfer fraud related to romance scams rose by 20% in 2020 compared to 2019, while Action Fraud revealed that £68m was lost by people who had fallen victim to romance fraud last year – an increase on the year before.   Why? Because people have become more reliant on online dating and dating apps to connect with others amid social distancing restrictions put in place for the Covid-19 pandemic.
With more people talking over the internet, there has been greater opportunity for cybercriminals to trick people online. Adopting a fake identity and posing as a romantic interest, scammers play on people’s emotions and build trust with their targets over time, before asking them to send money (perhaps for medical care), provide access to bank accounts or share personal information that could be used to later commit identity fraud. Cybercriminals will play the long-game; they have nothing but time on their hands.   A significant percentage of people have been affected by these romance scams. In a recent survey conducted by Tessian, one in five US and UK citizens has been a victim of romance fraud, with men and women being targeted equally.
Interestingly, people aged between 25-34 years old were the most likely to be affected by romance scams. Tessian data shows that of the respondents who said they had been a victim of romance fraud, 45% were aged between 25-34 versus just 4% of respondents who were aged over 55 years old. This may be because romance fraud victims are most commonly targeted on social media platforms like Facebook or Instagram, with a quarter of respondents (25%) saying they’d been successfully scammed on these channels. This was closely followed by email (23%) while one in five people said they’d been targeted on mobile dating apps, and 16% said they’d been scammed via online dating websites. This behavior is quite typical, say experts. Often romance fraud will start on dating apps or official dating websites but scammers will move to social media, email or text in order to reduce the trail of evidence.
How to avoid falling for a romance scam It’s important to remember that most dating apps and websites are completely safe. However, as social distancing restrictions remain in place for many regions, people should consider how they could be targeted by social engineering attacks and phishing scams at this time.   We advise people to question any requests for personal or financial information from individuals they do not know or have not met in person, and to verify the identity of someone they’re speaking to via a video call. We also recommend the following:   Never send money or a gift online to someone who you haven’t met in person. Be suspicious of requests from someone you’ve met on the internet. Scammers will often ask for money via wire transfers or reload cards because they’re difficult to reverse. Be wary of any email or DM you receive from someone you don’t know. Never click on a link or download an attachment from an unusual email address.  Keep social media profiles and posts private. Don’t accept friend requests or DMs from people you don’t know personally.    The FBI and Action Fraud have also provided citizens with useful advice on how to avoid falling for a romance scam and guidance for anyone who thinks they may have already been targeted by a scammer.    And if you want to learn more about social engineering attacks, you can read Tessian’s research How to Hack a Human. 
Read Blog Post
Integrated Cloud Email Security, Email DLP
Industry-First Product: Tessian Now Prevents Misattached Files on Email
by Harry Wetherald Thursday, February 11th, 2021
Misdirected emails – emails sent to the wrong person – are the number one security incident reported to the Information Commissioner’s Office. And, according to Tessian platform data, an average of 800 misdirected emails are sent every year in organizations with over 1,000 employees.  An unsolved problem We solved this years ago with Tessian Guardian, our solution for accidental data loss. But sending an email to the wrong person is just one part of the problem. What about sending the wrong attachment? After all, our data shows that 1 in 5 external emails contain an attachment and new Tessian research reveals that nearly half (48%) of employees have attached the wrong file to an email. We call these “misattached files” and we’re happy to announce a new, industry-first feature that prevents them from being sent.  The consequences of attaching the wrong file The consequences of a misattached file depend on what information is contained in the attachments.  According to Tessian’s survey results, 42% of documents sent in error contained company research and data. More worryingly, nearly two-fifths (39%) contained security information like passwords and passcodes, and another 38% contained financial information and client information.  36% of mistakenly attached documents contained employee data.  Any one of the above mistakes could result in lost customer data and IP, reputational damage, fines for non-compliance, and customer churn. In fact, one-third of respondents said their company lost a customer or client following this case of human error, and a further 31% said their company faced legal action.  Until now, there weren’t any email security tools that could consistently identify when wrong files were being shared. This meant attachment mistakes went undetected…until there were serious consequences.  How does Tessian detect misattached files? 
The latest upgrade to Tessian Guardian leverages historical learning to understand whether an employee is attaching the correct file or not. When an email is being sent, Guardian’s machine learning (ML) algorithm uses deep content inspection, natural language processing (NLP), and heuristics to detect attachment anomalies such as: Counterparty anomalies: The attachment is related to a company that isn’t typically discussed with the recipients. For example, attaching the wrong invoice. Name anomalies: The attachment is related to an individual who isn’t typically  discussed with the recipients. For example, attaching the wrong individual’s legal case files. Context anomalies: The attachment looks unusual based on the email context. For example, attaching financial-model.xlsx to an email about a “dinner reservation.” File type anomalies: The attachment file type hasn’t previously been shared with the receiving organization. For example, sending an .xlsx file to a press agency.
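To make the anomaly categories concrete, here is a deliberately simplified sketch of two of the checks (file-type and context anomalies) in Python. This is an illustrative toy, not Tessian's algorithm: the real product uses machine learning, deep content inspection, and NLP over historical email data, and every name below is hypothetical.

```python
def attachment_anomalies(attachment_name, attachment_ext, email_subject, prior_file_types):
    """Return a list of anomaly flags for an outgoing attachment.

    prior_file_types: extensions previously shared with this recipient's organization.
    """
    flags = []

    # File-type anomaly: this extension has never been sent to the recipient before.
    if attachment_ext not in prior_file_types:
        flags.append("file_type_anomaly")

    # Context anomaly: no word from the attachment name appears in the subject line.
    name_words = set(attachment_name.lower().replace("-", " ").replace("_", " ").split())
    subject_words = set(email_subject.lower().split())
    if not name_words & subject_words:
        flags.append("context_anomaly")

    return flags

# The example from the text: financial-model.xlsx attached to a dinner-reservation email,
# sent to a recipient who has only ever received PDFs.
print(attachment_anomalies("financial-model", ".xlsx", "Dinner reservation", {".pdf"}))
# → ['file_type_anomaly', 'context_anomaly']
```

A production system would score these signals probabilistically rather than with hard rules, which is what keeps flag rates low.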
If a misattached file is detected, the sender is immediately alerted to the error before the email is sent. Best of all, the warnings are helpful, not annoying and flag rates are low. This means employees can do their jobs without security getting in the way.  Want to learn more about how Tessian detects attachment anomalies before they’re sent? Download the data sheet.
Benefits for Tessian customers Tessian is the only solution in the market that can solve the problem of misattached files, giving customers complete protection from accidental data loss on email.  In addition to preventing human error and subsequent breaches, Tessian Guardian has several features that help ease the burden of compliance on thinly-stretched security teams and give key stakeholders peace of mind. These include: Automated protection: Tessian Guardian automatically detects and prevents misattached files. No rules or manual investigation required.   Flexible configuration options: With this new feature, customers will be able to configure Guardian’s algorithm to enable and/or disable specific use-cases. This allows administrators to balance user experience with the level of protection appropriate to their risk appetite. Data-rich dashboards: For the first time, customers will have visibility of how many misattached files are being sent in their organization and by whom. This demonstrates clear ROI and makes auditing and reporting easy. 
Learn more about Tessian Interested in learning more about Tessian Guardian’s new features? Current Tessian customers can get in touch with your Customer Success Manager. Not yet a Tessian customer? Learn more about our technology, explore our customer stories, or book a demo now.
Read Blog Post
Integrated Cloud Email Security
Check out the Speaker Line-Up for Tessian Human Layer Security Summit!
by Tessian Friday, February 5th, 2021
On March 3, Tessian is hosting the first Human Layer Security Summit of 2021. And, after a hugely successful series of summits in 2020,  we’re (once again) putting together an agenda that’ll help IT, compliance, legal, and business leaders overcome security challenges of today and tomorrow.
What’s on the agenda for Human Layer Security Summit? Panel discussions, fireside chats, and presentations will be focused on solving three key problems: Staying ahead of hackers to prevent advanced email threats like spear phishing, account takeover (ATO), and CEO fraud Reducing risk over time by building a strong security culture Building future-proof security strategies that engage everyone, from employees to the board So, who will be sharing their expertise to help you overcome these problems? 20+ speakers and partners. Some of the best and the brightest in the field.  If you want to learn more about what to expect, you can watch sessions from previous summits on-demand here.  Who’s speaking at Human Layer Security Summit? While we don’t want to give all the surprises away just yet, we will share a sneak peek at 11 speakers. Make sure to follow us on LinkedIn and Twitter and subscribe to our newsletter for the latest updates, including detailed information about each of the nine sessions. Elsa Ferriera, CISO at Evercore: For nearly 10 years, Elsa has managed risks, audited business processes, and maintained security at Evercore, one of the most respected investment banking firms in the world.  Gaynor Rich, Global Director of Cybersecurity Strategy at Unilever: Well-known for her expertise in cybersecurity, data protection, and risk management, Gaynor brings over 20 years of experience to the summit, the last six of which have been spent at one of the most universally recognized brands: Unilever.   Samy Kamkar, Renowned Ethical Hacker: As a teenager, Samy released one of the fastest-spreading computer viruses of all-time. Now, he’s a compassionate advocate for young hackers, whistleblower, and privacy and security researcher.  
Marie Measures, CTO at Sanne Group: With over two decades of experience in the field, Marie has headed up information and technology at Capital One, Coventry Building Society, and now Sanne Group, the leading provider of alternative asset and corporate services.   Joe Mancini, SVP, Enterprise Risk at BankProv: Joe is the Senior Vice President of Enterprise Risk at BankProv, an innovative commercial bank headquartered in Amesbury, MA. Joe has implemented a forward-thinking, business-enabling risk management strategy at BankProv which allows the fast-paced organization to safely expand its products and services to better suit its growing client base. Prior to his role at BankProv, he spent several years as the CISO at Radius Bank, and ten years at East Boston Savings Bank in various risk-related roles. Joe is an expert in emerging technologies such as digital currency and blockchain, along with data security, and risk and compliance requirements in a digital world.  David Aird, IT Director at DAC Beachcroft: Having held the position of IT Director at DAC Beachcroft – one of the top 20 UK law firms – for nearly eight years, David has led the way to be named Legal Technology Team of the Year in 2019 and received awards in both 2017 and 2019 for Excellence in IT Security.  Dan Raywood, Former Security Analyst and Cybersecurity Journalist: Dan – the former Deputy Editor of Infosecurity Magazine and former analyst for 451 Research – is bringing decades of experience to the summit.  Jenny Radcliffe, “The People Hacker”: Jenny is a world-renowned Social Engineer, penetration tester, speaker, and the host of the Human Factor Security podcast. Patricia Patton, Executive Coach: The former Global Head of Professional Development at Barclays and Executive Coach at LinkedIn, Patricia’s expertise will help security leaders forge better relationships with people and teams and teach attendees how to lead and influence. 
Nina Schick, Deep Fake Expert: Nina is an author, broadcaster, and advisor who specializes in AI and deepfakes. Over the last decade, she’s worked with Joe Biden, President of the United States, and has contributed to Bloomberg, CNN, TIME, and the BBC. Annick O’Brien, Data Protection Officer and Cyber Risk Officer: As an international compliance lawyer, certified Compliance Officer (ACCOI), member of the IAPP, and registered DPO, Annick specializes in privacy, GDPR program management, and training awareness projects.  Don’t miss out. Register for the Human Layer Security Summit now. It’s online, it’s free, and – for anyone who can’t make it on the day – you’ll be able to access all the sessions on-demand.
A word about our sponsors We’re thrilled to share a list of sponsors who are helping make this event the best it can be.  Digital Shadows, Detectify, The SASIG, Mishcon de Reya, HackerOne, AusCERT, and more.  Stay tuned for more announcements and resources leading up to the event.
Read Blog Post
Integrated Cloud Email Security
The 7 Deadly Sins of SAT
Tuesday, February 2nd, 2021
Security Awareness Training (SAT) just isn’t working: for companies, for employees, for anybody.  By 2022, 60% of large organizations will have comprehensive SAT programs (source: Gartner Magic Quadrant for SAT 2019), with global spending on security awareness training for employees predicted to reach $10 billion by 2027. While this adoption and market size seems impressive, SAT in its current form is fundamentally broken and needs a rethink. Fast.  There are 7 fundamental problems with SAT today: 1. It’s a tick box SAT is seen as a “quick win” when it comes to security – a tick box item that companies can do in order to tell their shareholders, regulators and customers that they’re taking security seriously. Often the evidence of these initiatives being conducted is much more important than the effectiveness of them. 2. It’s boring and forgettable Too many SAT programs are delivered once or twice a year in unmemorable sessions. However we dress it up, SAT just isn’t engaging. The training sessions are too long, videos are cringeworthy, and the experience is delivered through clunky interfaces reminiscent of CD-Rom multimedia from the 90s. What’s more, after just one day people forget more than 70% of what was taught in training, while 1 in 5 employees don’t even show up for SAT sessions. 3. It’s one-size-fits-all We give the same training content to everyone, regardless of their seniority, tenure, location, department etc. This is a mistake. Every employee has different security characteristics (strengths, weaknesses, access to data and systems) so why do we insist on giving the same material to everybody to focus on? 4. It’s phishing-centric Phishing is a huge risk when it comes to Human Layer Security, but it’s by no means the only one. 
So many SAT programs are overly focused on the threat of phishing and completely ignore other risks caused by human error, like sending emails and attachments to the wrong people or sending highly confidential information to personal email accounts. Learn more about the pros and cons of phishing awareness training.  5. It’s one-off Too many SAT programs are delivered once or twice a year in lengthy sessions. This makes it really hard for employees to remember the training they were given (when they completed it five months ago), and the sessions themselves have to cram in too much content to be memorable. 
6. It’s expensive So often companies only look at the license cost of a SAT program to determine costs—this is a grave mistake. SAT is one of the most expensive parts of an organization’s security program, because the total cost of ownership includes not just the license costs, but also the total cost of all employee time spent going through it, not to mention the opportunity cost of them doing something else with that time. 
7. It’s disconnected from other systems SAT platforms are generally standalone products, and they don’t talk to other parts of the security stack. This means that organizations aren’t leveraging the intelligence from these platforms to drive better outcomes in their security practice (preventing future breaches), nor are they using the intelligence to improve and iterate on the overall security culture of the company.  The solution? SAT 2.0 So, should we ditch our SAT initiative altogether? Absolutely not! People are now the gatekeepers to the most sensitive systems and data in the enterprise and providing security awareness and training to them is a crucial pillar of any cybersecurity initiative. It is, however, time for a new approach—one that’s automated, in-the-moment, and long-lasting. Read more about Tessian’s approach to SAT 2.0 here.
Read Blog Post
Integrated Cloud Email Security
SAT is Dead. Long Live SAT.
by Tim Sadler Tuesday, February 2nd, 2021
Security Awareness Training (SAT) just isn’t working: for companies, for employees, for anybody.  The average human makes 35,000 decisions every single day. On a weekday, the majority of these decisions are made at work; decisions around things like data sharing, clicking a link in an email, or entering our password credentials into a website. Employees have so much power at their fingertips, and if any one of these 35,000 decisions is, in fact, a bad decision — like somebody breaking the rules, making a mistake, or being tricked — it can lead to serious security incidents for a business.  The way we tackle this today? With SAT. By 2022, 60% of large organizations will have comprehensive SAT programs (source: Gartner Magic Quadrant for SAT 2019), with global spending on security awareness training for employees predicted to reach $10 billion by 2027. While this adoption and market size seem impressive, SAT in its current form is fundamentally broken and needs a rethink. Fast.  As Tessian’s customer Mark Lodgson put it, “there are three fundamental problems with any awareness campaign. First, it’s often irrelevant to the user. The second, that training is often boring. The third, it takes a big chunk of money out of the business.” 
The 3 big problems with security awareness training There are three fundamental problems with SAT today: SAT is a tick-box exercise SAT is seen as a “quick win” when it comes to security – a box ticking item that companies can do in order to tell their shareholders, regulators and customers that they’re taking security seriously. Often the evidence of these initiatives being conducted is much more important than the effectiveness of them.  Too many SAT programs are delivered once or twice a year in lengthy sessions. This makes it really hard for employees to remember the training they were given (when they completed it five months ago), and the sessions themselves have to cram in too much content to be memorable.  SAT is one-size-fits-all and boring We give the same training content to everyone, regardless of their seniority, tenure, location, department etc. This is a mistake. Every employee has different security characteristics (strengths, weaknesses, access to data and systems) so why do we insist on giving the same material to everybody to focus on? Also, however we dress it up, SAT just isn’t engaging. The training sessions are too long, videos are cringeworthy and the experience is delivered through clunky interfaces reminiscent of CD-Rom multimedia from the 90s. What’s more, after just one day people forget more than 70% of what was taught in training, while 1 in 5 employees don’t even show up for SAT sessions. (More on the pros and cons of phishing awareness training here.) SAT is expensive So often companies only look at the license cost of a SAT program to determine costs—this is a grave mistake. SAT is one of the most expensive parts of an organization’s security program, because the total cost of ownership includes not just the license costs, but also the total cost of all employee time spent going through it, not to mention the opportunity cost of them doing something else with that time.
Enter, security awareness training 2.0 So, should we ditch our SAT initiative altogether? Absolutely not! People are now the gatekeepers to the most sensitive systems and data in the enterprise and providing security awareness and training to them is a crucial pillar of any cybersecurity initiative. It is, however, time for a new approach. Enter SAT 2.0.  SAT 2.0 is automated, in-the-moment and continuous Rather than having SAT once or twice per year scheduled in hour long blocks, SAT should be continuously delivered through nudges that provide in-the-moment feedback to employees about suspicious activity or risky behavior, and help them improve their security behavior over time. For example, our SAT programs should be able to detect when an employee is about to send all of your customer data to their personal email account, stop the email from being sent, and educate the employee in-the-moment about why this isn’t OK.  SAT also shouldn’t have to rely on security teams to disseminate to employees. It should be as automated as possible, presenting itself when needed most and adapting automatically to the specific needs of the employee in the moment. Automated security makes people better at their jobs.  SAT 2.0 is engaging, memorable and specific to each employee Because each employee has different security strengths and vulnerabilities, we need to make sure that SAT is specifically tailored to suit their needs. For example, employees who work in the finance team might need extra support with BEC awareness, and people in the sales team might need extra support with preventing accidental data loss. Tailoring SAT means employees can spend their limited time learning the things that are most likely to drive impact for them and their organization.   SAT should put the real life threats that employees face into context. Today SAT platforms rely on simulating phishing threats by using pre-defined templates of common threats. 
This is a fair approach for generic phishing awareness (e.g. beware the fake O365 password login page), but it’s ineffective at driving awareness and preparing employees for the highly targeted phishing threats they’re increasingly likely to see today (e.g. an email impersonating their CFO with a spoofed domain).
SAT 2.0 delivers real ROI SAT 2.0 can actually save your company money, by preventing the incidents of human error that result in serious data breaches. What’s more, SAT platforms are rich in data and insights, which can be used in other security systems and tools. We can use this information as an input to other systems and tools and the SAT platform itself to provide adaptive protection for employees. For example, if my SAT platform tells me that an employee has a 50% higher propensity to click malicious links in phishing emails, I can use that data as input to my email security products to, by default, strip links from emails they receive, actively stopping the threat from happening. It’s also crucial to expand the scope of SAT beyond just phishing emails. We need to educate our employees about all of the other risks they face when controlling digital systems and data. Things like misdirected emails and attachments, sensitive data being shared with personal or unauthorized accounts, data protection and PII etc.
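As a toy illustration of that click-propensity idea, the sketch below defangs links in inbound email for recipients whose simulated-phishing click rate exceeds a threshold. The threshold, field names, and defanging approach are all assumptions for illustration, not a description of any real product's behavior.

```python
import re

URL_PATTERN = re.compile(r"https?://\S+")

def defang_links(email_body, click_propensity, threshold=0.5):
    """Neutralize live links for recipients flagged as high-risk by SAT data.

    click_propensity: fraction of simulated phishing links this employee
    has clicked (0.0 to 1.0), taken from the SAT platform.
    """
    if click_propensity < threshold:
        return email_body  # low-risk recipient: deliver the email untouched

    # Rewrite http/https to the conventional "hxxp" defanged form so the
    # link is visible but no longer clickable.
    return URL_PATTERN.sub(lambda m: m.group(0).replace("http", "hxxp"), email_body)

print(defang_links("Reset your password: https://evil.example/login", 0.8))
# → Reset your password: hxxps://evil.example/login
```

The point is the plumbing, not the policy: SAT output becomes a live input to an email security control instead of sitting in a reporting dashboard.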
SAT 2.0 is win-win for your business and your employees   The shift to SAT 2.0 is win-win for both the enterprise and employees. Lower costs and real ROI for the business   Today, SAT is one of the most expensive parts of an enterprise’s security program, but it doesn’t have to be this way. By delivering small educational nudges to employees when they’re needed most, SAT 2.0 eliminates wasted business hours. Not only this, but by being able to detect risky behavior in the moment, SAT 2.0 can meaningfully help reduce data breaches and deliver real ROI to security teams. Imagine being able to report to the board that your SAT 2.0 program has actually saved your company money instead. SAT 2.0 builds employees’ confidence In a recent study about why fear appeals don’t work in cybersecurity, it was revealed that the most important thing for driving behavior change for your employees is to help them build self-efficacy: a belief in themselves that they are equipped with the awareness of threats and the knowledge of what to do if something goes wrong. This not only hones their security reflexes, but also increases their overall satisfaction with work, as they get to spend less time in boring training sessions and feel more empowered to do their job securely. 3 easy steps to SAT 2.0   A training program that stops threats – not business or employee productivity – might sound like a pipe dream, but it doesn’t have to be. SAT 2.0 is as easy as 1, 2, 3… Step 1: Leverage your SAT data to build a Human Risk Score Your SAT platform likely holds rich data and information about your employees and their security awareness that you’re not currently leveraging. Start by using the output of your SAT platform (e.g. test results, completion times, confidence scores, phishing simulation click-through rates) to manually build a Human Risk Score for each employee. 
This provides you with a baseline understanding of who your riskiest and safest employees are, and offers insight into their specific security strengths and weaknesses. You can also add to this score with external data sources from things like your data breach register or data from other security tools you use. Step 2: Tailor your SAT program to suit the needs of departments or employees Using the Human Risk Scores you’ve calculated, you can then start to tailor your SAT program to the needs of employees or particular departments. If you know your Finance team faces a greater threat from phishing and produces higher click through rates on simulations, you might want to double down on your phishing simulation training. If you know your Sales team has problems with sending customer data to the wrong place, you may want to focus training there. Your employees have a finite attention span, make sure you’re capturing their attention on the most critical things as part of your SAT program. Step 3: Connect your SAT platform to your other security infrastructure Use the data and insights from your SAT platform and your Human Risk Scores to serve as input for the other security infrastructure you use. You might choose to have tighter DLP controls set for employees with a high Human Risk Score or stricter inbound email security controls for people who have a higher failure rate on phishing simulations. Want an even easier path to SAT 2.0?   Tessian can help you automatically achieve all of this and transition your organization into the brave new world of SAT 2.0. Using stateful machine learning, Tessian builds an understanding of historical employee security behavior, to automatically map Human Risk Scores, remediate security threats caused by people, and nudge employees toward better security behavior through in-the-moment notifications and alerts. 
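As a rough illustration of Step 1, the sketch below blends a few SAT outputs into a single 0-100 score. The input fields and weights are invented for illustration only; any real Human Risk Score would use your own SAT platform's data and be tuned to your organization.

```python
def human_risk_score(phish_click_rate, test_score, completion_rate):
    """Blend SAT signals into a 0-100 risk score (higher = riskier).

    phish_click_rate: fraction of simulated phishing emails clicked (0-1)
    test_score:       average SAT quiz score (0-1)
    completion_rate:  fraction of assigned training completed (0-1)
    """
    risk = (
        0.5 * phish_click_rate         # clicking simulations: the strongest signal
        + 0.3 * (1 - test_score)       # poor quiz results add risk
        + 0.2 * (1 - completion_rate)  # skipped training adds risk
    )
    return round(risk * 100)

# An employee who clicks 40% of simulations, averages 70% on quizzes,
# and has completed 90% of assigned training:
print(human_risk_score(0.4, 0.7, 0.9))
# → 31
```

Once each employee has a score like this, Steps 2 and 3 follow naturally: segment training by score, and feed the score into downstream controls such as DLP thresholds.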
SAT is not “just a people problem” We so often hear in the security community that “the focus is too much on technology when it needs to be on people”. I disagree. We need to ask more of technology to deliver more impact with SAT.   SAT 1.0 is reminiscent of a time when, to legally drive a car, all you had to do was pass a driving test. You’d been trained! The box had been checked! And then all you had to do was make sure you did the right thing 100% of the time and you’d be fine.   But that isn’t what happened.   People inevitably made mistakes, and it cost them their lives. Today, I still have to pass my driving test to get behind the wheel of a car. But now our cars are loaded with assistive technology to keep us safe doing the most dangerous thing we do in our daily lives. Seatbelts, anti-lock brakes, airbags, notifications that tell me when I’m driving too fast, when I lose grip or when I’m about to run out of fuel.   However hard we try, however good the training, you can never train away the risk of human error. Car companies realized this over 60 years ago—we need to leverage technology to protect people in the moment when they need it the most.   This is the same shift we need to drive (excuse the pun) in SAT.   One day, we’ll have self-driving cars, with no driving tests. Maybe we’ll have self-driving cybersecurity with no need for SAT. But until then, give your employees the airbags, the seatbelt and the anti-lock brakes, not just the driving test and the “good luck”.
Read Blog Post
Podcast, Integrated Cloud Email Security
Episode 4: The Fear Factor with Dr. Karen Renaud and Dr. Mark Dupuis
by Tessian Wednesday, January 20th, 2021
We have a fascinating episode lined up for you this week, as I’m delighted to be joined by Dr. Karen Renaud and Dr. Mark Dupuis. Dr. Renaud is an esteemed Professor and Computer Scientist from Abertay University, whose research focuses on all aspects of human centred security and privacy. Through her work, she says, she wants to improve the boundary where humans and cybersecurity meet. And Dr Dupuis is an Assistant Professor within the Computing and Software Systems division at the University of Washington Bothell. He also specializes in the human factors of cybersecurity, primarily examining psychological traits and their relationship to the cybersecurity and privacy behaviour of individuals.  Together they are exploring the use of fear appeals in cybersecurity, answering questions like whether they work, or whether there are more effective ways to drive behavioral change. They recently shared their findings in the Wall Street Journal, a brilliant article titled Why Companies Should Stop Scaring Employees About Security. And they’re here today to shed some more light on the topic. Karen, Mark, welcome to the podcast! Tim Sadler: To kick things off, let’s discuss that Wall Street Journal article, in which you essentially concluded that fear and scaremongering just don’t work when it comes to encouraging people to practice safer cybersecurity behaviors. So why is this the case? Dr Marc Dupuis: Well, I think one of the interesting things if we look at the use of fear, fear is an emotion. And emotions are inherently short-term type of effects. So in some research that I did, about eight years ago, one thing I looked at was trait affect – which is a generally stable, lifelong type of affect. And I tried to understand how it relates to how individuals, whether in an organizational setting or home setting, how they perceive a threat, that cybersecurity threat, as well as their belief in being able to take protective measures to try and address that threat.  
And one of the interesting things from that research was how important the role of self-efficacy was, but more, perhaps more importantly, the relationship between trait positive affect and self-efficacy. And so trait positive affect is generally feelings of happiness and positivity. And so what this gets at is, the higher levels of positivity we have with respect to trait affect, the more confident we feel in being able to take protective measures. 
So how this relates to fear is, if we need people to take protective measures, and we know that their self-efficacy, their level of confidence, is related to positive affect, why then are we continually going down the road of using fear – a short term emotion – to try and engender behavioral change? And so that was a, you know, interesting conversation that Karen and I had, and then we started thinking about well, let’s take a look at the role of fear specifically. TS: Karen, what would you add to that? Dr Karen Renaud: Well, you know, I had seen Mark’s background, and I’d always wanted to look at fear because I don’t like to be scared into doing things, personally. And I suspect I’m not unusual in that. And when we started to look at the literature, we just confirmed that businesses were trying to use a short-term measure to solve a long-term problem. TS: And so, yeah, I was gonna say, why do you think that is? It almost seems using fear is just such a sort of default approach in so many things. I’m thinking about how people sell insurance, you know, it’s the fear, to try and drive people to believe that, hey, your home’s gonna get burgled tomorrow, you better get insurance so you can protect against the bad thing happening. Why do you think companies actually just go to fear as this almost carrot to get people to do what they’re supposed to do? KR: It feels to me as if the thing that intuitively you think will work often doesn’t work. So you know, those nasty pictures they put on the side of cigarette packets actually are not very effective in stopping heavy smokers. So, whereas somebody who doesn’t smoke thinks, oh my gosh, this is definitely going to scare people, and we’re going to get behavioral change, it actually doesn’t work. So sometimes intuition is just wrong. 
I think in this case, it's a case of not really doing the research the way we did, to say: actually, this is probably not effective. Instead it's going, well, intuitively, this is going to work. You know, when I was at school, they used to cane kids to get them to study. Now we know that that was a really bad thing to do. Children don't learn when they're afraid. So we should start taking those lessons from education and applying them in the rest of our lives as well. 
TS: Yeah, I think it’s a really good call that, as a society, we generally need to do better at understanding how these kinds of fear appeals work and engage with people. And then, maybe if we go a layer deeper into this concept of fear tactics: are people becoming immune to fear tactics? 2020 was a really bad year; a lot of people faced heightened levels of stress and anxiety as a result of the pandemic and all of that change. Do you think that this is playing a part in why fear appeals don’t work?  KR: Well, yeah, I think you’re right. The literature tells us that when people are targeted by a fear appeal, they can respond in one of two ways. They can engage in a danger control response, which is what the designer of the fear appeal intends them to do. For example: if you don’t make backups, you can lose all your photos if you get attacked. So the person engaging in a danger control response will make the backup – they’ll do as they’re told.  But they might also engage in a fear control response, which is the other option people can take. In this case, they don’t like the feeling of fear, and so they act to stop feeling it. They attack the fear, rather than the danger. They might go into denial or get angry with you. The upshot is they will not take the recommended action. So if cybersecurity is all you have to worry about, you might say, “Okay, I’m going to engage in that danger control response.”  But we have so many fear appeals to deal with anyway. And this year, it’s been over the top. So if you add fear appeals to that, folks will just say, “I can’t be doing with this. I’m not going to take this on board.” So I think you’re absolutely right. People are fearful about other things as well as COVID, and cybersecurity fear appeals add another layer to that. But what we also thought about was how ethical it actually is to add to people’s existing levels of anxiety and fear…
TS: And do you think that this compounds? Do you think there’s a correlation – if people are already feeling naturally anxious, stressed about a bunch of other stuff, is adding one more thing to feel scared about even less likely to have the intended results on changing their behavior? MD: Yeah, I think so. I think it just burns people out. And you get this repeated messaging. One thing I think about – because we in the States just got through this whole election cycle, and maybe we’re still in this election cycle – is how all these political ads use fear time and time and time again. And I think, in general, people do start to tune out. They just want to be done with it.  And so it’s one of these things that loses its efficacy; people have just had enough. I have a three-and-a-half-year-old son. And you know, my daughter was very good at listening to us when we said, “This is dangerous, don’t do this.” But my son – I’m like, “Don’t get up there. You’re gonna crack your head open.” And he ignores me, first of all, and then he does it anyway. And he doesn’t crack his head open. And he says, “See, Daddy, I didn’t crack my head open.” And I’m like, no. But it gets to another point: we scare people, and we try to get them scared enough to do something. But when they don’t do it and nothing bad happens, it only reinforces the idea that “Oh, it can’t be that bad anyway.” KR: Yeah, you’re right. Because the cause and the effect are so far apart. If you divulge your email address or your password somewhere, and the attack comes much later, a lot of the time you don’t even make that connection.  But it’s really interesting. If you look way back to the Second World War, Germany decided to bomb the daylights out of London. 
And the idea was to make the Londoners so afraid that the British would capitulate. But what happened was a really odd thing: they became more defiant. And so we need to take a look back at that sort of thing. Somebody called MacCurdy, who wrote a book about this, said people got tired of being afraid. And so they just said, “No, I don’t care how many bombs you’re throwing at us. We’re just not going to be afraid.” Now, people are having so many fear appeals thrown at them that the appeals are losing their efficacy. TS: A very timely example, talking about the Blitz in World War II, as I just finished reading a book about exactly that – the resilience of the British people through that particular period of time. And as you say, Karen, I knew very little about this topic, but it absolutely had the unintended consequence of bringing people together. It was like a rallying cry for the country to say, “We’re not going to stand for this; we are going to fight it.”  And I guess everything you’re saying is reinforced by the research you conducted as well, which completely makes sense. I’m going to read from some notes here. In the research paper, you surveyed CISOs about their own use of fear appeals in their organizations – how Chief Information Security Officers actually engage with their employees. 55% were against using fear appeals, with one saying “fear is known to paralyse normal decision making and reactions.” And 36% thought that fear appeals were acceptable, with one saying that fear is an excellent motivator. But not a single CISO ranked scary messages as the most effective technique. What were your thoughts on these findings? Were you surprised by them?
MD: We were, I think, surprised that so many were against the use of fear appeals. You look at these individuals – the people chiefly responsible for the information security of their organizations – and here they are telling us, yeah, we don’t believe in using fear appeals. And there are multiple reasons for this. One, maybe they don’t believe in the efficacy of it. But I think it’s also because, while we don’t know how effective a fear appeal is going to be, we do know that it can damage the employee-employer relationship.  And when you add in the ethical issues related to it, you start to add up the possible negative ramifications of using fear appeals. And it was interesting, going back to that example during World War II – you think about why what England was doing was effective. It’s because they were in it together; they had this communal response of: we’re sick of being scared, we’re in this together, we’re gonna fight this together. And I think maybe CISOs are starting to see that – to try and make the employee/employer relationship more positive and empower their employees rather than trying to scare them and hurt that relationship. TS: And there was one really interesting finding, which was that the longest-serving CISOs – i.e. those with more experience – were more likely to approve the use of cybersecurity fear appeals. Why do you think that is? Is fear maybe an old-school way of thinking about cybersecurity?  KR: I think as a CISO, it’s really difficult to stay up to date with the latest research, the latest way of thinking. They spend a lot of time keeping their finger on the pulse of cyber threat models and the compromises hackers are coming up with. But if you go and look at the research, the attitudes towards users are slowly changing. And maybe the people who approve of fear appeals aren’t aware of that. 
Or it might be they’ve just become so exasperated by the behavior of their employees over the years that they don’t have the appetite for slower behavioral change mechanisms. And I understand that exasperation. But I was really quite heartened to see that the others said no, this is not working – especially the younger ones. So you feel that cultural change is happening. TS: One thing I was gonna ask: there’s this interesting concept of the CISOs themselves, and whether they use fear appeals in their organization. Do you think that’s somewhat a function of how fear appeals are used on them, if that makes sense? They have a board that they’re reporting to, they have a boss, they have stakeholders that they’ve got to deliver results for – namely, keep the organization secure, keep our data secure, keep our people secure. Do you think there’s a relationship between how fear appeals are used on them and how they then use them on others in their organization? MD: I think that’s an interesting question. I mean, I think that’s always possible. A lot of times people default to what they know, what they’re comfortable with, and what they’ve experienced, and maybe that’s why we see some of the CISOs that have been in the role longer default to that. And some of it might be organizational and structural as well. Like I said, if they are constantly being bombarded with fear appeals by those they report to, then maybe they are more likely to engage in fear appeals themselves. The answer is a little unclear, but I do think it’s an interesting question because, again, intuitively it makes sense. I can have a conversation with someone and, if I want to use fear appeals, I don’t have to make a case for them. The case almost makes itself. 
But trying to make the counter-argument and say, well, maybe fear appeals don’t work – that’s a much bigger leap than saying, “Well, yeah, let’s scare someone into doing something. Of course that’s gonna work, right?”
TS: I think it’s an interesting point. And it’s really important that we remember, certainly in the context of using fear appeals, that there is a role beyond the CISO as well. It’s the role the board plays, it’s the culture of the organization, and how you set those individuals up for success. On one hand, as a CISO, the sky is always falling. There is always some piece of bad news, something that’s going wrong, or something you’re defending. So maybe there’s something in that for thinking about how organizations can empower CISOs, so that they can then go on to empower their people.  And so, shifting gears slightly: we’ve spoken a lot about why fear appeals are maybe not a good idea, and how they are limited in their effectiveness. But what is the alternative? What advice would you give to the listeners of this podcast about how they can improve employee cybersecurity behavior through other means, especially as so many are now working remotely?  KR: Well, going back to what Mark was saying, we think the key really is self-efficacy. You’ve got to build confidence in people, and without making them afraid.  A lot of the cybersecurity training that you get in organizations is a two-hour session that brings everyone into a room, and we talk to them. Or maybe people are required to do this online. This is not self-efficacy. This is awareness. And there’s a big difference. The thing is, you can’t deliver cybersecurity knowledge and self-efficacy like a COVID vaccination. It’s a long-term process, and employers really have to engage with the fact that it is a long-term process, and just keep building people’s confidence.  And to what you said earlier about the whole community effect: up to now, cybersecurity has been a solo game. But it’s not tennis, a solo game, right? It’s a team sport. And we need to get all the people in the organization helping each other to spot phishing messages or whatever. 
But you know, make it a community sport, number one, where everybody supports each other in building that level of self-efficacy we all need. TS: I love that. And I think we said it earlier, but this concept of teamwork and coming together is so, so important. Mark, would you add anything to that in terms of alternative means to fear appeals that leaders and CISOs can think about using with their employees? MD: Yeah, I mean, it’s not gonna be one size fits all. But whatever approach we use, as Karen said, we really do need to tap into that self-efficacy. By doing that, people are going to feel confident and empowered to take action.  And we need to think about how people are motivated to take action. Fear is scaring them personally about consequences they may face, like termination or fines or something else. But if you start developing, as I mentioned before, this sense of being in it together, you’re developing an intrinsic motivation: “I’m not doing this because I’m fearful of the consequences so much as I’m doing this because we’re all in this together.” We want to make this better for everyone. We want to have a good company, we want to be able to help each other. And we want people to take the actions that are necessary to make sure that we are secure, and we’re here to be able to talk about it.  TS: Yeah, it’s exactly what both of you are saying: if somebody doesn’t have that self-efficacy, they’re not going to raise things, they’re not going to bring issues forward. And ultimately, that’s when disasters happen and things can go really bad. And I love the idea that striking fear into the hearts of people isn’t necessarily going to have the desired outcome 100% of the time. But isn’t a little bit of fear needed? 
I mean, when I say this, of course, it has to be used ethically. But I’m thinking about the nature of what organizations are facing today – we’ve just heard about the SolarWinds hack, and there are a number of others as well. These things are pretty scary, and the techniques being used are pretty scary. So isn’t a little bit of fear required here? Is there any merit to using it to make people understand the severity and the consequences of what’s at stake? MD: Yeah, I think there’s a difference between fear and providing people with information that might inherently have scary components to it. What I mean by that is: when people use fear appeals, they’re doing it to scare people into complying with some specific goal. Instead, we should provide information to people – we should let people know that there are some possible things that can happen, some possible consequences – but not with the goal of scaring them; rather with the goal of empowering them by giving them information. That, again, taps into self-efficacy more than anything else, because then they know that there’s some kind of threat out there. They’re not scared, but they know there’s a threat. And if they feel empowered through knowledge and through that self-efficacy, then they’re more likely to take action – as opposed to receiving a message that’s just designed to scare them into compliance.
TS: From your experience, can either of you think of any really good examples of companies or campaigns that have built this kind of self-efficacy, or empowered people, without having to use fear as the motivating factor? KR: I think I mentioned one of them in the paper. There’s an organization that I’m familiar with that had a major problem with phishing. They appointed one person, and if anybody had a suspicious message, they’d send it to him, and he’d say, “You were quite right to report this to me. Thank you so much for being part of the security perimeter of this organization. But the email looks fine – you can click.” Over time, this has actually built up efficacy. They don’t have phishing problems anymore in that organization, because they have this person. It’s almost an informal thing he does, but he’s building up self-efficacy slowly but surely across the organization, because nobody ever gets made to feel small or humiliated by reporting. We’re all participating; we’re all part of this. That is the best example I’ve seen of how this has worked.  TS: Yeah, I really like that. It’s like when people do risk audits – they will say that the time the alarm should sound is when there’s nothing on the risk register. When the risk register is getting five to ten entries every single week, you know that people actually do have the confidence to come forward. And they’re paying attention, right? They’re actually aware of these things.  And where I want to go next is to talk about the vendor side of the cybersecurity world. Many companies that are trying to provide solutions to organizations rely quite heavily on this concept of fear, uncertainty and doubt. It’s even got its own acronym, right? FUD. And FUD is used so heavily. As the saying goes, “bad news sells” – we see scary headlines, and massive data breaches dominate the media landscape. 
So I think it’s fair to say eliminating FUD is going to be tough, and there is a lot of work to do here. In your opinion, who is responsible for changing the narrative? And what advice would you give to them for how they can start doing this? MD: I think it definitely starts with having these conversations and trying to, I guess, place a little uncertainty or doubt into those decision makers and CISOs about how effective fear really is. It’s flipping the script a little bit. And maybe part of it is we need a new acronym – to say, well, give this a try, or this is why we think this is going to work, or this is what the research shows, and this is what your peer organizations are doing, and they find it very effective; their employees feel more empowered. So a lot of it is just beginning with those conversations and trying to flip the script a little bit. It’s always easy to criticize something, but then the bigger question is: okay, if we’re no longer taking the use of fear and its effectiveness for granted, what are we going to replace it with?  We know that self-efficacy is the major player there, but what’s that going to look like? I think Karen gave a great example of what one organization is doing to improve levels of self-efficacy. It’s creating that spirit of we’re-all-in-this-together, and it’s less about a formalised, punitive type of system. So look at ways to tap into that. For one organization it might be a slightly different approach than for another, but I think the concepts will be the same.
TS: Again, it ties into a really important point, which is that more understanding is needed, I think, by the lay person – and by the people putting this messaging out.  And then, Mark, to your point about this being a collective responsibility: I see it as a great opportunity as well, because I think everyone would welcome some more positivity and optimism, right? And we can actually bring that to the security community, which is generally a fearful community, focused on defense and threat actors. The language, the aesthetic – everything is generally negative, fearful, scary. I think there’s a great opportunity here: it doesn’t have to be that way, and we can come together and have a much more positive dialogue and a much more positive response.  There was something else I wanted to touch on. Karen, in your research you speak about this concept of “Cybersecurity Differently.” And you explain – I’m going to quote you verbatim here – “It’s so important that we change mindsets from the human-as-a-problem to human-as-solution in order to improve cybersecurity across the sociotechnical system.” What do you mean by that? And what are the core principles of Cybersecurity Differently? KR: When you treat your users as a problem, that informs the way you manage them. So what you see in a lot of organizations, because they see their employees’ behaviors as a problem, is that they’ll train them, they’ll constrain them, and then they’ll blame them when things go wrong. That’s the paradigm.  But what you’re actually doing is excluding them from being part of the solution. So it creates the very problem you’re trying to solve. What you want is for everyone to feel that they’re part of the security defense of the organization. I did this research with Marina Timmerman, from the Technical University of Darmstadt. 
And so the principles are:  One we’ve been speaking about a lot: encourage collaboration and communication between colleagues, so that people can support each other. We want to encourage everyone to learn – it should be a lifelong learning thing, not just something that IT departments have to worry about.  Security isn’t solo, as I’ve said before. You have to build resilience as well as resistance. Currently, a lot of the effort goes into resisting anything that somebody could do wrong, but you don’t then have a way of bouncing back when things do go wrong, because all the focus is on resistance. And a lot of the time we treat security awareness training and policies like one-size-fits-all. But that doesn’t defer to people’s expertise. It doesn’t go to the people and say, “Okay, here’s what we’re proposing – is it going to be possible for you to do these things in a secure way? And if not, how can we support you to make what you’re doing more secure?”  Then, you know, people make mistakes. When a phishing message comes into an organization, everyone focuses on the people who fell for it. But there were many, many more people who didn’t fall for it. What we need to do is examine the successes. What can we learn from those people? Why did they spot that phishing message? So that we can encourage that in the people who did happen to make mistakes.  I didn’t get these ideas just out of the air. I got them from some very insightful people. One of them was Sidney Dekker, who has applied this paradigm in the safety field. What’s interesting is that he got Woolworths in Australia to allow him to apply the paradigm in some of their stores. They previously had all these signs up all over the store – “Don’t leave a mess here” and “Don’t do this” – and they had weekly training on safety. He said, right, we’re taking all the signs out. Instead, we’re just going to say: you have one job – don’t let anyone get hurt. 
And the stores that applied this approach won the safety prize for Woolworths the next year. So, you know, just the idea that everyone realized it was their responsibility – and it wasn’t all about fear, rules and that sort of thing. So I thought, if he could do this in safety, where people actually get harmed for life or killed, surely we can do this in cyber?! And then I found a guy who ran a nuclear submarine in the United States. His name is David Marquet. He applied the same thing on his nuclear submarine, which you would also think – oh my goodness, a nuclear submarine, there’s so much potential for really bad things to happen! But he applied the same sort of paradigm shift, and it worked! He won the prize for the best-run nuclear submarine in the US Navy. So it’s about being brave enough to say: actually, what we’re doing is not working, and every year it’s not working. Maybe it’s time to think, well, can we do something different?  But like you said, Mark, we need a brave organization to say, okay, we’re gonna try this. And we haven’t managed to find one yet. But we will, we will! TS: And that’s one of the things I wanted to close out on. As I said at the beginning of this podcast, I love the article in the Wall Street Journal, but also just the mission that both of you are on – to improve what I really see as the relationship between people and the cybersecurity function. And my question to you, again, touches on that concept: how much progress have we actually made? And then, to close, how optimistic are you that we can actually flip the script and stop using fear appeals? MD: Yeah, I feel like we’ve made a lot of progress, but not nearly enough. And part of the challenge, too, is that none of this stuff is static, right? 
All this stuff is constantly changing; the cybersecurity threats out there change. We’re talking so much about phishing today, and social engineering is going to be something different next year. So it’s always this idea of playing catch-up – but also having the fortitude to take that step out there, to take that leap of faith that maybe we can do something else besides using fear. 
MD: I think I am optimistic that it can be done. We can make a lot of progress. For it to be done 100%… I don’t know that we’ll ever get to that point. But I feel like we can make a lot of progress. Part of this is recognizing – you mentioned the sociotechnical side of this – that this isn’t just a technical problem, right? A lot of times the people we put into cybersecurity positions have a very strong technical background, but they’re not bringing in other disciplines. From the arts, from literature, from the humanities, and from design, we can bring new considerations and look at this as a holistic, multidisciplinary problem. If the problem is like that, well, then the solutions definitely have to be as well.  We have to acknowledge that and start getting creative with the solutions. And we need those brave organizations to try these different approaches. I think they’ll be pleased with the results, because they’re probably spending a lot of time and money right now trying to make the organization more secure. The CISOs are telling their bosses, well, this is what we’re doing: we’re scaring them. But the results don’t always speak for themselves.  TS: And, Karen, what would you add to that? KR: Well, I just totally concur with everything Mark said; I think he’s rounded this off very nicely. I ran a study recently – a really unusual study – where we put old-fashioned typewriters in coffee shops and other places, and we put pieces of paper in. We typed something along the top that said, “When I think about cybersecurity, I feel…” And we got unbelievable stuff back from people: “I don’t understand it,” “I’m uncertain.” Lots and lots of negative responses – so there’s a lot of negative emotion around cyber. And that’s not good for cybersecurity. So I’d really like to see something different. 
And, you know, the old saying: if you keep doing the same thing without getting results, there’s something wrong. We can see it’s not working, and this might be the best moment to change and make it work. TS: I completely agree. I completely agree. Thank you both so much for that great discussion. I really enjoyed going deeper and hearing your thoughts on all of this. As you say, I think it’s a win-win scenario on so many counts. More positivity means better outcomes for employees, and I think it means better outcomes for the security function.   If you enjoyed our show, please rate and review it on Apple, Spotify, Google or wherever you get your podcasts. And remember, you can access all the RE: Human Layer Security podcast episodes here. 
Integrated Cloud Email Security, Podcast
6 Cybersecurity Podcasts to Listen to Now
Tuesday, January 19th, 2021
If you’re interested in cybersecurity, this list of the best cyber security podcasts is for you. We’ve collated six of the best cyber security podcasts in the world — where engaging hosts provide breaking news, intelligent analysis, and inspiring interviews.

The Best Cyber Security Podcasts

The CyberWire Daily
Launched: December 2015
Average episode length: 25 minutes
Release cycle: Daily

As one of the most prolific and productive cybersecurity news networks, CyberWire has access to world-class guests, top research, and breaking news. The CyberWire Daily brings listeners news briefings and a great variety of in-depth cybersecurity content. It showcases episodes from across CyberWire’s podcast catalog, including Career Notes, in which security leaders discuss their life and work; Research Saturday, where cybersecurity researchers talk about key emerging threats; and Hacking Humans, which focuses on social engineering.

Here are some great recent episodes:
Deep Instinct’s Shimon Oren talks about his research on the worrying re-emergence of the Emotet malware
Craig Williams, head of outreach at Cisco’s Talos Unit, discusses the perils of malicious online ads (malvertising)
Ann Johnson, Microsoft’s Corporate VP of Cybersecurity Business Development, discusses her career journey from lawyer to cybersecurity executive

Unsupervised Learning
Launched: January 2015
Average episode length: 25 minutes
Release cycle: Weekly

Originally called “Take 1 Security,” Daniel Miessler’s Unsupervised Learning podcast is an insightful look at long-running themes and emerging issues in cybersecurity. Miessler has provided thoughtful written commentary on cybersecurity for over two decades. His podcast’s format varies: most weeks involve a run-down of the week’s cybersecurity headlines, but some episodes feature an essay, interview, or book review.  
Some standout episodes over the past year have included:
An analysis of Verizon’s all-important annual data breach report
An interview with General Earl Matthews on election security
A spoken essay about how the US should address its ransomware problem

WIRED Security
Launched: November 2020
Average episode length: 8 minutes
Release cycle: Every weekday

WIRED Security is part of WIRED’s “Spoken Edition” range of podcasts, and it’s a little different from the other podcasts on our list. Each episode features a reading of a recently published WIRED article about cybersecurity. We love this podcast because it’s short and snappy (episodes generally range from 4 to 12 minutes long), released daily, and provides free access to WIRED’s incredible in-depth journalism.

Some great episodes from the past few months include:
A recap of 2020’s worst hacks (there were many to choose from)
An analysis of the critical — and possibly permanent — security flaws among Internet of Things devices
A look at how Russia could be exploiting poor cybersecurity practices among remote workers

RE: Human Layer Security
Launched: December 2020
Average episode length: 22 minutes
Release cycle: Weekly

RE: Human Layer Security is an exciting new podcast hosted by Tessian CEO Tim Sadler. Sadler talks to business and technology leaders about their experiences running and securing some of the world’s leading organizations. The show flips the script on cybersecurity and addresses the human factor. Join world-class business and technology leaders as they discuss how and why companies must protect people – not just machines and data – to stop threats and empower employees. 
Guests have included:
Howard Schultz, former Starbucks CEO, on why culture trumps strategy when building and protecting a business
Stephane Kasriel, former Upwork CEO, on how companies can embrace remote working
Tim Fitzgerald, CISO at ARM, on why security should serve people’s interests and empower employees to take care of themselves

New episodes launch every Wednesday. Don’t miss out!

Security Now
Launched: August 2005
Average episode length: 2 hours
Release cycle: Weekly

Security Now is the oldest podcast on our list, but it has truly stood the test of time. Now entering its 16th year, the podcast still has a vast listener base — and continues to provide timely and insightful analysis of important cybersecurity topics. Every Monday, Security Now provides a detailed breakdown of all (and we mean all) the week’s security and privacy headlines. If you’re ever feeling out of the loop, spending a couple of hours listening to hosts Steve Gibson and Leo Laporte will bring you back up to date.

Recent discussions on Security Now have included:
How SolarWinds shareholders are launching a class-action lawsuit following the company’s disastrous hack
Why WhatsApp users are flocking to Signal following a privacy policy update
How swatters are using IoT devices to misdirect emergency services teams

The Many Hats Club
Launched: November 2017
Average episode length: 45 minutes
Release cycle: Sporadic

The Many Hats Club is a coalition of people from across the information security community, including coders, engineers, and hackers — whether blackhat, whitehat, or greyhat. The Many Hats Club podcast is a great way to get to know the next generation of infosec professionals. Host CyberSecStu interviews a great range of guests on a broad range of topics, including hacking, privacy, and cybersecurity culture. 
Recent highlights include:

- A conversation about DDoS mitigation and mental health with security researcher Notdan
- A discussion about women in infosec with cybersecurity commentator Becky Pinkard
- A strictly NSFW interview with the controversial McAfee founder John McAfee

Our Roundup

What's your favorite cybersecurity podcast? Let us know by tagging us on social media! And, if it's RE: Human Layer Security, make sure you follow it on Spotify or subscribe on Apple Podcasts so you never miss an episode.
Integrated Cloud Email Security, Podcast
Episode 3: Security For The People, Not To The People, With Tim Fitzgerald
by Tessian Wednesday, January 6th, 2021
In this episode of the RE: Human Layer Security podcast, Tim Sadler is joined by Tim Fitzgerald, the chief information security officer at ARM and former chief security officer at Symantec.  Tim believes that people are inherently good, and that branding employees the weakest link in cybersecurity does them a disservice. Employees just want to do a good job. Sometimes mistakes happen, and those mistakes can compromise security. But rather than blaming people, Tim urges leaders to first ask themselves whether they've given their people the right tools, and armed them with the right information, to help them avoid those mistakes in the first place. In this interview, we talk about the importance of changing behaviours, how businesses can make security part of everybody's job, and how to get boards on board.  And if you want to hear more Human Layer Security insights, all podcast episodes can be found here.  Tim Sadler: As the CISO of ARM, what are some of the biggest challenges that you face? And how do they affect the way you think about your security strategy?  Tim Fitzgerald: Our challenges are – not to be trite – sort of opportunities as well. By far the biggest single challenge we have is ARM's ethos around information sharing. As I noted, we have a belief, which I think has proven true over the 30+ years ARM has been in business, that this level of information sharing has allowed ARM to be extraordinarily successful and innovative.  So there's no backing away from that as an ethos of the company. But it represents a huge challenge, because we give people a tremendous amount of personal freedom in how they access our information and systems, and in how they use and share our data – both internally with their peers, and with the customers we're very deeply embedded with.
We don't sell a traditional product where the customer buys it, we deliver it, and then we're done. The vast majority of our customers spend years with us developing their own products based on our intellectual property. The level of information sharing that happens in a relationship like that is quite difficult to manage, to be candid. TS: It really sounds like you've had to think about not just the effectiveness of your security strategy and systems, but also their impact on the productivity of employees. Has Human Layer Security been part of your strategy for a long time at ARM, or even in your career before ARM? TF: In my career before ARM, at Symantec – Symantec was a very different company, more of a traditional software sales company. It also had 25,000 people who thought they knew more about security than I did. That presented a unique challenge in terms of how we worked with that community, but even at Symantec, I was thinking quite hard about how we influence behaviour.  Ultimately, what it comes down to, for me, is that I view my job in human security as somewhere between a sociology experiment and a marketing experiment. We're really trying to change people's behaviour in a moment – not universally, and not their personal ethos. Will they make the right decision in this moment, and not do something that creates security risk for us?  I label these micro-transactions: we get these small moments in time where we have an opportunity to interact with people and influence their behaviour. And I've been evolving that strategy at ARM. It's a very different place in many respects, but I'm trying to think about not just how we influence behaviour in a moment in time, but whether we can actually change people's ethos. Can we make responsible security decision-making part of everybody's job?
And I know there's not a single security person who will say they're not trying to do that. But it turns out to be a very, very hard problem.  The way we think about this at ARM is that we have a centralized security team and, ultimately, security is my responsibility. But we very much rely on what we consider our extended security team, which is all of our employees. Essentially, our view is that they can undo all of the good that we do behind them. But one of the things that's unique about how we look at this at ARM is that we very much take the view that people aren't the weakest link. It's not that they don't come with good intent, or don't want to be good at their jobs, or that they're going to take shortcuts just to get that extra moment of productivity – actually, everybody wants to do a good job. And our job is to arm them with both the knowledge and the tools to keep themselves secure, rather than trying to secure around them.
And, just to finish that thought, we do both. We're not going to stop doing all the other things we do to protect our people in ways they don't even know exist. But the idea, here, is that we have rare opportunities to empower employees to take care of themselves.  One of the things we really like about Tessian is that it's something we've done for our employees, not to our employees. It's a tool that is meant to keep them out of trouble.  TS: I think that's a really good point. A lot of what you're talking about here is security culture, and establishing a great security culture as a company. And I love that phrase: for employees rather than to employees. It sounds like you have to have that at the core of the organization, and be thinking about the concept of human error in the right way when making security decisions – accepting that people are always going to make mistakes, because they are people. Walk us through how you think about that, and what advice you might have for some of the other organizations on the line today about how they might talk to their boards or their other teams about rationalising this risk internally, and working with the fact that our employees are only human. TF: For me, this has been the most productive dialogue we've had with our board and our executive around security. I think most of you on the phone will recognise that when you go in and start talking about the various technical layers we have available to protect our systems, the eyes glaze over pretty quickly. They really just want to know whether or not it works.  The human security problem is one you can get a lot of passion on – in part, I think, because it's an unrecognized risk in the boardroom.
While the insider – meaning the traditional insider threat we think about, a person who is really acting against our best interests – can be very, very impactful, at least at ARM, and certainly in my prior career, the vast majority of issues that have caused us harm over the last several years have been caused by people who do not wish us harm.
They've been people just trying to do their jobs – making mistakes, doing the wrong thing, making a bad decision at a moment in time. And figuring out how to help them not do that is a much more difficult problem than figuring out how to put in a firewall or DLP. So we really try to separate that conversation. There are a lot of things we do to try and catch the person who is truly acting against our best interests, but that, in many ways, is a totally different problem. At ARM, what accounts for more than 70% of our incidents, and certainly more than 90% of our loss scenarios, is people just doing the wrong thing and making the wrong decision – not actively seeking to cause ARM harm.  If I might give an example, because it helps bring it home. One of the most impactful events we've had in the last two years at ARM involved somebody in our royalties team. You know, we sell software, right? Every time somebody produces a chip, we get paid, so that's a good thing for ARM. But somebody's royalty forecast gives you a really good sense of what markets they intend to enter and where they intend to go as a company.  And most of our customers compete with each other, because they're all selling similar chips, designed into various formats. So one customer having somebody else's data would be hugely impactful. And in fact, that's exactly what happened not that long ago. Somebody pulled down some pertinent information for a customer into a spreadsheet, and then fat-fingered an email and sent it to the wrong customer. They sent it to Joan at Customer X instead of Joan at Customer Y. And that turned out to be a hugely impactful event for us as a company, because this was a major relationship, and we essentially disclosed one customer's strategic roadmap to another. A completely avoidable scenario.
And it was a situation where that employee was trying to do their best for their customer, and ultimately made a mistake. TS: Thanks for sharing that example with us. It's a really good point. For a long time in security, when we talked about insider threats, people immediately thought about malicious employees and malicious insiders. And it's absolutely true what you say: the reality is that most of your employees are trustworthy and want to do the right thing, but they sometimes make mistakes. And when you're doing something as often as sending an email or sharing data, the errors can be disastrous – and they can be frequent as well… TF: …it's the frequency that really gets us, right? For the insider threat – the really bad guy acting against our best interests – we have a whole bunch of other mechanisms that, while still hard, can try to find them. That's infrequent but high impact. What we're finding is that the person who makes a mistake is high frequency, medium to high impact. So we're just getting hammered on that kind of stuff. The reason we came to Tessian in the first place was to address that exact issue, and I really believe in where you guys are going in terms of addressing the risk associated with people making bad choices, versus acting against our interests. TS: This concept of high frequency is super interesting, and one of the questions I was actually going to ask you was around that. Hackers and cyber attacks get all the attention because they're the scary things, and naturally it's what boards and executives want to talk about. Accidents almost seem less scary, so they get less focus. But there's this frequency point of how often we share data and send emails. It has analogies in other parts of our lives, too – we don't think twice before we get in a car.
But actually, it's very easy to have human error there, and things can go really badly. Do you think we need to do more to educate our boards, our executive teams, and our employees – to open their eyes to the fact that inadvertent human error, or accidents, can be just as damaging as attackers or cyber attacks?  TF: It depends on the organization, but I would suggest that, generally, we do need to do more. As an industry, we've had a lot of amazing things to talk about to get our boards' attention over the last 10 years. These major events and loss scenarios, often perpetrated by big hacking groups, sometimes nation-sponsored – it's very sexy to talk about that kind of stuff and use it as justification for why we need to invest in security.  And actually, there's a lot of legitimacy behind that. It's not fake messaging; it's just one part of the narrative. The other side of the narrative is that we now spend more time on human error than we do on nation-state type threats. Because what we're finding is that, not only by frequency but by impact, the vast majority of what we're dealing with right now is avoidable events, based on human error – and perhaps predictable human error.  I very much chafe at the idea of thinking of our employees as the weakest link. I think it underserves people's intent and how they choose to operate. So rather than that, we try to take a look in the mirror and ask: what are we not providing these people to help them avoid these types of scenarios?  And I think if you change your perspective on that – rather than seeing people as an intractable problem that we can't conquer – and start thinking about how we mobilise them as part of our overall cybersecurity strategy and defense mechanisms, it causes you to rethink whether or not you're serving your populace correctly.
And I think, in general, not only should we be talking to our senior executives and boards more clearly about where real risk exists – which, for most companies, is right in this zone – but we need to be doing more to help people combat it, rather than casting blame or assuming the average employee is untrustworthy or will do the wrong thing.  You know, I'm an optimist, so I genuinely believe that's not true. I think if we give people the opportunity to make a good decision, and we make the easiest path to getting their job done the secure path, they will take it. That is our job as security professionals.
TS: There's a huge point there, and the word that jumps out for me is empowerment. It's strange, when you look at a lot of the security initiatives companies deploy, how often we fail to factor in the impact they will have on employees' productivity.  At Tessian, we're great believers that the greatest technology we've created has empowered society – it's made people's lives better. And we think security technology should not only keep people safe, but do so in a way that empowers them to do their best work. When you were thinking about how to solve this problem of inadvertent human error on email – people sending emails to the wrong people, or dealing with phishing and spear phishing – what consideration did you give to the other solutions out there? What did Tessian address for you that you couldn't quite address with those other platforms?  TF: A couple of things. Coming from Symantec, as you might expect, I used all of their technology extensively, and one of the best products Symantec offers is their DLP solution. I'm very familiar with it – I would argue we had one of the more advanced installations in the world running internally at Symantec – so I'm extremely familiar with the capability of those technologies. What I learned in my time doing that is that, when used correctly in a finite environment, on a finite data set, that type of solution can be very effective at keeping data where it's supposed to be and understanding movement in that ecosystem. When you try to deploy it broadly, it has all the same problems as everything else: you run into the inability of the DLP system to understand where data is supposed to be. Is this person supposed to have it, based on their role and their function? It's not a smart technology like that.
So you end up trying to write these very, very complex rules that are hard to manage. What I liked about Tessian is that it gave us an opportunity to use machine learning in the background to develop context about whether what somebody was doing was atypical – or perhaps not atypical, but actually part of a bad process. From the very nature of the information people are sending around, and its characteristics, we can get a sense of whether what they're doing is creating risk for us. So it doesn't require recipes that are completely prescriptive about what we're looking for. It allows us to learn, with the technology and with the people, what normal patterns of behaviour look like, and therefore intervene when it matters – not react every time another bell goes off.  To be clear, we still use DLP in very limited circumstances. But what we found is that it was not really a viable option for us, particularly in the email stream, for accurately identifying when people were doing things that were risky, versus moving a very specific data set that we didn't want them to.  TS: That makes a tonne of sense. And if you're thinking about the future – what you hope Tessian can become – where does it go from here? What's the opportunity for Tessian as a Human Layer Security platform?  TF: I recall talking to you guys last spring, and one of the things I was poking at was: you have all this amazing context about what people are doing in email, and that's where people spend most of their time. It's where most of the risk comes from, for most organizations. So how can we go beyond just making sure someone doesn't fat-finger an email address, or send a sensitive file where it's not supposed to go?
Or beyond the other use cases that come along with Tessian? Can we take the context that we're gaining from how people use email, and create more of those moments in time to connect with them – to become more predictive? Where we start to see patterns of behaviour in individuals that suggest they are either susceptible to certain types of risk, or likely to take a particular action in the future. There's a tremendous amount of knowledge that can be derived from that context, particularly if you start thinking about how you can put it together with what would traditionally be the behavioural analytics space. Can we start to mesh what we know about the technology and the machines with real human behaviour, and therefore have a very good picture that would help us – not only to find the actual bad guys we know are in our environment, but also to get out in front of people's behaviour, rather than reacting to it after it happens? For me, that's the holy grail of what this could become: if not predictive, at least leading us towards where we think risk exists, and allowing us an opportunity to intervene before things happen. TS: That's great, Tim. Thanks so much for sharing that with us. TS: It was great to understand how Tim has built his security strategy so that it aligns with – and also enhances – the overall ethos of the company. More information sharing equals a more innovative and more successful business. I particularly liked Tim's point that businesses should make the path of least resistance the most secure one. By doing that, you can enable people to make smart security decisions and build a more robust security culture within an organization.  As Tim says, it's security for the people, not to the people. And that's going to be so important as ways of working change.
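The detection idea Tim contrasts with prescriptive DLP rules – learning each sender's normal recipient patterns and flagging anomalies like the "Joan at Customer X instead of Joan at Customer Y" mistake – can be illustrated with a minimal sketch. This is a hypothetical toy, not Tessian's actual algorithm; all names and addresses below are invented for illustration:

```python
from collections import defaultdict

class RecipientAnomalyChecker:
    """Toy sketch of pattern-based misdirected-email detection.

    Instead of prescriptive rules, it learns which recipients each
    sender normally emails, then warns when a new recipient closely
    resembles a known one (same local part, different domain) - the
    classic fat-fingered autocomplete mistake.
    """

    def __init__(self):
        # sender -> {recipient -> count of past emails}
        self.history = defaultdict(lambda: defaultdict(int))

    def record(self, sender, recipient):
        """Learn from an email the sender has already sent."""
        self.history[sender][recipient] += 1

    def is_anomalous(self, sender, recipient):
        """True if the sender has never emailed this recipient, but HAS
        emailed the same person-name at a different domain."""
        past = self.history[sender]
        if recipient in past:
            return False  # known recipient, nothing unusual
        local = recipient.split("@")[0]
        return any(r.split("@")[0] == local for r in past)

checker = RecipientAnomalyChecker()
checker.record("analyst@arm.example", "joan@customer-x.example")

# Same "joan", wrong customer domain: flag it before sending.
print(checker.is_anomalous("analyst@arm.example", "joan@customer-y.example"))  # True
# Known recipient: no warning.
print(checker.is_anomalous("analyst@arm.example", "joan@customer-x.example"))  # False
```

A production system would of course use far richer context (message content, attachment sensitivity, send-time patterns), but the design choice is the same one Tim describes: learn normal behaviour and intervene on deviations, rather than enumerate every bad outcome in advance.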
If you enjoyed our show, please rate and review it on Apple, Spotify, Google, or wherever you get your podcasts. And remember, you can access all the RE: Human Layer Security podcast episodes here.