Live Recap (June 22, 2021): How to Properly Prioritize Insider Risk

Programs, Indicators of Intent, and Beer(?): three takeaways from our conversation with Samantha Humphries, Chris Tillett and Mark Wojtasiak

Earlier this week, we got the opportunity to discuss how to properly prioritize Insider Risk Management with Samantha Humphries, Head of Security Strategy, EMEA at Exabeam, Chris Tillett, Senior Security Engineer at Exabeam and Mark Wojtasiak, Head of Security Product Research at Code42. You can read the full transcript or watch the recording below, but here are the key takeaways.

1) Build a program; don’t rely on a product alone

Insider Risk Management involves a complex mixture of business processes and policies, human psychology, and technology; trying to distill it into a single silver bullet will not work. You need to build a program that addresses the problem and engages business leaders across your organization. The outcome? You will learn what each business unit’s most valuable data is, how its employees prefer to work, and begin to establish a baseline of “normal” or expected data movement. Merely throwing a technical solution at the problem will not solve it and, in many cases, will only exacerbate it. As was pointed out during our session, a program necessarily involves cross-functional leadership and cannot be accomplished from a mindset of security as the “department of ‘No.’”

2) There are indicators of compromise and intent; listen to them.

At Code42, we call these “canaries in the coal mine” Insider Risk Indicators (IRIs), and we have explicitly called out some of them within our product’s prioritization framework. With that said, here are a few types of indicators that came up in our discussion, suggesting that a user’s account, or the user themselves, may be compromised:

Indicators of potential account compromise:

  • Impossible travel (e.g. logging in from APAC and EMEA within minutes of each other)
  • Odd/new web browser login (e.g. org has standardized on Chrome but an Edge login occurs)
  • New operating system (e.g. login from Windows when the user is assigned macOS)
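
The account-compromise indicators above are mechanically checkable. Impossible travel, for instance, reduces to comparing the distance between two login locations against the time between them. The snippet below is a minimal sketch (not Code42 or Exabeam code), assuming each login carries a Unix timestamp and a geolocated latitude/longitude, and using an assumed ~900 km/h airliner speed as the threshold:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def is_impossible_travel(login_a, login_b, max_speed_kmh=900.0):
    """Flag a pair of logins whose implied travel speed exceeds a plausible
    airliner speed. Each login is (timestamp_seconds, latitude, longitude);
    max_speed_kmh is an assumed threshold, not an industry standard."""
    (t1, lat1, lon1), (t2, lat2, lon2) = sorted([login_a, login_b])
    hours = max((t2 - t1) / 3600.0, 1e-9)  # avoid division by zero
    speed = haversine_km(lat1, lon1, lat2, lon2) / hours
    return speed > max_speed_kmh
```

In practice the threshold needs tuning, since VPN egress points and mobile carrier gateways routinely make a user appear to "teleport" without any compromise involved.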

Indicators of potential user compromise (i.e. the person, not the account, is the risk):

  • Off-hours work
  • Frequent job searches and/or resume editing
  • Forwarding work Outlook to personal email
  • Sharing files with personal accounts
  • Accessing files typically out of work-scope (e.g. team or corporate strategy docs)
  • “Working” while on vacation (in one instance performing “database maintenance” while on vacation in the Maldives)
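
Several of these are also mechanically checkable. As a rough illustration of the mail-forwarding indicator, the sketch below flags mailbox rules that forward to a domain outside an assumed corporate list; the rule shape and domain names are hypothetical, not a real Outlook or Graph API:

```python
# Hypothetical corporate domains; in practice this list comes from IT.
CORPORATE_DOMAINS = {"code42.com", "exabeam.com"}

def risky_forwarding_rules(rules, corporate_domains=CORPORATE_DOMAINS):
    """Return (rule name, address) pairs for rules forwarding mail
    to an address outside the corporate domains."""
    flagged = []
    for rule in rules:
        for addr in rule.get("forward_to", []):
            domain = addr.rsplit("@", 1)[-1].lower()
            if domain not in corporate_domains:
                flagged.append((rule["name"], addr))
    return flagged
```

A real deployment would pull rules from the mail platform's admin API and correlate hits with the other indicators above rather than alerting on any single one.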

3) Beer and Outcomes

This one sounds like a stretch, but Chris Tillett brought up a good analogy on the stream. With beer, there are countless varieties. If a brewer were to try to make all possible varieties at once, the outcome would be a disaster. The same is true of your Insider Risk Management program. Begin by defining the specific outcomes you are looking for so that you can tailor remediation tactics to those outcomes, rather than attempting to brew a saison, a pilsner, and an IPA all at once.

If you focus on the outcomes you are looking for, and make sure that they’re meaningful to the business (not just some flashy dashboard) then you’re going to be on the right path.

Talking about Insider Risk is apparently thirsty work. For the full readout of what we discussed in the session check out the transcript below or watch the video here:

Tune into the next session of Code42 Live on July 13 at 12PM CT to join a discussion on how to automate response to Insider Risk. For anything else, feel free to reach out to Samantha, Chris, Mark, or me on LinkedIn and we’d be happy to connect.


Riley Bruce – Hello, everyone and welcome to Code42 Live or as I should say, maybe; Live from a variety of basements and offices around the globe this time. We’ve got a great show today with Sam Humphries and Chris Tillett from Exabeam, as well as Mark Wojtasiak from Code42, where we’re gonna talk about how to properly prioritize insider risk. If you’re following us along on LinkedIn or YouTube live, feel free to throw your questions and comments in the chat, and we will try and address those as they come up. But first things first, what is this thing and who are these people? So I’m gonna throw quickly to each of y’all in turn. And if you can just tell us who you are, kind of what your focus is and maybe a fun fact. And Sam, since you’re on the top of the screen, I’m gonna go with you first here.

Samantha Humphries – I love, as a presenter, being thrown in at exactly the last minute, so cheers for that. So I’m Samantha Humphries, I’m the head of security strategy for EMEA at Exabeam. I spend a lot of time talking too much, trying to help people understand the risk in their environments and solve some of those challenges; it is Not Easy™. My fun fact is when I was six years old, I met Darth Vader in a shop and I still haven’t got over the terror of that day, okay.

Riley Bruce – Wow, I met Kylo Ren once and that was similarly terrifying. So I can only imagine meeting Vader at that age would not be the easiest thing to deal with emotionally.

Samantha Humphries – He’s pretty tall, well, he was pretty tall, so at six years old I was terrified. The therapy is working.

Riley Bruce – So next up, Chris, you get to follow that.

Chris Tillett – Yeah, so I’m Chris Tillett, I’m a senior security engineer. I essentially help my customers with a lot of their security challenges. I help them focus on outcomes versus just boiling the ocean and hoping for success. Insider threat is one of my favorite subjects to talk about; I myself have been a victim of insider threat, and I’ve spent many years working on it in fraud departments and things of that nature. Fun fact about me is that I’ve actually broken all of my fingers playing sports, broken my ankles multiple times, and now I have two 20-month-old twins and my wife said, I have a feeling we’re gonna be visiting the hospital quite a bit. And I’m like, yeah, yeah. We can see it already, they’re climbing all over everything. So they’re gonna be just like their dad.

Riley Bruce – Yeah, wow, that is very impressive with regard to the number of broken bones. I have broken a grand total of two in my entire life, so well done, I guess. So let’s get maybe an F in the chat for our friend Chris’s bones. And if you want to throw how many bones you’ve broken in the chat and see if you can beat him, go for it. Next up Mark, tell us a little about you.

Mark Wojtasiak – I’m Mark Wojtasiak, AKA Woj, as people call me. You can probably tell why from the spelling of the last name. I lead security product research and strategy at Code42. I spend a lot of my time, similar to Sam, talking to security leaders, trying to understand, and Chris, I love the word outcomes, ’cause that’s a lot of where the discussions we have go and what we’re focused on. Fun fact, there’s nothing fun or factual about me. No, I’m just kidding. I spent my COVID summer writing a book that sits over my shoulder. I think all my fun facts are behind me. Hey look, oh, I’m a Metallica fan, seen them 11 times, giant Cubs fan, Bulls fan, Bears, Blackhawks, anything Chicago. That’s about it, no broken bones, not a big risk taker.

Riley Bruce – Well, and it makes sense that you’re focused on insider risk, I guess. So thank you all for joining us today for this session. Like I was saying, for anybody who’s joining us live, please throw your questions and comments in the chat of the platform that you are on. And today I need to talk quickly about who I am because I need to introduce me. My name is Riley Bruce. I’m the security community evangelist here at Code42. I’ll be moderating the discussion today and it’s my job to make sure that y’all members of the insider risk community, get the information that you need to be able to accurately and adequately tackle the problem of insider risk within your organizations. So hopefully that is what we’ll be doing today. And my fun fact is that I got to do effectively a fire drill right before this stream and move up 21 floors from where I had intended to do this due to network difficulties. So that’s always fun with a capital F as it were. So with that said, I’m gonna kick it off with our first question, our first topic for the day. And we are talking about prioritization of insider risk. And the first question is what data do organizations need to collect to be able to begin that prioritization? And I guess, I threw to Sam first, the first time, so Chris you’re in the hot seat first this time, it’s only fair.

Chris Tillett – That’s no problem. So yeah, one of the key things of any insider threat program, and again, it’s not a single activity, you’re building out a program, which is a living, breathing apparatus, the first thing is understanding what data you need to protect, right? So data classification is a key part of that program. Part of classifying data is also identifying the data that you don’t want, where the cost and the risk to the business isn’t worth keeping it. I know we always had this idea that we’ve got to hoard all the things and do all the things, but in reality, if you really look at the business, it’s about managing risk, which is all we do in cyber. We manage risk for the business: which information is key to keep the shareholders happy, to keep the employees going, and to make sure the business survives? Anything else, you should do your best to discard.

Riley Bruce – For sure. And Sam, if you wanna follow that up and then I’ll have Mark bring it home for this question.

Samantha Humphries – I mean, obviously a lot of what Chris said, all of what Chris said. And I don’t wanna use the dreaded three letters, DLP, but if you’ve ever lived through that, it’s some fun times. Yeah, you’ve got to know what you’ve got. That’s the first thing. The biggest struggle in any form of security is always the visibility piece. So if you don’t know what devices you’ve got, and you don’t know where your stuff lives, I think you’ve got a bigger problem than just insider risk at that point. So it always starts with the visibility side of things, and until you’ve got that fixed, I think you’re gonna struggle. On top of that, I think some of the data you need to think about with insider risk is similar to standard security operations, but there may be some other, different things you wanna look at as well, depending on the types of insider risks that you’re worried about, ’cause this has gotten more complex over time. One of those things is understanding intent, and you may need to look at some slightly different data, or at least represent it in a different way, compared to how you run standard security operations.

Riley Bruce – I wanna come back to that really quick, because you mentioned some types of risk and indicators of intent. So I will go to Mark and then maybe come back for a follow up quickly on that. So Mark, go for it.

Mark Wojtasiak – Yeah, I’ll echo or piggyback on a lot of what Chris and Sam said. And I think we might get into this in the conversation, but if the question is what needs to be collected to properly prioritize risk, we often start with: what risks does the business prioritize? What is tolerated, what isn’t tolerated. Sam brought up that it’s not just about malicious insider intent; it could be good, could be bad, could be indifferent the way we see it, and risk is risk, especially when it comes to corporate data. So when we think about how you prioritize it, what data is at risk echoes Chris’s point about what the organization cares about, goes to Sam’s point about visibility, and I think a lot of what Code42 and Exabeam do is fill in the missing piece, and I’m sure we’re gonna get into it quite a bit: context, right? What data do you collect that will provide the analyst the context they need to prioritize what they need to respond to, what they need to investigate? And that is my ring to stop me from talking. Is that a new feature of Code42 Live, Riley? That when I hear that ring I stop talking?

Riley Bruce – I am gonna start that, I’ll get like a Pavlovian bell so that hopefully you’ll start salivating or something at some point.

Chris Tillett – I apologize, I had my Signal client running and a coworker responded.

Mark Wojtasiak – That’s why I had to go. No need to apologize.

Riley Bruce – No worries.

Mark Wojtasiak – But I can ramble, I can talk, so when Sam said she’s a talker, I’m like, this is perfect. We’ll get one question in an hour, it will be great.

Riley Bruce – Well, actually going back to the types of risk and Mark I will keep you in the hot seat and then we can shift over to Sam ’cause you kind of touched on, there’s malicious risk then I would imagine there’s like accidental risks. Like, what are the different categories that folks should be looking at when starting to do this type of prioritization?

Mark Wojtasiak – Well, I think, I mean, intent is one type of context, right? So we say okay, employees are gonna introduce risk to the organization, whether it’s good intention, bad intention, or indifferent, they’re just negligent, right? The accidental, whatever it might be. And we have tons of examples of those types of employees or personas. But when you look at, and I talked about context, one of the things that we consider, or we look at, is file activity, your file context. So if an employee is changing the file type, so there’s a file type mismatch, like the MIME type doesn’t match the extension, that’s indicative of hiding something, right? Or indicative of maybe malicious intent. If that’s the case, perhaps that raises a red flag: okay, what are they trying to hide? They’re not gonna hide personal information, like contacts or photos or whatnot, but they might be hiding source code, or they might be hiding a strategy document or a roadmap or whatever it might be. That red flag should escalate or prioritize that activity, right? And raise the red flag for the analyst to say, this activity has happened. There are other forms of intent, like indifference, right? I put a public file share out on Google that is a sensitive document, but it’s open to the public. Not malicious, just kind of indifferent about it. That’s maybe a lower risk than a malicious insider, but still risk, and maybe the prioritization of that alert takes a different frame of mind or a different response. So yeah, it can differ. You can identify intent through file activity, user activity, and I’m sure Exabeam can talk quite a bit about that, or actual vector activity, where the file is going.
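
The file type mismatch Mark describes, where a file's leading magic bytes disagree with what its extension claims, can be sketched roughly as below. The signature table is a tiny illustrative subset; a real detector would use a full content-inspection library rather than this handful of entries:

```python
import mimetypes

# A few well-known magic-byte signatures; illustrative only. Note that
# Office formats like .docx are ZIP containers, so a fuller detector
# must special-case them to avoid false mismatches.
SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "image/png",
    b"%PDF-": "application/pdf",
    b"PK\x03\x04": "application/zip",
}

def sniffed_type(data: bytes):
    """Guess a MIME type from leading magic bytes, or None if unknown."""
    for magic, mime in SIGNATURES.items():
        if data.startswith(magic):
            return mime
    return None

def type_mismatch(filename: str, data: bytes) -> bool:
    """True when the extension-implied type and the sniffed content type
    both resolve and disagree, e.g. a PDF renamed to .txt."""
    claimed, _ = mimetypes.guess_type(filename)
    actual = sniffed_type(data)
    return claimed is not None and actual is not None and claimed != actual
```

As Mark notes, the mismatch itself is only a red flag; what matters is what kind of file is being disguised and where it is headed.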

Riley Bruce – That’s actually where I’m gonna chime in here, because we did get a question from the chat about what are some indicators of intent when it comes to things like behavior. So, Sam, I’ll go to you first and then Chris, I’m guessing you probably have something you’d like to chime in on that as well. So Sam go for it, what are some indicators of intent to do something?

Samantha Humphries – Right, so it can be a bunch of things. If you’ve got your compromised insider, who’s still nonetheless an insider, you can pick up things like, all of a sudden Sam’s done impossible travel. She was in Aylesbury this morning, and she’s in Singapore at lunchtime, not gonna happen. Then adding some other bizarre things that are not normal for Sam, such as a different browser, a different device, a different ISP, those kinds of things, accessing different types of machines, accessing different data, that can show that actually it’s not Sam necessarily, but Sam’s account that has been pwned. If we flip that to someone who’s a malicious insider, all of a sudden you see somebody logging in at a different time of day. They might be accessing what they normally access but doing it at a different time of day to their normal working hours, maybe pulling some slightly different reports, some of those types of activities. So if you have something that can show you a normal baseline for those user accounts, you can start to pick up on these nuances and basically come up with a risk score that will ultimately add up to something you’d potentially want to go look at. I’m sure Chris has got more on this to share, so I don’t want to steal all the thunder.
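
Sam's baselining idea, scoring a login by how many of its attributes are new for that user, can be sketched very simply. The attribute names and weights below are arbitrary illustrations, not either vendor's actual model:

```python
from collections import defaultdict

# Assumed weights per never-before-seen attribute; illustrative only.
WEIGHTS = {"browser": 10, "isp": 15, "device": 20, "country": 25}

class UserBaseline:
    """Accumulate attribute values seen for one user; score logins by novelty."""

    def __init__(self):
        self.seen = defaultdict(set)  # attribute name -> values seen before

    def observe(self, login):
        """Fold a known-good login (a dict of attributes) into the baseline."""
        for attr, value in login.items():
            self.seen[attr].add(value)

    def score(self, login):
        """Sum the weights of attributes this user has never shown before."""
        return sum(
            WEIGHTS.get(attr, 0)
            for attr, value in login.items()
            if value not in self.seen[attr]
        )
```

Production systems replace the flat weights with learned per-user and per-peer-group probabilities, but the shape, baseline normal and score the deviation, is the same.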

Riley Bruce – Yeah, Chris, go for it.

Chris Tillett – Well, one of the things that we’ve found a couple of times in working closely with customers is you’ll start seeing a lot of job searches and, at the same time, exactly what Woj and Sam were talking about as far as accessing different files, accessing different reports. And so there are always these small little canaries in the coal mine, so to speak. But a lot of what I had seen, especially during the times of COVID, had been job searches, and then all of a sudden we’re seeing them copy files that they had never even accessed before, or they copied not just the couple of files they’ve used but everything their entire team has access to. And then you see attempts. One of the things we catch a lot is sending or forwarding to personal email domains. So they’ll create forwarding rules in their Outlook. And they may have Outlook connected into things like Salesforce and Workday. So the data that they could potentially forward out to a personal email domain could be severe; it could cause a severe risk to the business. So those are some of the things that we’re observing out in the field. That has certainly caught my customers’ attention when they were looking at it. We had one where we thought there was a compromised insider, and it turned out to be the case. And what it was, is that you had an individual who was in the Maldives doing all kinds of work. And I know that sounds completely insane to say, but I’m like, if I’m in the Maldives, I’m not doing anything for work. Nope, I’m gonna be sitting on that beach, I’m gonna be chilling out, I’m gonna be having a cocktail or two, and I sure as heck am not gonna be doing all this server activity work and database maintenance and all that type of thing. So it’s kind of funny when you can start piecing together the puzzle, so to speak, but a lot of it starts with just small interactions.
And so that’s the kind of hard part about insider threat is that sometimes it’s not gonna be something that’s glaringly obvious, it’s where you’ve got to kind of watch several pieces of behavior to put together.

Samantha Humphries – Yep.

Riley Bruce – Mark, I don’t know if there was anything you wanted to add to that for additional indicators, or we can move on to another question.

Mark Wojtasiak – No, that’s spot on. Sometimes there is the magic bullet, where, from an insider risk indicator perspective, there’s one activity that raises the red flag, but Chris is right, it’s usually a sequence or grouping of activities over a period of time. And that’s where it gets really interesting in terms of file behavior, user behavior, vector activity, like where are files going, what applications are being used. There’s a little bit of art behind the science of stringing those things together, right? I think we have over 61 insider risk indicators now in our product, and they range across file, vector, and user activities. So we could probably spend a whole hour going through all of those, but we’ll get to the next question.

Riley Bruce – I mean, if we really wanted to… no! So the next question that we had kind of queued up is how, obviously if we’re collecting all of this contextual information, if we’re detecting and trying to correlate a whole bunch of things across a lot of the different vectors that y’all have been talking about, how does a SOC team cut through the noise of that alert fatigue? And I think the only person I haven’t gone to first for a question is Chris. So that’s where we’re gonna go first. What’s a good place to start? How do we cut through some of that noise and actually find the signal?

Chris Tillett – Well, so this is where having a program is important, and having individuals that are focused on that. It doesn’t have to be set apart from the SOC, it’s part of the SOC, but it needs to be a program where you’re actually looking at these types of things. It will require things beyond your SIEM, typically. A lot of times people dump all the data in the SIEM, they dump everything in there, they build up these beautiful dashboards and hope that they find it in that manner. And when I talk with larger organizations, as I do here in the Northeast, they’ll tell me that they can only get to their high or medium prioritized alerts and they don’t even bother with the low ones. The problem is that a lot of that intent is on the low side. So how do you do that? You’ve got to have a way to measure behavior. And so that’s gonna require some machine learning, that’s gonna require those types of things: understanding what is normal for this user, and then taking the sensor data that is provided from all these different technologies and understanding, okay, this is how this user normally interacts with all of these different technologies, the cloud, their local machine, their file servers, their Office 365 OneDrives. You need to understand what is normal across all of those so that you can start seeing when they have these small indicators. If it’s just a giant dashboard full of alerts, you’re never gonna catch it, or you’ll catch it after it’s too late. And so like Woj said, I liked what he said, it’s art and science. So you’ve got to have a little bit of science, and you’ve got to have a little bit of art to complement it. One of the biggest things that we tell our customers is to take advantage of the watch list feature and build leaver groups, making sure that you’re looking at those that are leaving the organization. You can put a fine-tooth focus on that, which can actually help train your SOC on things to look for.
So we allow people to go back sometimes six months to a year to see what a user’s activity has been. ’Cause a lot of times, by the time they put in their two weeks’ notice, a lot of the damage was done six months ago. It’s not even 90 days. So being able to look back at those types of indicators is key.

Riley Bruce – For sure, Mark, do you wanna follow that up and then I’ll throw it to Sam?

Mark Wojtasiak – Yeah, I think Chris nailed it with the leavers and the departing employee use cases. There are always use cases to hone in on. But one thing that we talk about as well, in terms of cutting through the noise: when it comes to managing insider risk, too many times organizations put that burden on security alone, when it’s incumbent upon security to partner with lines of business to understand what they tolerate or don’t tolerate, right? I was just in another conversation before this one, talking to a CTO, and they’re like, I don’t tolerate source code going anywhere but this location, right? Or these two locations. I wanna know whenever it leaves or is anywhere else but these locations or these destinations. That’s a very low risk tolerance. So with machine analytics and analyst intuition, Exabeam and Code42 can look for activities or movement of source code and say, okay, according to our CTO, he does not tolerate this being in this location; this is a high priority alert that’s gonna impact the business. The same kinds of rules, that same mix of art and science, can be applied across all the lines of business. Marketing feels a certain way about what they tolerate, and sales, from a customer list perspective: I won’t tolerate downloads from Salesforce going to personal devices. I won’t tolerate downloads from Salesforce, period, at some organizations.
So prioritization isn’t all incumbent upon the analyst having Nostradamus-like intuition; there is some reliance on the lines of business to help define what the business truly will and won’t tolerate, ’cause that’s gonna arm the analyst to put the right things in place, to string the right things together, as Chris and Sam and I were talking about. What are those risk indicators and how do we string them together, whether it’s activity that happened in a single day or, to Chris’s point, what has happened over the last six months? What behaviors happened that are indicative of risk to the organization?

Riley Bruce – So I think Sam, we’ll have you kind of bring it home here with how do we cut through the noise of alerts? And I think we’ve kind of pretty resoundingly said that that’s not the way to think about it anymore. However do you have anything to add here on how to cut through that?

Samantha Humphries – I do, I just want to finish up on the source code thing for a second, ’cause we actually had this with a customer a while ago. Offboarding is a thing that is hard, right? We were talking about watch lists earlier, about leavers, but we had a situation where someone had been offboarded and they still had access to GitHub. So that was not good. Fortunately, they found it through our staff, but it’s very easy to leave just one account sitting there that people have still got access to. So it’s not just somebody leaving; you’ve got to make sure they’re actually gone as well. So anyway, from source code on to prioritization. Pop quiz for the audience: does anyone have 1,736 SOC analysts working in their team? I’ll wait.

Riley Bruce – I’m hearing a resounding no.

Samantha Humphries – Okay, ’cause if you get 100,000 alerts a day, that’s how many you need to be able to go through all of those alerts. So let’s not do that, I think, is the first thing. The numbers came from my friend Rik Ferguson at Trend Micro, and to continue on the Trend theme, I was reading an article today where 77% of analysts recently reported they are not just stressed, they are overwhelmed by the number of alerts they are having to deal with right now. So this is a huge problem. Just on the human level, it’s not good. We’ve had a stressful enough 15 months, thank you very much COVID. But we need to find ways of not just looking at the big scary high-level things, because that’s what you wanna run to. Your first problem is, if you’ve got 1,000 alerts and 200 of them are high priority, which do you pick first? You can’t put them all together and call them one, okay. So I think looking at alerts and understanding other ways of grouping them, being able to tackle multiples at the same time, but also, yeah, as Chris said, not just discounting those kind of low-value (are they?) alerts that come through because you don’t think they’re a problem. If we go back in time in Sam’s time machine to, say, the Target breach, for instance, which we haven’t talked about for ages ’cause there’s been a bajillion breaches since Target, they had indicators in their network at the time that something was wrong, but it was just lost in the noise. So use capabilities like machine learning to help you find commonalities and help people focus on the right things, but don’t just discount anything that looks like low risk on its own. If you can piece those things together, back to canaries in the coal mine, you’re gonna find some of that small, hard-to-detect, low-and-slow type stuff that is often an indicator of an insider.
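
Sam's staffing figure is easy to sanity-check with back-of-the-envelope math. Assuming 8-hour shifts and roughly 8 minutes of triage per alert (our assumptions, not hers), you land in the same ballpark:

```python
# Back-of-the-envelope check of the "100,000 alerts a day" staffing claim.
ALERTS_PER_DAY = 100_000
MINUTES_PER_ALERT = 8          # assumed triage time per alert
SHIFT_MINUTES = 8 * 60         # assumed one 8-hour shift per analyst per day

analyst_minutes = ALERTS_PER_DAY * MINUTES_PER_ALERT  # total triage minutes needed
analysts_needed = analyst_minutes / SHIFT_MINUTES
print(round(analysts_needed))  # on the order of 1,700 analysts
```

Whatever the exact per-alert time, the order of magnitude makes the point: nobody staffs for full manual triage at that volume, so grouping and machine-assisted prioritization are not optional.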

Mark Wojtasiak – Yeah, I think Riley, I want to pile on, can I pile on there? Can you hear me?

Riley Bruce – Yes.

Mark Wojtasiak – No, I can’t pile on? Is Chris gonna ring that bell that tells me to stop talking? I just wanted to piggyback on what Sam said around analyst burnout. It’s just a huge, huge problem. One thing I hear when I talk to security leaders and others is that getting the budget and resources to manage this problem is easier said than done, right? Because there’s this general thinking that all the technology they need is in place to manage it. And it’s funny, because rewind even pre-COVID, COVID was just a catalyst for this. For the last five to 10 years, organizations have been driven largely from the board and C-suite toward speed. Like, we need our employees collaborating, innovating, moving super fast, let’s put in all the technology to help them do that. And you’ve got CISOs left in the corner going, hey, that’s gonna introduce a ton of risk, can I be part of that conversation? Now, that culture of speed, I’m totally for it, I’m that guy, obviously we need that to be competitive. We like to think of ourselves as a collaborative, innovative culture, but there are ramifications of that. And why isn’t the security analyst part of that culture, or considered in that culture? Because the burnout is real from the level of noise and alerts they’re getting. And I think this is gonna get into some of our other questions about metrics and things like that. There’s a genuine sense that security analysts want, like any other employee, to deliver value to the organization, but if their entire day is staring at a screen trying to discern what is meaningful or material and what isn’t, who wants that job, right? So how do we elevate this in terms of helping companies and our customers make the business case to manage this problem? ’Cause the problem’s not gonna go away, it’s just gonna get worse. There are gonna be more vectors.
There’s gonna be more employee turnover and burnout. There are gonna be more leavers, if you will, and we’ve got to get ahead of the problem. And it’s got to start with helping the security analysts and the security team, A, get a seat at the table, but then, B, have the resources and means they need to deliver business value, and help make a business case for that. Off my soapbox.

Riley Bruce – And I wanna double click even on that there’s-gonna-be-more-leavers point, because if we look at the April jobs report from the United States Bureau of Labor Statistics, 4 million people quit their job to go to a new job in April, which is just an astronomically high number. So people are leaving; this surge of leavers is upon us and happening. That’s just further evidence for that point, which I wanted to bring up quickly because, yeah, it’s something businesses need to be prepped for.

Mark Wojtasiak – And the real fear is that a lot of those leavers could be security analysts, who have the same level of burnout as the everyday employee, if not more. So when we talk about employee experience and making a positive employee experience, that’s for all employees, including the security team, right? Including the analysts that are in the trenches, doing a job that, quite frankly, not a lot of people would wanna do. So again, back onto my soapbox, now I’m gonna get off my soapbox. Where is that bell?

Riley Bruce – So going back to something that you just did bring up Mark, and this is actually one of the questions that we were hoping to talk about today is how can a security team start to define and track risk metrics to actually deliver continuous improvement on the risk posture? Because like effectively the question here is where do you start when trying to quantify this risk? And I think I went to Chris first the last time. So Sam, I’m gonna go to you first this time. And then we’ll go to Mark and then Chris just so that we’re kind of always doing a little round robin here.

Samantha Humphries – Okay, this is a tough one, right? Because if you’ve ever lived with a metrics slide, it’s really hard to let go of what you’ve already got. That’s the first challenge, right? And my gosh, do we love counting things in security. We wanna tell you how many of all the things we’ve got, and the graph goes up. That’s generally good, but with risk, you kind of want it to go down, right? And that in itself is a new mind shift. But again, it’s back to what we said at the beginning: if you understand what you’ve got, that’s your first stop. So understand that and track just what you have and what you care about, and it’s hard. The hardest thing always in SecOps is the negative, right? How do you track a negative, so you know what didn’t happen as well as what you see? Solving that is a whole other conversation, but the question is: are we driving down risk in the environment? And the answer isn’t “if I ignore it, it’s not there.” As a side thing, I had a great conversation with someone about GDPR in the early days of that, and they said, well, if I don’t know I’ve been breached, I don’t have to report it. That is not the answer, friends, do not do that. Ultimately, there’s two parts to this. There’s tracking risk from a security point of view, and this is where you get into the nitty gritty of the operations side of things: the actions you’re taking, the things that you’re solving. But then there’s also taking this out to the business. And fortunately, the business talks in the language of risk. This is probably one of the few types of metrics that we can take reasonably well outside of the SOC without just going, hey, I’ve got all these phishing emails, great. And they’re like, is that good or bad? Who knows.
You want to be able to show upwards to senior leaders what you’re doing, and also actually show return on investment, because that’s another challenge with security metrics, I feel: we are not very good at showing the decreasing of risk, be it making sure that we’re shutting down accounts as people leave, or how well we’re handling these incidents. We’ve got mean time to respond, mean time to detect, all the things we’re never gonna quite be happy getting down to seconds, but I love them. Flipping those with a risk lens, I think, is a good thing to be doing generally in security operations, and very much so around insider risk.
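Sam’s mean time to detect and mean time to respond roll up mechanically from individual incidents. A minimal sketch (the incident records and their layout below are hypothetical; no specific tooling or schema was discussed in the session):

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records: (occurred, detected, resolved) timestamps.
incidents = [
    (datetime(2021, 6, 1, 9, 0), datetime(2021, 6, 1, 11, 0), datetime(2021, 6, 1, 17, 0)),
    (datetime(2021, 6, 3, 8, 0), datetime(2021, 6, 3, 8, 30), datetime(2021, 6, 3, 12, 30)),
]

def mttd_hours(records):
    """Mean time to detect: average gap between occurrence and detection."""
    return mean((d - o).total_seconds() / 3600 for o, d, _ in records)

def mttr_hours(records):
    """Mean time to respond: average gap between detection and resolution."""
    return mean((r - d).total_seconds() / 3600 for _, d, r in records)

print(f"MTTD: {mttd_hours(incidents):.2f}h, MTTR: {mttr_hours(incidents):.2f}h")
```

Flipping these with the risk lens Sam describes simply means tracking the same two numbers per reporting period and asking whether they trend down.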

Riley Bruce – Mark, I think that was a great place to start here when talking about that. But do you have anything to follow that up with?

Mark Wojtasiak – Yeah, I think I may have said this on the last LinkedIn live. I’m gonna say it again. Or Code42 Live, sorry. It’s interesting: human beings make decisions based on three factors, time, risk, and reward, right? How much time does it take me? Is there any risk in this? And then, what’s in it for me in the end? But employees, which yes are human beings, in general make decisions on two of those: time and reward. So how do they begin to factor risk into decision-making, right? We look at insider risk posture, and any organization can measure it in any number of ways, but we look at things like shadow IT usage: what percentage of the employee base is using unsanctioned applications on a daily basis? Right now it’s probably 70%, 80%; those are the numbers I think we had in our last report. What percentage of departing employees are taking data with them when they leave? You can break it down by department, you can look at it organization-wide, but what are the different ways you can measure overall risk awareness across the culture, right? And how do you begin to help employees factor that into the equation? ’Cause I would love a company to say, okay, day one, let’s look at your risk posture. 80% of your employee base is using unsanctioned applications. X percent of them are uploading files via a web browser. You’ve got thumb drive use. You’ve got a whole number of different ways to measure risk.
Then implement a program, to Chris’s point: partner with the lines of business, put a program together, do your prioritization models and your responses and your controls, and then measure it again in 30 days, 60 days, 90 days, 120 days, 180 days. Measure it monthly, measure it quarterly, because at the end of the year, you’d love to go into the C-suite or the board with: okay, since implementing our insider risk program, we have reduced shadow IT usage by X percent, which in itself reduces risk. Our departing employee program, which falls under our insider risk program, has now lowered the threat of corporate data theft by departing employees by X percent. Or: here is where risk isn’t flattening, or is increasing; we’re seeing increased use here, therefore we need more resources, or we need to tackle it this way, or here’s a proposal. It’s an inherent business case. So when you think about insider risk metrics, you gotta think about the things that are indicative of risk. Shadow IT is an easy one, departing employees, that sort of thing; start there. And then what you’ll begin to discover is that there are a bunch of different ways to measure this insider risk, a bunch of different ways to track it, a bunch of different things to focus on, use cases that emerge, et cetera. Begin reporting on that, gathering that data and feeding it upstream.
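Mark’s day-one baseline, then remeasure-and-report cycle can be sketched in a few lines. The snapshot data and field names below are entirely hypothetical (not anything from Code42’s product); the sketch just shows how a shadow IT rate and its percent reduction might be computed for that board slide:

```python
def shadow_it_rate(employees):
    """Fraction of employees seen using unsanctioned apps in the period."""
    flagged = sum(1 for e in employees if e["unsanctioned_apps"])
    return flagged / len(employees)

def pct_reduction(baseline, current):
    """Percent reduction from the day-one baseline, for upstream reporting."""
    return (baseline - current) / baseline * 100

# Hypothetical snapshots: day one of the program vs. 90 days in.
day_one = [{"unsanctioned_apps": ["wetransfer"]}] * 8 + [{"unsanctioned_apps": []}] * 2
day_90  = [{"unsanctioned_apps": ["wetransfer"]}] * 5 + [{"unsanctioned_apps": []}] * 5

baseline, current = shadow_it_rate(day_one), shadow_it_rate(day_90)
print(f"Shadow IT: {baseline:.0%} -> {current:.0%}, "
      f"reduced {pct_reduction(baseline, current):.0f}%")
```

The same two-function pattern works for any of the other indicators Mark lists (departing-employee exfiltration, thumb drive use) once you pick what counts as "flagged" for that indicator.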

Riley Bruce – I think that that’s a very important point, Mark. And I wanna give Chris some time here to follow up on the question and then Sam, I’ll go back to you. I have a hunch there’s something you’d like to say to kind of close that out.

Chris Tillett – Yeah, so I like beer. And when I look at this-

Samantha Humphries – You never said this.

Chris Tillett – And I like all different types of beer, right? So I’ll go from a sour to a stout to an IPA, and I’m doing pilsners now that it’s hot where I live. But the thing is, each one of those beers is an outcome, right? I wanna make a porter, so I’m gonna go get the ingredients necessary to make a porter. What we do in cyber is we take all the ingredients, dump them into a giant bin, and hope that everything comes out well. That’s why we’re all overwhelmed: how do I solve this cyber threat thing? What do I do? How do I do it? Take a use case, like Woj said, of leavers, all right? So we need to talk to HR, we need to talk to IT: how are we doing our leavers program? Take one beer and focus on that and make it really good and tasty. Now build on that success. You can have metrics based on that beer and go, look, we made a really good IPA, it’s making us a lot of money, it’s helping your organization manage risk. Starting to get a little cool out? Let’s start making some stouts. And then you go to your next one. That allows you to actually measure something versus just a bunch of barf alerts on the dashboard. I love going into SOCs and asking, what does that mean? What is that thing? It’s red, why is it red? And we built it to make ourselves feel better, ’cause in 2013 that was what was cool to do. I feel like in many cases, especially where I work, there’s a lot of people that get emotional about their cybersecurity tools; you see stickers all over their laptops ’cause they’ve gone to the conferences. The way I’ve always viewed it is, I never wanna marry my success to, or measure success based on, a technology. I’m gonna measure my success based on: can I drink this beer and it tastes like an IPA? Can I drink this one and it tastes like a stout? That’s how you’re gonna have success, even in larger organizations.
It’s a lot harder to do in a large organization, but in many cases you can start with the people that have to manage this. It reminds me of going back to my Novell days. I know I’m dating myself here, but back in the day, when we did Novell, we had what you’d call workgroup contacts. Those workgroup contacts were who I relied on to say, hey, who needs access to what data? And by having that intimate relationship with the actual department, versus always being the department of No, which is all cybersecurity is known for, and having more of an integrated program with each of those business units, they can start telling you indicators. We’ve actually had customers where their own business departments say, hey, I need you to put Bob on a watch list, he hasn’t been acting right lately. You see, the problem with cyber is we wanna measure everything with, like Sam said, these insane metrics and things like that. But the people I talk to who most get insider threat are people that were forensic examiners in police departments or the FBI, people that have had to go, okay, so tell me when Bob started acting weird. When did that start? And from there you can build. So it requires a change in thought process, and that’s a key thing. When I say metrics, to me, I match metrics to outcomes, if that’s simple enough. And yes, I like beer. Wish I was drinking one now.

Riley Bruce – Yeah, I agree with the analogy, and I think that will have to be one of the headlines in the blog that sums this all up: make your outcomes like your beer, or something along those lines. Sam, is there anything you wanted to add? Other than, obviously, do you also like beer?

Samantha Humphries – I have been known to drink occasionally, yeah. Oh, I would like an Appletini, like Chris. I don’t know if I can follow the beer analogy; it’s just ruined everything. I was going to [Inaudible], and it’s definitely not as cool as beer, but let’s do this: get yourself a beer, get comfy, and then we’ll talk. So, talking about metrics and not just beer, I think another good metric to measure as part of this is your attrition rate, for sure. We’ve talked about the leavers programs, we’ve talked about 4 million people up and moving to a different job in the US. Those attrition rates do have an absolute correlation, I think, with insider risk, be that in the organization broadly but also within the SOC, because that’s where the goodness of the SOC comes from. Yeah, you can have all the cool tools, you absolutely can, and you can have great processes, but you also need that human knowledge of your organization. And you need those smart people; whether you can hire 1,736 people from the FBI to be part of the team, that’s great. I don’t have any people in the FBI. I’m not even from America, who knows. First of all, your tools should be working for your people; I think that’s one of my favorite things in the world. But if you’re not investing in your people across your organization, be that security operations teams [Inaudible] and attrition rates are high, your insider risk level is also gonna be higher because of that. So there is a huge cultural piece to this, and if you’re not looking at that and measuring it, that human aspect absolutely comes into play here. I’ve got nothing else to say about that, I’m sorry.

Riley Bruce – So I think that’s a decent mic drop. And the thing that comes up as a sequel to this question is: we’ve been talking about how we can create some metrics, but how do we make sure those metrics are actually valuable rather than just nerdy or geeky side projects? Chris, I think you kind of spoke to this with your beer analogy, your outcomes analogy, but I’m gonna start with you for this question, and then we’ll go to Mark and Sam.

Chris Tillett – Yeah, I think that’s one of the things I learned, so I’m not coming to people from a bully pulpit perspective. I remember being all about the dashboards and the metrics and all that type of thing. And then I had someone that I was working with steal right out from under our nose. I was focused on the wrong thing. And a lot of what I did not have in place were processes; it really wasn’t a technology that I needed. I needed a verification process for a payment like that; with one in place, that individual would not have stolen $50,000 from my business, but I failed to think that process through. So I think that’s a key piece, and that’s where going to those other business units is helpful. All right, if I’m sitting here in the accounting department, in accounts payable, what does insider risk look like for that? And how can we measure it? Think about somebody whose husband maybe lost his job and they’ve lost their benefits, and this person’s in accounts payable. Are we tracking all new business entities that we’re stroking checks to? ’Cause I’ve actually caught that, where they fire up an LLC and slowly but surely peel off money, and next thing you know, there’s no indicator other than the fact that they started writing checks at $5 at a time, then 10, then 15. So how do we catch that? We have a review of all new vendors that are going through, right? So there are controls you can put in place, and in many cases they’re controls you’re already running; it’s just being able to also add a behavioral intent model on top. Adding all that together, you’re able to put those pieces together. That’s one of the things I’ve noticed in doing investigations with my customers. And the reason why I’m saying alerts just won’t do it for insider threat is because, in one investigation, I’ll have alerts from 15 different security tools.
And now, in typical SOC format, I’m gonna send those out to two different people. To Sam’s point about the human element, maybe one person runs down their alert and finds all this other information. The other individual promised their kids they’d take them to soccer practice at 4:30 PM, and it’s 3:30 and they don’t feel like running it down, so it’s marked as a false positive. But in reality, all 15 were needed to really understand the insider threat and risk. That’s why, by taking it business unit by business unit, you’ll have a lot more success with those measurements versus just what’s sitting in your SIEM.
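Chris’s point that the 15 alerts only make sense read together amounts to pivoting from a tool-centric queue to a per-user view. A minimal sketch of that pivot; the alert records, tool names, and field names are all hypothetical, not any vendor’s schema:

```python
from collections import defaultdict

def group_by_user(alerts):
    """Pivot tool-centric alerts into a per-user view, so many weak
    signals from different tools can be read as one story."""
    by_user = defaultdict(list)
    for a in alerts:
        by_user[a["user"]].append((a["tool"], a["signal"]))
    return by_user

# Hypothetical alerts that look like noise when triaged one at a time.
alerts = [
    {"user": "bob",   "tool": "dlp",  "signal": "large upload to personal cloud"},
    {"user": "bob",   "tool": "edr",  "signal": "usb mass storage mounted"},
    {"user": "bob",   "tool": "siem", "signal": "login outside working hours"},
    {"user": "alice", "tool": "dlp",  "signal": "email attachment to self"},
]

for user, signals in group_by_user(alerts).items():
    tools = {t for t, _ in signals}
    print(f"{user}: {len(signals)} signals across {len(tools)} tools")
```

Three separate tools each see one unremarkable event for bob; only the grouped view shows a pattern worth an analyst’s time, regardless of who has soccer practice at 4:30.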

Riley Bruce – Mark, you’re up. How do we make it so that it’s not just a side project and that it’s meaningful to the business?

Mark Wojtasiak – I’m a market research person, so I’m guilty of this too; the problem applies whether it’s security, or marketing and market research, or you name it. I’m guilty in the past of going into an executive meeting with all kinds of really cool charts and graphs, right? Look at how cool the numbers are; I can make a chart look as beautiful as possible. And that’s a lot of times what we do in security: look at how beautiful this chart is, right? And there’s no insight, so the business leaders will say, oh, great market research, so what? Why do I care? Why does this matter? Why is this important to me? What insight are you gleaning from that beautiful chart or graph? What are the three bullets on the slide that you wanna communicate, and why do they matter to that accounts payable department or that sales leader or whomever it may be? That’s how you measure for the business rather than being nerdy or geeky. So yeah, you’ve got to glean insights from the data that make business sense. Put it into their terms so that they understand the risk being introduced to their projects. In organizations, there’s always gonna be risk, and we talked about risk tolerances, right? But imagine a scenario; think of some of the things that happen on a frequent basis at an organization. We talked about leavers, but here’s another one: product launches. There’s a cross-functional team working on a product launch. It might be game-changing technology, and it’s been worked on for the past 12 months. When should the CSO and that team know that this product is launching? How far ahead of the launch should they be pulled in? Is it 90 days? Is it six months? I would argue they should be at the table; they should know that product is launching in November of 2021.
Well, guess what: we’re gonna start monitoring for any file activity or privileged user activity that is relative to that launch, because we don’t want that launch to be compromised. We don’t want a competitor to get the information. We don’t want it out in the media. Product launches are one example. What happens pre-IPO, or during a merger and acquisition? There’s all kinds of things happening inside an organization that have business impact where, let’s face it, security typically isn’t brought into the fold until either right before it happens or never, and then they’re held accountable when the you-know-what hits the fan. Those are the business conversations that need to be happening, right? Getting more proactive about insider risk indicators, and managing that risk before it has the impact, rather than coming in with a pretty chart that says, look what happened to us over the last six months.
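Mark’s pre-launch monitoring window can be expressed as one small predicate over event dates. Everything here is illustrative: the 90-day lead time, the event records, and the field names are assumptions for the sketch, not a prescribed policy:

```python
from datetime import date

def in_sensitive_window(event_day, launch_day, lead_days=90):
    """True if an event falls inside the pre-launch monitoring window."""
    return 0 <= (launch_day - event_day).days <= lead_days

# Hypothetical launch date and file-activity events on launch materials.
launch = date(2021, 11, 1)
events = [
    {"user": "eve", "file": "launch-deck.pptx", "day": date(2021, 9, 15)},
    {"user": "dan", "file": "launch-deck.pptx", "day": date(2021, 4, 2)},
]

flagged = [e for e in events if in_sensitive_window(e["day"], launch)]
print([e["user"] for e in flagged])
```

The interesting part is not the arithmetic but the input: security can only set `launch_day` if, as Mark argues, they are told about the launch well before November.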

Riley Bruce – Really quick, Chris, I know that you had something you wanted to add to that, so let’s do that. And then Sam, if you wanna close it out, that will be the end of the stream for the day. So you get the ultimate mic drop.

Chris Tillett – Yeah, so this is a photo; I’m gonna hold it kind of far away. This was a gentleman, a consultant, working on a Metro-North train coming out of New York City, and he was working on the disaster recovery plan for a healthcare organization. I’ll never forget it. Now, there was a place called Beer Table To Go in Grand Central Station, and I used to tell the people there, when I get in, I want one beer that’s crisp and nice and then one that punches me in the face. So I get on this train, and he’s getting off at my same stop. I remember tapping him on the shoulder and going, should you really be working on that right now? And he looked at me, and I think he realized. I’m like, here, have a beer. Shut the laptop and have a beer. And we drank a beer together. So part of the program, too, is that when we’re meeting with people, we also educate them on the reasons why we’re doing an insider risk program. Most people, just like that consultant, wanna do a good job for the organization; he wasn’t offended, and I think he appreciated the tap on the shoulder. So when you go in there with that type of intent from the cyber perspective, we really will work well with those business units. And as Woj said about getting invited to those product launches: we’ll start getting invited. We won’t be viewed as the department of No.

Mark Wojtasiak – Absolutely.

Samantha Humphries – I’m paranoid; I’m a big believer in having a privacy screen, ’cause I remember when we used to travel on trains in the old days. So, going back to how you know your metrics are working, which is all I really want to talk about: I’ve lived this firsthand, where not only do people turn up with pretty charts, but if you dare question the chart, you get the answer, well, it’s what we’ve always measured, or it’s what the executives want to see. For me, at that point, alarm bells go off. If you are showing 100% across your metrics on everything, either you have 1,763 people, which you don’t, or you’re drunk; either way, your metrics are rubbish. Ultimately, if you can’t ask questions of the data to understand why, or you just wanna keep things running at 100% ’cause that looks good, something generally is wrong. We had it in an operations team I was in: the quality was terrible, but they were smashing those time metrics all day long, because that’s what the C-suite wanted to see. No one could dig into the data, no one dared question it; that’s when things are wrong. Data always has to give you answers, I feel. If it’s not doing that for you, you’re looking at the wrong data.

Riley Bruce – And I think that is an absolutely fantastic place to leave it. So thank you, Samantha Humphries and Chris Tillett from Exabeam, and Mark Wojtasiak from Code42, for joining us on this session of Code42 Live. For anybody who joined us live, you can rewatch it on YouTube, LinkedIn, or at And we’ll be back in three weeks at this time to talk more about insider risk. Thank you for joining us, and have a great day, everyone.

If you made it down this far in the article, you have won a prize of some ilk for perseverance. Congratulations and give yourself a pat on the back.
