Episodes

  • Episode 8: Class is in session
    May 26 2021
    When Professor Kathleen Carley of Carnegie Mellon University agreed to talk with us about network analysis and its impact on insider risks, we scooched our chairs a little closer to our screens and leaned right in. In this episode of Uncovering Hidden Risks, Liz Willets and Christophe Fiessinger get schooled by Professor Carley about the history of network analysis and how social and dynamic networks affect the way that people interact with each other, exchange information and even manage social discord.
    0:00 Welcome and recap of
    1:30 Meet our guest: Kathleen Carley, Professor at Carnegie Mellon University; Director of Computational Analysis of Social and Organizational Systems; and Director of IDeaS for Informed Democracy and Social Cybersecurity
    3:00 Setting the story: Understanding network analysis and its impact on company silos, insider threats, counterterrorism and social media
    5:00 The science of social networks: how formal and informal relationships contribute to the spread of information and insider risks
    7:00 The influence of dynamic networks: how locations, people and beliefs impact behavior and shape predictive analytics
    13:30 Feelings vs. facts: using sentiment analysis to identify positive or negative sentiments via text
    19:41 Calming the crowd: how social networks and secondary actors can stave off social unrest
    22:00 Building a sentiment model from scratch: understanding the challenges and ethics of identifying offensive language and insider threats
    26:00 Getting granular: how to differentiate between more subtle sentiments such as anger, disgust and disappointment
    28:15 Staying relevant: the challenge of building training sets and ML models that stay current with social and language trends
    Liz Willets: Well, hi, everyone. Uh, welcome back to our podcast series Uncovering Hidden Risks, um, our podcast where we uncover insights from the latest trends, um, in the news and in research through conversations with some of the experts in the insider risk space. Um, so, my name's Liz Willets, and I'm here with my cohost, Christophe Fiessinger, to dis- just discuss and deep dive on some interesting topics. Um, so, Christophe, can you believe we're already on Episode 3? (laughs) Christophe Fiessinger: No, and so much to talk about, and I'm just super excited about this episode today and, and our guest. Liz Willets: Awesome. Yeah, no. I'm super excited. Um, quickly though, let's recap last week. Um, you know, we spoke with Christian Rudnick. He's from our Data Science, um, and Research team at Microsoft and really got his perspective, uh, a little bit more on the machine learning side of things. Um, so, you know, we talked about all the various signals, languages, um, content types, whether that's image, text that we're really using ML to intelligently detect inappropriate communications. You know, we talked about how the keyword and lexicon approach just won't cut it, um, and, and kind of the value of machine learning there. Um, and then, ultimately, you know, just how to get a signal out of all of the noise, um, so super interesting, um, topic. And I think today, we're gonna kind of change gears a bit. I'm really excited to have Kathleen Carley here. Uh, she's a professor across many disciplines at Carnegie Mellon University, um, you know, focused with your research around network analysis and computational social theory. Um, so, so, welcome, uh, Kathleen.
Uh, we're super excited to have you here and, and would love to just hear a little bit about your background and really how you got into this space. Professor Kathleen Carley: So, um, hello, Liz and Christophe, and I'm, I'm really thrilled to be here and excited to talk to you. So, I'm a professor at Carnegie Mellon, and I'm also the director there of two different, uh, centers. One is Computational Analysis of Social and Organizational Systems, which is, you know, it brings computer science and social science together to look at everything from terrorism to insider threat to how to design your next organization. And then, I'm also the director of a new center that we just set up called IDeaS for Informed Democracy and Social Cybersecurity, which is all about disinformation, uh, hate speech, and extremism online. Liz Willets: Wow. Professor Kathleen Carley: Awesome. Liz Willets: Sounds like you're (laughs) definitely gonna run the gamut over there (laughs) at, uh, CMU. Um, that's great to hear and definitely would love, um, especially for the listeners and even for my own edification to kinda double-click on that network analysis piece, um, and l- learn a little bit more about what that is and kind of how it's developed over the past, um, couple years. Professor Kathleen Carley: So, network analysis is the scientific field that actually started before World War II, and it's all about connecting things. And it's the idea that when you have a set of things, the way ...
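To make the network-analysis idea a little more concrete, here is a minimal sketch, not from the episode, of how a social network of people and their interactions can be represented and scored. It assumes the third-party networkx library; the names and ties are invented examples.

```python
# A minimal sketch of the network-analysis idea Professor Carley describes:
# people are nodes, interactions are edges, and simple centrality measures
# hint at who sits on the informal paths that information travels.
# Assumes the third-party networkx library; names and edges are invented.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("ana", "bo"), ("bo", "cam"), ("cam", "dee"),
    ("bo", "dee"), ("dee", "eli"), ("eli", "fay"),
])

# Degree centrality: how many direct ties a person has.
# Betweenness centrality: how often a person bridges otherwise separate
# groups, a rough proxy for who could spread (or bottleneck) information.
print(nx.degree_centrality(G))
print(nx.betweenness_centrality(G))
```

Structural scores like these are the kind of signal network analysis uses to reason about how information, or discord, moves through formal and informal ties.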
    32 m
  • Episode 7: Say what you mean!
    May 26 2021
    Oh my gosh, I’m dying. Oh my gosh, I’m dying. That’s so funny! And in just three short lines our emotions boomeranged from intrigue, to panic, to intrigue again…and that illustrates the all-important concept of context! In this episode of Uncovering Hidden Risks, Liz Willets and Christophe Fiessinger sit down with Senior Data Scientist Christian Rudnick to discuss how machine learning and sentiment analysis are helping to unearth the newest variants of insider risks across peer networks, pictures and even global languages.
    0:00 Welcome and recap of
    1:25 Meet our guest: Christian Rudnick, Senior Data Scientist, Microsoft Data Science and Research Team
    2:00 Setting the story: Unpacking machine learning, sentiment analysis and the evolution of each
    4:50 The canary in the coal mine: how machine learning detects unknown insider risks
    9:35 Establishing intent: creating a machine learning model that understands the sentiment and intent of words
    13:30 Steadying a moving target: how to improve your models and outcomes via feedback loops
    19:00 A picture is worth a thousand words: how to prevent users from bypassing risk detection via Giphy’s and memes
    23:30 Training for the future: the next big thing in machine learning, sentiment analysis and multi-language models
    Liz Willets: Hi everyone. Welcome back to our podcast series, Uncovering Hidden Risks. Um, our podcast, where we cover insights from the latest in news and research through conversations with thought leaders in the insider risk space. My name is Liz Willets and I'm joined here today by my cohost Christophe Fiessinger, um, to discuss some really interesting topics in the insider risk space. Um, so Christophe, um, you know, I know we spoke last week with Raman Kalyan and Talhah Mir, um, our crew from the insider risk space, just around, you know, insider risks that pose a threat to organizations, um, you know, all the various platforms, um, that bring in signals and indicators, um, and really what corporations need to think about when triaging or remediating some of those risks in their workflow. So I don't know about you, but I thought that was a pretty fascinating conversation. Christophe Fiessinger: No, that was definitely top of mind and, and definitely an exciting topic to talk about that's rapidly evolving. So definitely something we're pretty passionate to talk about. Liz Willets: Awesome. And yeah, I, I know today I'm, I'm super excited, uh, about today's guest and just kind of uncovering, uh, more about insider risk from a machine learning and data science perspective. Um, so joining us is Christian Rudnick, uh, senior data scientist on our security, uh, compliance and identity research team. So Christian, welcome. Uh, why don't you- Christian Rudnick: Thank you. Liz Willets: ... uh, just tell us a little bit about yourself and how you came into your role at Microsoft? Christian Rudnick: Uh, yeah. Hey, I'm Christian. Uh, I work in a compliance research team and while I just kinda slipped into it, uh, we used to be the compliance research and email security team, and then email security moved to another team. So we were all forced to the compliance role, uh, but at the end of the day, you know, it's just machine learning. So it's not much of a difference. Liz Willets: Awesome. And yeah, um, you know, I know machine learning and sentiment analysis are big topics to unpack.
Um, why don't you just tell us a little bit, since you've worked so long in kinda the machine learning space, around, you know, how, how that has changed over the years, um, as well as some of the newer trends that you're seeing related to machine learning and sentiment analysis? Christian Rudnick: Yeah. In, in our space, the most significant progress that we've seen in the past year was moving towards more complex models. More complex models and also more complex ways of analyzing the task. So if you look at the models that were very common about 10 years ago, they basically would just look at words, it's like, uh, a set of words. Uh, so the order of words doesn't matter at all, and that's changed. The modern algorithms, they will look at sen- sentences as a sequence of words and they will actually take the order of the words into account when they run analysis. The size of models has also increased dramatically over the years. So for example, I mentioned earlier that I've worked on email security, and the models that we had shipped, they were often in the magnitude of kilobytes, versus, like, really modern techniques to analyze offensive language, they use deep neural nets, and those models can be sizes of several gigabytes. Christophe Fiessinger: What's driving that evolution of the models? Uh, you know, I'm assuming a, a big challenge, uh, or a big goal, is to make those models better and better to really re- reduce the noise and things like false positives or, or misses. Is that what's driving some of those ...
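As a toy illustration of the shift Christian describes, the snippet below, which is not the team's code, compares a bag-of-words view of two invented sentences with an order-aware view: the bags are identical, so only a model that keeps the word sequence can tell the two meanings apart.

```python
# Older models treat text as an unordered bag of words; modern sequence models
# keep word order. These two invented sentences contain exactly the same words
# but mean very different things, so a bag-of-words view cannot separate them.
from collections import Counter

a = "the manager threatened the employee"
b = "the employee threatened the manager"

bag_a, bag_b = Counter(a.split()), Counter(b.split())
print(bag_a == bag_b)    # True: identical bags of words

seq_a, seq_b = a.split(), b.split()
print(seq_a == seq_b)    # False: an order-aware view can tell them apart
```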
    29 m
  • Episode 6: Cracking down on communication risks
    May 26 2021
    Words matter. Intent matters. And yes, most certainly, punctuation matters. Don’t believe us? Just ask the person who spent the past five minutes eating a sleeve of cookies reflecting on which emotion “Sarah” was trying to convey when she ended her email with, “Thanks.” In this episode of Uncovering Hidden Risks, Raman Kalyan, Talhah Mir and new hosts Liz Willets and Christophe Fiessinger come together to examine the awesomely complex and cutting-edge world of sentiment analysis and insider risks. From work comm to school chatter to social memes, our clever experts reveal how the manifestation of “risky” behavior can be detected.
    0:00 Hello!: Meet your new Uncovering Hidden Risks hosts
    2:00 Setting the story: The types and underlying risks of company communication
    6:50 The trouble with identifying troublemakers: the link between code of conduct violations, sentiment analysis and risky behavior
    10:00 Getting the full context: The importance of identifying questionable behavior across multiple platforms using language detection, pattern matching and AI
    16:30 Illustrating your point: how memes and Giphy’s contribute to the conversation
    19:30 Kids say the darndest things: the complexity of language choices within the education system
    22:00 Words hurt: how toxic language erodes company culture
    26:45 From their lips to our ears: customer stories about how communications have impacted culture, policy and perception
    Raman Kalyan: Hi everyone. My name is Raman Kalyan, I'm on the Microsoft 365 product marketing team, and I focus on insider risk management from Microsoft. I'm here today, joined by my colleagues, Talhah Mir, Liz Willets, and Christophe Fiessinger. And we are excited to talk to you about hidden risks within your organization. Hello? We're back, man. Talhah Mir: Yeah, we're back, man. It was super exciting, we got through a series of a, a couple of different podcasts, three great interviews, uh, spanned over multiple podcasts and just an amazing, amazing reaction to that, amazing conversations. I think we certainly learned a lot. Raman Kalyan: Mm-hmm (affirmative). I, I learned a lot. I mean, having Dawn Cappelli on the podcast was awesome, talked about different types of insider risks, and what I'm most excited about today, Talhah, is to have Liz and Christophe on the, on the show with us 'cause we're gonna talk about communication risk. Talhah Mir: Yeah, super exciting. It's a key piece for us to better understand sort of sentiment of a customer, but I think it's important to kind of understand that on its own, there's a lot of interesting risks that you can identify, uh, that are completely sort of outside of the purview of typical solutions that customers think about. So really excited about this conversation today. Raman Kalyan: Absolutely. Liz, Christophe, welcome. We'd love to take an opportunity to have you guys, uh, introduce yourselves. Liz Willets: Awesome, yeah, thanks for having us. We're excited to kind of take the reins from you all and, and kick off our own, uh, version of our podcast, but yeah, I'm, I'm Liz Willets. I am the product marketing manager on our compliance marketing team and work closely with y'all as well as Christophe on the PM side. Christophe Fiessinger: Awesome. Christophe. Hello everyone, I'm, uh, Christophe Fiessinger and similar to Carla, I'm on the engineering team focusing on our insider risk, um, solution stack. Raman Kalyan: Cool. So there's a, there's a ton, breadth of communications out there.
Liz, can you expand upon the different types of communications that organizations are using within their, uh, company to, to communicate? Liz Willetts: Yeah, definitely. Um, and you know kind of as we typically think about insider risks, you know, there's a perception around the fact that it's used, um, and related to things like stealing information or, um, you know, IP, sharing confidential information across the company, um, but in addition to some of those actions that they're taking, organizations really need to think about, you know, what might put the company, the brand, the reputation at risk. And so when you think about the communication platforms, um, you know, I think we're really looking to collaboration platforms, especially in this remote work environment- Raman Kalyan: Hmm. Liz Willetts: ... where employees, you know, have to have the tools to be enabled to do their best work at home. Um, so that's, you know, Teams, uh, Slack, Zoom, um, but then also, you know, just other forms of communication. Um, we're thinking about audio, video, um, those types of things to identify where there might be risks and, and how you can help an organization remediate what some of those risks might be. Raman Kalyan: Awesome. And Christophe, as we think about communications risk more broadly, what kind of threats do you... have you start seeing, um, organizations being more concerned about? Christophe Eisinger: Yeah, so exactly to what you just mentioned ...
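As a hedged sketch of why the keyword-and-lexicon approach discussed in this series is noisy on its own (this is not Microsoft's implementation; the word list and messages are invented), a naive pattern matcher flags both a genuinely hostile-sounding message and a harmless one, which is the gap that context-aware ML is meant to close.

```python
# A naive keyword matcher over chat messages. Both messages below trip the
# same word list even though only one is plausibly a communication risk,
# which is why keyword matching alone produces so many false positives.
import re

KEYWORDS = re.compile(r"\b(kill|destroy|hate)\b", re.IGNORECASE)

messages = [
    "I will destroy you if you ship that change without review.",
    "This bug is going to kill our demo, let's fix it together.",
]

for msg in messages:
    hits = KEYWORDS.findall(msg)
    print(f"flagged={bool(hits)} hits={hits} text={msg!r}")
```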
    33 m
  • Episode 5: Practitioners guide to effectively managing insider risks
    Sep 21 2020
    In this podcast we explore steps to take to set up and run an insider risk management program. We talk about specific organizations to collaborate with, and top risks to address first. We hear directly from an expert with three decades of experience setting up impactful insider risk management programs in government and private sector. Episode Transcript: Introduction: Welcome to Uncovering Hidden Risks. Raman Kalyan: Hi, I'm Raman Kalyan, I'm with Microsoft 365 Product Marketing Team. Talhah Mir: And I'm Talhah Mir, Principal Program Manager on the Security Compliance Team. Raman: Talhah, episode five, more time with Dawn Cappelli, CISO of Rockwell Automation. Today, we're gonna talk to her about, you know, how to set up an effective insider risk management program in your organization. Talhah: That's right. Getting a holistic view of what it takes to actually properly identify and manage that risk and do it in a way so that it's aligned with your corporate culture and your corporate privacy requirements and legal requirements. Really looking forward to this, Raman. Let's just jump right into it. Talhah: Raman and I talk to a lot of customers now and it's humbling to see how front and center insider risk, insider threat management, has become, but at the same time, customers are still asking, "How do I get started?" So what do you tell those customers, those peers of yours in the industry today, with the kind of landscape and the kind of technologies and processes and understanding we have about the space, what kind of guidance would you give them in terms of how to get started building out an effective program? Dawn: So first of all you need to get HR on board. I mean, that's essential. We have insider risk training that is specifically for HR. They have to take it every single year. So we have our security awareness training that every employee in the company has to take every year, HR in addition has to take specific insider risk training. So in that way we know that globally we're covered. So that's where I started, was by training HR, and that way the serious behavioral issues, I mean, IP theft is easier to detect, but sabotage is a serious issue, and it does happen. Dawn: I'm not going to say it happens in every company, but when you read about an insider cyber sabotage case, it's really scary, because this is where you have your very technical users who are very upset about something, they are angry with the company, and they have what the psychologists called personal predispositions that make them prone to actually take action. Because most people, no matter how angry you are, most people are not going to actually try to cause harm, it's just not in our human nature. Dawn: But like I said, I worked with psychologists from day one, and they said, "The people that commit sabotage, they have these personal predispositions. They don't get along with people well, they feel like they're above the rules, they don't take criticism well, you kind of feel like you have to walk on eggshells around them." And so I think a good place to start is by educating HR so that if they see that, they see someone who has that personality and they are very angry, very upset, and their behaviors are bad enough that someone came to HR to report it, HR needs to contact, even if you don't have an insider risk team, contact your IT security team and get legal involved, because you could have a serious issue on your hand. And so I think educating HR is a good place to start.
Dawn: Of course, technical controls are a good place to start. Think about how you can prevent insider threats. That's the best thing to do is lock things down so that, first of all, people can only access what they need to, and secondly, they can only move it where they need to be able to move information. So really think about those proactive technical controls. Dawn: And then third, take that look back, like we talked about Talhah, take that look back. Pick out just some key people, go to your key business segments and say, "Hey, who's left in the past" I mean, as long as your logs go back, if they go back six months, you can go back six months. But just give me the name of someone who's left who had access to the crown jewels, and just take a look in all those logs and see what you see. And you might be surprised. Talhah: Yeah, and on this look back piece, Dawn, we're actually hearing that from our customers quite a bit in that, the way they kind of frame it is that, "Why don't you give me an idea, with technology, can you give me some sort of an idea that you can look through some of the logs I already have in the system, parse through that, to give me an insider risk profile, if you will, of what's happening, what looks like potential shenanigans in the environment, so I can get a better sense of where I need to focus and what kind of a case I need to make to my executive sponsor so I can get started." So that's definitely something we're thinking ...
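A minimal sketch of the "look back" Dawn describes, with invented log and field names: filter whatever access logs you already retain down to departed employees who touched crown-jewel content before they left, then review what you find.

```python
# Toy "look back": cross-reference an access log (however far back it goes)
# with a list of departed employees and surface their crown-jewel activity.
# The log rows, user names, and path convention are invented for illustration.
from datetime import date

access_log = [
    {"user": "jdoe",   "resource": "crown-jewels/design.dwg",   "day": date(2020, 3, 2)},
    {"user": "asmith", "resource": "public/handbook.pdf",        "day": date(2020, 4, 9)},
    {"user": "jdoe",   "resource": "crown-jewels/roadmap.xlsx",  "day": date(2020, 5, 1)},
]
departed = {"jdoe": date(2020, 5, 15)}   # user -> last day at the company

for row in access_log:
    last_day = departed.get(row["user"])
    if last_day and row["day"] <= last_day and row["resource"].startswith("crown-jewels/"):
        print(f'{row["user"]} accessed {row["resource"]} on {row["day"]} before leaving on {last_day}')
```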
    23 m
  • Episode 4: Insider risk programs have come a long way
    Sep 21 2020
    In this podcast we discover the history of the practice of insider threat management; the role of technology, psychology, people, and cross-organizational collaboration to drive an effective insider risk program today; and things to consider as we look ahead and across an ever-changing risk landscape. Episode Transcript: Introduction: Welcome to Uncovering Hidden Risks. Raman Kalyan: Hi, I'm Raman Kalyan, I'm with Microsoft 365 Product Marketing Team. Talhah Mir: And I'm Talhah Mir, Principal Program Manager on the Security Compliance Team. Raman: Talhah, this is episode four where we're gonna talk about putting insider risk management into practice. Talhah: That's right, with Dawn Cappelli, somebody who's been a personal inspiration for me, especially as I undertook the effort to build the insider risk program in Microsoft. Somebody who I've admired very much for what she's done in this space, an amazing storyteller, and how she lands the value and importance of insider risk. Super excited to have her here with us today to share some of that with our customers abroad. So really looking forward to this conversation. Raman: Yeah and Dawn is the CISO of Rockwell Automation, and I know that this is gonna be great. So let's do it, man. Talhah: Let's do it. Raman: So thank you Dawn for being on our podcast. Talhah and I started this about two years ago at Microsoft, where we started looking at insider risk management in Microsoft 365. Talhah of course had been doing it a lot longer for Microsoft as part of our insider threat group and he talked a lot about you and so we're really excited to have you on the podcast. And the interesting thing is that everyone that we've actually had a conversation with thus far actually knows you. So I'm excited to finally meet you virtually. We met once before, but thank you again and very much appreciate it. Dawn: You're welcome, thank you for the invitation. Raman: Yeah, absolutely. Just for people listening, would be great to get your background, what is it that you do now, how did you get into insider threats, all that sort of stuff? Dawn: Okay, so right now I am the VP of Global Security and the Chief Information Security Officer for Rockwell Automation. We make industrial control system products. I came to Rockwell in 2013 as the Insider Risk Director. So I came to Rockwell to build our Insider Risk Program and at that time not many companies in the private sector had Insider Risk Programs. Financial did, Defense Sector of course, they do counterintelligence, but not many other companies had Insider Risk Programs. I came here from Carnegie Mellon, the CERT program, which for those that don't know, CERT was the very first cyber security organization in the world. It was formed in 1988 when the first internet worm hit and no one knew what it was or what to do about it and Carnegie Mellon helped the Department of Defense to respond. So going back, I actually started my career as a software engineer, programming nuclear power plants for Westinghouse. From there, I went to Carnegie Mellon again as a software engineer, but I became interested in security and CERT was right there at Carnegie Mellon, so I tried to get a job there. Fortunately, they hired me. I didn't know anything about security, but I got a job there as a technical project manager so that I could get my foot in the door and learn security. So I was hired by CERT, CERT is a federally funded research and development center. So it's primarily federally funded.
They had funding from the United States Secret Service to help them figure out how to incorporate cyber into their protective mission. So at this point, this was August 1st, 2001 when I started, the Secret Service, their protective mission was gates, guards, guns. It was physical and they knew they needed to incorporate cyber. So my job was to run this program and the first thing that we had to do was protect the Salt Lake City Olympics, which were in February 2002. So I thought, "How cool is this? I get to work with the Secret Service, protecting the Olympics and I know nothing about security. How did I ever get this job?" And it was very cool. I thought this is the greatest thing. "I can't believe they're paying me for this," but then a month later, September 11th happened and suddenly the Olympics they thought that would be the next terrorist target. And so that cool fun job became a very real, very scary job and when we first went to Salt Lake City to talk to the Olympic Committee about how could a terrorist bring down the network or harm attendees? And someone just, the security experts were looking at network diagrams and trying to figure this out. Someone just happened to say, "So have any network administrators or system administrators left on bad terms?" And they gave us a list of 20 people. So we're like, "Oh my gosh, these 20 people they could get right into this network. They know what all the vulnerabilities are." So we decided we needed an insider ...
    30 m
  • Episode 3: Insider risks aren’t just a security problem
    Sep 21 2020
    In this podcast we explore how partnering with Human Resources can create a strong insider risk management program, a better workplace and a more secure organization. We uncover the types of HR data that can be added to an insider risk management system, using artificial intelligence to contextualize the data, all while respecting privacy and keeping in line with applicable policies. Episode Transcript: Introduction: Welcome to Uncovering Hidden Risks. Raman Kalyan: Hi, I'm Raman Kalyan, I'm with Microsoft 365 Product Marketing Team. Talhah Mir: And I'm Talhah Mir, Principal Program Manager on the Security Compliance Team. Raman: And this is episode three with Dan Costa, talking about how do you bring in HR, legal, privacy, and compliance into building an effective insider risk management program. Talhah: Yeah, super important. This is not like security where you can just take care of this in your SOC alone, you need collaboration, and he's gonna tell us more on why that's critical. Raman: Yeah, it was awesome talking to Dan last week. So I'm, let's do it. Talhah: .... when you talk about these predispositions, these stressors. You gave a great example of an organizational stressor, like somebody being demoted or somebody being put on a performance improvement plan. You can also have personal stressors outside of work that you guys have talked about openly in a lot of your guidance and whatnot. When you look at these, at least the organizational stressors, that a lot of times they reside with your human resources department, right? So this is a place where you have to negotiate with them to be able to bring this data in. So talk to me about that. How do you guide the teams that are looking to establish these connections with their human resources department, the HR department, and negotiate this kind of data so that it's not just for... It's for insider risk management purposes. So talk about that and also talk about, are there opportunities that you see where you could potentially infer sentiment by looking at, let's say, communication patterns or physical movement patterns or digital log-in patterns and things like that? So how can you help to identify these early indicators, if you will? Dan: Yeah. So let's start with how we bridge the gap between the insider threat program and stakeholders like human resources, because Talhah, you're spot on. They're one of the key stakeholders for an insider threat program, really in two respects. One is they own a lot of the data that will allow us to gather the context that we can use to augment or supplement what we're seeing from our technical detection capabilities, to figure out was that activity appropriate for the job role, the responsibility of the individual associated with the activity. How can we pull left relative to an incident progression and find folks that might be experiencing these organizational stressors, right? That's data that our human resources stakeholders have and hold. We've seen insider threat programs over the years struggle with building the relationships between stakeholders like human resource management. A lot of the challenges there, from what we've seen, come down to a lack of understanding of what it is that the insider threat program is actually trying to do. In many cases, the insider threat program isn't necessarily without fault in making that impression stick in the minds of human resources.
So this goes back to the insider threat program's not trying to be duplicative or boil the ocean, or carve off too big of a part of this broader enterprise-wide activity that needs to happen to manage insider risk. In that early relationship building and establishment, there's an education piece that has to happen. Human resources folks aren't spending all day every day thinking about how insiders can misuse their access like we are, right? So much of it is these are the threats that our critical assets are subject to, by the nature of our employees having authorized access to them. We understand that this isn't always the most comfortable subject to talk about, but here's a myriad of incident data that shows where vulnerabilities existed within a human resource process, or a lack of information sharing between HR and IT enabled an insider to carry out their attack or to evade detection for some significant amount of time. So, so much of it just starts with education. Once we've got them just aware of the fact that this is something that the organization has to consider as a part of its overarching security strategy, we need to help them understand the critical role that they play. Understanding how we use contextual information. Understanding how we don't use contextual information and helping them understand what, really, what an insider threat program is designed to do is help them make better data-driven decisions faster by giving them access to analysis that can only be conducted by folks that can take the data that they have ...
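As an illustrative sketch of the point Dan makes about HR-owned context (invented data and scoring, not CERT's tooling): an organizational stressor shared under the appropriate policy can raise the priority of an otherwise identical technical alert, so analysts review the riskier case first.

```python
# Two identical technical alerts; HR context (e.g., a known organizational
# stressor) bumps one alert's priority so it is triaged first. The users,
# signals, scores, and weighting are invented purely for illustration.
alerts = [
    {"user": "u1", "signal": "bulk download", "base_score": 40},
    {"user": "u2", "signal": "bulk download", "base_score": 40},
]
hr_context = {"u2": ["recent demotion"]}   # stressors shared under policy

for alert in alerts:
    stressors = hr_context.get(alert["user"], [])
    alert["priority"] = alert["base_score"] + 25 * len(stressors)

for alert in sorted(alerts, key=lambda a: a["priority"], reverse=True):
    print(alert)
```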
    32 m
  • Episode 2: Predicting your next insider risks
    Sep 21 2020
    In this podcast we explore the challenges of addressing insider threats and how organizations can improve their security posture by understanding the conditions and triggers that precede a potentially harmful act. And how technological advances in prevention and detection can help organizations stay safe and stay steps ahead of threats from trusted insiders. Episode Transcript: Introduction: Welcome to Uncovering Hidden Risks. Raman Kalyan: Hi, I'm Raman Kalyan, I'm with Microsoft 365 Product Marketing Team. Talhah Mir: And I'm Talhah Mir, Principal Program Manager on the Security Compliance Team. Raman: All right, Talhah, episode two. We're gonna be talking about insider threat challenges and where they come from, how to recognize them, what to do. And today we're talking to Dan Costa. Talhah: Dan Costa, that's right, the man who's got basically the brainpower of hundreds of organizations that he works with across the world, and given a chance to talk to him and distill this down in terms of what are some of the trends and what are some of the processes and procedures you can take to manage this risk. Super excited about this, man. Let's just get right into it. Talhah: Looking forward to this very much, man. And today we have Dan Costa. Dan, you want to just introduce yourself, give a little background on yourself and Carnegie Mellon and all that stuff? Dan: Yeah, sure thing. So Dan Costa, I'm the Technical Manager of the CERT National Insider Threat Center here at Carnegie Mellon University Software Engineering Institute. We're a federally funded research and development center solving long-term enduring cybersecurity and software engineering challenges on behalf of the DOD. One of the unique things about the Software Engineering Institute is that we are chartered and encouraged to go out and engage with industry as well, solving those long-term cybersecurity and software engineering challenges. And my group leads kind of the SEI's insider threat research. So collecting and analyzing insider incident data to gain an understanding of how insider incidents tend to evolve over time, what vulnerabilities exist within our organizations that enable insiders to carry out their attacks, and what organizations can and should be doing to help better protect, prevent, detect, and respond to insider threats to their critical assets. Raman: Wow. Nice. That's awesome. Dan, how did you get into this space? Dan: Yeah, so I've been with the SEI since 2011. I came onboard actually to work on the insider threat team as a software engineer, developing some data collection and analysis capabilities for some of our early insider threat vulnerability assessment methodologies. And since 2011, have really gotten a chance to have my hand in nearly every phase of kind of the insider threat mitigation challenges that organizations experience, not only on the government side, but in the industry as well. So since 2011, I've been able to stand up insider threat programs within the government, within industry, help organizations measure their current security posture as it pertains to insider risk, and try to find ways that organizations can collect and aggregate data from disparate sources within their organization that can help them more proactively manage insider risk.
So that's been work, rolling my sleeves up, working with insider threat analysts, spending lots of time with insider threat analysts in the early years, conducting numerous vulnerability assessments and program evaluations, helping organizations explain to their boards and their senior leadership team the scope and severity and the breadth of the insider threat problem, and help folks understand kind of what they already have in place that can form the foundation for an enterprise-wide insider risk management strategy. So I've been very fortunate since 2011 to really have a hand in almost every aspect of insider threat program building, assessment, justifying the need to have an insider threat program in the first place. Obviously since then had a lot to do with actually collecting and analyzing insider incident data, not only what we have access to publicly, but also learning from how we've collected and analyzed data here at the SEI over almost 20 years, and help organizations understand how they can use their own data collection and analysis capabilities to bolster their insider threat programs. Talhah: Awesome. Okay. So Dan, one of the things that Raman and I talked about quite a bit is my own journey in this space. I mean, I haven't been fortunate to be in the space as long as you have, but I remember when I came into this space a couple of years back, one of the first places I turned to was Carnegie Mellon. And specifically, CERT. And one of the places you pointed us towards was this treasure trove of knowledge that you have, that you then sort of complement with the OSIT Group to really drive awareness and learning, cross-learning across different subject ...
    30 m
  • Episode 1: Artificial intelligence hunts for insider risks
    Sep 21 2020
    In this podcast we explore how new advances in artificial intelligence and machine learning take on the challenge of hunting for insider risks within your organization.  Insider risks aren’t easy to find, however, with its ability to leverage the power of machine learning, artificial intelligence can uncover hidden risks that would otherwise be impossible to find. Episode Transcript: Introduction: Welcome to Uncovering Hidden Risks. Raman Kalyan: Hi, I'm Raman Kalyan, I'm with Microsoft 365 Product Marketing Team. Talhah Mir: And I'm Talhah Mir, Principal Program Manager on the Security Compliance Team. Raman: All right, welcome to episode one, where we're talking about using artificial intelligence to hunt for insider risks within your organization. Talhah, we're gonna be talking to Robert McCann today. Talhah: Yeah, looking forward to this. Robert's been here for 15 years, crazy-smart guy. He's an applied researcher, a Principal Applied Researcher at Microsoft, and he'd been like a core partner of ours, leading a lot of the work in the data science and the research space. So in this podcast, we'll go deeper into what are some of the challenges we're coming across, how we're planning to tackle some of those challenges, and what they mean in terms of driving impact with the product itself. Raman: I'm excited. Let's do it. Talhah: Let's get it. Raman: Robert has been focused on the insider risk space for us for, Robert, how long you've been in this space now? Robert: I've been doing science for about 15 years at Microsoft. The insider risk, about a year I think? Talhah? Something like that. Raman: Nice. What's your background? Robert: I am an applied researcher at Microsoft. I've been working on various forms of security for many years. You can see all the gray in here, it's from that. So I've done some communication security, like email filtering or attachment, email attachment filtering. I've done some protecting Microsoft accounts or user's accounts, a lot of reputation work. And then the last few years I've been on ATP products. So basically, babysitting corporate networks, looking to see if anybody had got through the security protections, post breach stuff. So, that's a lot of machine learning models across that whole stack. The post breach thing is a lot about looking for suspicious behaviors on networks or suspicious processes. And then the last year or so, I wanted to try to contribute to the insider threat space. Raman: What does it mean to be an inside ... or to be an applied researcher? Robert: An applied researcher, that's a propeller head. So we all know what propeller heads are. Basically, I get to go around and talk to product teams, figure out their problems, and then go try to do science on it and try to come up with technical solutions. AI is a big word. There's a lot of different things that we do under that umbrella. A lot of supervised learning, a lot of unsupervised learning to get insights and to ship detectors. I basically get to do experiments, see how things would work, and then try to tech transfer it to a product. Raman: So, you said you spend most of your time in the external security space, [crosstalk]- Robert: That's right. Raman: ... things like phishing, ransomware, people trying to attack us from the outside. How is insider threat different? What do [crosstalk] like to be, "Wow, this isn't what I expected," or, "Here are some challenges," or, "Here's some cool stuff that I think I could apply." Robert: Yeah. It's a very cool space. 
Number one, because it's very hard from a scientist's perspective, which I enjoy. So the first thing that you hit on, that's really the sort of fundamental first thing that makes it hard is that they're already inside. They're already touching assets. People are doing their normal work and the insider threat might not even be malicious. It might be inadvertent. So it's a very challenging thing. It's different than trying to protect a perimeter. It's trying to sort of watch all this normal behavior inside and look for any place that anybody might be doing anything that's concerning from an internal assets perspective. Raman: So when you think about somebody doing something challenging, is it just like, hey, I've downloaded a bunch of files. Because today I might download a bunch of files. Tomorrow, I might just go back to my normal file thing. But if I look across an organization the size of a Microsoft, that's 200,000 people. That could probably produce a lot of noise, right? So how do you kind of filter through that? Robert: So actually, the solutions that are right now in the product and what we're trying to leverage to improve the product are built on a lot of AI things. There's very sophisticated algorithms that try to take documents and classify what's in those documents, or customers might go and label documents, and then you try to use those labels to classify more documents. There's a lot of very sophisticated, sort of deep learning...
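A toy baseline for the noise problem Raman raises, not the product's algorithm: compare each user's activity today against that same user's recent history and only surface days that fall far outside their normal range. The per-user download counts below are invented.

```python
# Per-user anomaly check: flag today's file-download count only when it is
# several standard deviations away from that user's own recent baseline,
# so one busy day for a normally busy user does not generate an alert.
from statistics import mean, pstdev

history = {                      # invented daily download counts, oldest first
    "alex": [3, 5, 4, 6, 2, 4, 5],
    "sam":  [2, 3, 2, 3, 3, 2, 250],
}

for user, counts in history.items():
    baseline, today = counts[:-1], counts[-1]
    mu, sigma = mean(baseline), pstdev(baseline) or 1.0
    z = (today - mu) / sigma
    if abs(z) > 3:
        print(f"{user}: {today} downloads today vs typical ~{mu:.1f} (z={z:.1f}) -> review")
```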
    30 m