There are potential problems associated with this approach. First, we have false positives: we may tag something as being a certain thing when in fact it's not, and misidentify it. We have to figure out how to weed out the false positives, because they get in the way of identifying legitimate vulnerabilities.

We may have crash problems, as we talked about. The system may fall prey to crashing if we're not careful about the nature and the strength of the particular vulnerability assessment we're doing. A full-scale vulnerability assessment, firing all sorts of tests and port maps against a system, can actually crash a production system, and we want to make sure we're aware of that.

And then we have temporal information, information related to time. There may be timeliness involved. Is that vulnerability still there after we've run a patching cycle? It may or may not be, and we have to understand that. When was the vulnerability discovered? How long has it been exposed? These are things we'd want to be aware of as well. It may be an old vulnerability that's no longer in the system, in other words, and we need to know that.

When we think about host scanning with regard to vulnerability assessments, we have to think about the fact that the systems we're going to target are the systems other people would likely target as well. An internal system, one that is never going to be connected to by anybody outside the organization, may not be all that valuable for us to spend a lot of time assessing, because most people are not going to be able to connect to it. But a system that sits in the DMZ, that is publicly exposed and that anybody can get to, is incredibly important for us to run a vulnerability assessment against.
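To make the prioritization and temporal points concrete, here is a minimal Python sketch that filters out findings already closed by a patching cycle or older than a cutoff, then orders the remaining hosts so publicly exposed DMZ systems come first. The hosts, field names, and dates are all hypothetical, invented for illustration; they don't correspond to any particular scanner's output format.

```python
from datetime import date, timedelta

# Hypothetical scanner findings -- the hosts, fields, and dates are
# invented for illustration, not any real tool's output format.
findings = [
    {"host": "dmz-web01", "exposure": "public",
     "discovered": date(2024, 5, 1), "patched": False},
    {"host": "intranet-db", "exposure": "internal",
     "discovered": date(2024, 1, 10), "patched": True},
    {"host": "dmz-mail", "exposure": "public",
     "discovered": date(2024, 6, 15), "patched": False},
]

def still_relevant(finding, today, max_age=timedelta(days=365)):
    """Temporal filter: drop findings closed by a patching cycle,
    and anything older than the cutoff we still care about."""
    return not finding["patched"] and today - finding["discovered"] <= max_age

def priority(finding):
    """Public-facing hosts first, then newest findings first."""
    exposure_rank = 0 if finding["exposure"] == "public" else 1
    return (exposure_rank, -finding["discovered"].toordinal())

today = date(2024, 7, 1)
worklist = sorted(
    (f for f in findings if still_relevant(f, today)), key=priority
)
print([f["host"] for f in worklist])  # ['dmz-mail', 'dmz-web01']
```

Note how the patched internal system drops out entirely: spending scan effort there buys little, exactly the trade-off discussed above.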
As we talk about this, organizations serious about security create what are called hardened hosts: hosts that are stripped down, that don't have a lot of services running, that minimize what is known as their attack surface quotient, or ASQ, the attack profile that they present. We can measure that; we can assess it using tools. If we don't need certain services, if we don't need to run certain applications, we should get rid of them. The less stuff on a system that an attacker can hook into, the fewer vulnerabilities we will tend to find. So we want to harden the hosts that will be facing attackers, and then we want to scan them to ensure they are as locked down and as solidly built as possible.

Host security considerations, like we were talking about: disable any unnecessary and unneeded services. Insecure services, like Telnet for instance, should not be allowed to run; even if they're not being used, they should not even be on the system in the first place. Ensure least-privilege file system permissions. The concept of least privilege is one we've often talked about: if all you need is read permission to a directory, don't give me anything else, give me read. If you give me more than read, somebody may take advantage of my access and use it to gain an advantage in the system. So lock down those accesses, lock down those permissions; make sure file system permissions are, as we say, as tight as possible. Establish and enforce a patching policy. Examine applications for known weaknesses and patch them when and where appropriate. Test firewalls, test routers, do security monitoring testing. Remember, firewalls and routers face the outside world. They're going to be hammered on all day by all sorts of bad actors, all sorts of threat sources.
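As one small illustration of the least-privilege point about file system permissions, this Python sketch flags files whose mode grants write access beyond the owner. It's a toy check under simplifying assumptions, not a hardening tool; a real audit would also consider ownership, ACLs, setuid bits, and more.

```python
import os
import stat
import tempfile

def overly_permissive(path):
    """Flag files that grant write access beyond the owner --
    a common least-privilege violation worth tightening."""
    mode = os.stat(path).st_mode
    return bool(mode & (stat.S_IWGRP | stat.S_IWOTH))

# Demo against a throwaway file; in practice you would walk a
# directory tree (os.walk) and report every flagged path.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    path = tmp.name
os.chmod(path, 0o644)            # owner rw, everyone else read-only
print(overly_permissive(path))   # False: read for others is allowed here
os.chmod(path, 0o666)            # group and world can now write
print(overly_permissive(path))   # True: wider than least privilege
os.unlink(path)
```

The same "is this wider than it needs to be" question applies to running services: if a port doesn't need to be open, it shouldn't be.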
There may be known vulnerabilities that we can patch for, and there may be unknown vulnerabilities that we're not aware of but somebody else may be. We've got to make sure we're checking, monitoring, and paying attention at all times to these kinds of things.

Let's take a look at some review questions to wrap up this part of our conversation around vulnerability assessment and host security before we move on and talk about some other things. There are three questions up on the screen. Give yourself a moment to review those, and then we'll take a look at the answers.

Question number one: what problems may arise when using vulnerability analysis tools? Well, there are lots of problems that could arise. We talked about a few in particular: false positives and the effort of weeding them out, crash exposure, and temporal information, the timeliness of the information.

Question two: what are some benefits of vulnerability testing? Obviously, we're identifying system vulnerabilities, which allows us to prioritize, dealing with them in a force-ranked way that takes the most critical issues first, medium-criticality issues second, and low-criticality issues third. It also allows us to compare our security posture over time and understand whether we're progressively getting better or worse at dealing with those issues.

And question three: what are the two broad categories of vulnerability testing software? We talked about general vulnerability scanners and application-specific scanners. Make sure you're thinking about both categories, and again, make sure you have the right answers.
If you need a moment or two to gather your thoughts and make notes, please feel free to pause here. We're going to continue our discussion in this area and talk a bit about traffic types, the kinds of traffic we may want to think about.

We get a lot of traffic in our networks, all sorts of stuff, right? Data patterns with multiple packets or single packets, obfuscated data, fragmented data, flooding data, protocol-embedded attacks, all sorts of different stuff. So we have to think about how we're going to separate, segment, differentiate, analyze, and classify that data so we know which data is good, which data is bad, how we're going to deal with it, and how we're going to know.

We probably need to use tools that will help us examine data flows. We have to have some knowledge of data and data types, and some knowledge of the kinds of systems we can use to defend against certain attacks and the kinds of traffic they will produce. Firewalls will allow or deny certain traffic based on pattern matching. Routers will allow or deny certain traffic based on pattern matching, but from a different perspective than a firewall. IDSes and IPSes will likewise allow or deny certain traffic based on pattern matching. So again, we have to understand how to monitor traffic and how to identify traffic, to then be able to go in and deal with all of this different traffic that we are seeing and producing. Network monitoring may actually be very helpful.
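To illustrate the pattern-matching idea shared by firewalls, routers, and IDS/IPS devices, here is a toy first-match rule engine in Python. The rule fields (protocol and port) are deliberate simplifications I'm assuming for the sketch; real devices also match on addresses, flags, connection state, and payload signatures.

```python
# A toy first-match rule table in the spirit of firewall pattern
# matching. The fields (proto, port) are simplifying assumptions;
# real devices match far richer criteria.
RULES = [
    {"proto": "tcp", "port": 23,  "action": "deny"},   # Telnet: insecure
    {"proto": "tcp", "port": 443, "action": "allow"},  # HTTPS
    {"proto": "udp", "port": 53,  "action": "allow"},  # DNS
]
DEFAULT_ACTION = "deny"  # default-deny posture for unmatched traffic

def classify(packet):
    """Return the action of the first rule this packet matches."""
    for rule in RULES:
        if packet["proto"] == rule["proto"] and packet["port"] == rule["port"]:
            return rule["action"]
    return DEFAULT_ACTION

print(classify({"proto": "tcp", "port": 443}))   # allow
print(classify({"proto": "tcp", "port": 23}))    # deny
print(classify({"proto": "tcp", "port": 8080}))  # deny (default rule)
```

The default-deny fallthrough is the key design choice: anything we haven't explicitly classified as good is treated as bad, which matches the hardening mindset discussed earlier.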