Implementing DLP: Deploy

Up until this point we’ve focused on all the preparatory work before you finally turn on the switch and start using your DLP tool in production. While it seems like a lot, in practice (assuming you know your priorities) you can usually be up and running with basic monitoring in a few days. With the pieces in place, now it’s time to configure and deploy policies to start your real monitoring and enforcement.

Earlier we defined the differences between the Quick Wins and Full Deployment processes. The easy way to think about it is that Quick Wins is more about information gathering and refining priorities and policies, while Full Deployment is all about enforcement. With the Full Deployment option you respond to and investigate every incident and alert. With Quick Wins you focus more on the big picture. To review:

  • The Quick Wins process is best for initial deployments. Your focus is on rapid deployment and information gathering vs. enforcement to help guide your full deployment. We previously detailed this process in a white paper and will only briefly review it in this series.
  • The Full Deployment process is what you’ll use for the long haul. It’s a methodical series of steps for full enforcement policies. Since the goal is enforcement (even if enforcement is alert/response and not automated blocking/filtering) we spend more time tuning policies to produce desired results.

We generally recommend you start with the Quick Wins process since it gives you a lot more information before jumping into a full deployment, and in some cases might realign your priorities based on what you find.

No matter which approach you take it helps to follow the DLP Cycle. These are the four high-level phases of any DLP project:

  1. Define: Define the data or information you want to discover, monitor, and protect. Definition starts with a statement like “protect credit card numbers”, but then needs to be converted into a granular definition capable of being loaded into a DLP tool (a minimal sketch follows this list).
  2. Discover: Find the information in storage or on your network. Content discovery is determining where the defined data resides, while network discovery determines where it’s currently being moved around on the network, and endpoint discovery is like content discovery but on employee computers. Depending on your project priorities you may want to start with a surveillance project to figure out where things are and how they are being used. This phase may involve working with business units and users to change habits before you go into full enforcement mode.
  3. Monitor: Ongoing monitoring with policy violations generating incidents for investigation. In Discover you focus on what should be allowed and on setting a baseline; in Monitor you start capturing incidents that deviate from that baseline.
  4. Protect: Instead of identifying and manually handling incidents you start implementing real-time automated enforcement, such as blocking network connections, automatically encrypting or quarantining emails, blocking files from moving to USB, or removing files from unapproved servers.
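
To make the Define step concrete, here is a minimal sketch (in Python, purely for illustration, not any DLP product’s actual policy language) of how “protect credit card numbers” becomes a granular rule: a digit pattern plus a Luhn checksum to weed out random number strings. Real policies layer on issuer prefixes, proximity keywords, and match thresholds.

```python
import re

# Candidate card numbers: 13-16 digits, optionally separated by spaces or dashes.
PAN_PATTERN = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")

def luhn_valid(number: str) -> bool:
    """Luhn checksum; filters out most random digit strings."""
    digits = [int(d) for d in number if d.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    """Return candidate card numbers that also pass the Luhn check."""
    return [m.group() for m in PAN_PATTERN.finditer(text) if luhn_valid(m.group())]

# The well-known Visa test number matches; the random digit string does not.
print(find_card_numbers("Paid with 4111 1111 1111 1111, order ref 1234 5678 9012 3456"))
```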

Define Reports

Before you jump into your deployment we suggest defining your initial report set. You’ll need these to show progress, demonstrate value, and communicate with other stakeholders.

Here are a few starter ideas for reports:

  • Compliance reports are a no-brainer and are often included in the products. For example, showing you scanned all endpoints or servers for unencrypted credit card data could save significant time and resources by reducing the scope of a PCI assessment.
  • Since our policies are content based, reports showing violation types by policy help figure out what data is most at risk or most in use (depending on how you have your policies set). These are very useful to show management to align your other data security controls and education efforts.
  • Incidents by business unit are another great tool, even if focused on a single policy, in helping identify hot spots.
  • Trend reports are extremely valuable in showing the value of the tool and how well it helps with risk reduction. Most organizations we talk with who generate these reports see big reductions over time, especially when they notify employees of policy violations.

Never underestimate the political value of a good report.
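
If the built-in reporting falls short, most consoles can export incidents. A minimal sketch of building the trend report described above from a hypothetical CSV export (assumed columns: timestamp, policy, business_unit, severity) might look like this.

```python
import csv
from collections import Counter

def monthly_trend(path: str) -> Counter:
    """Count incidents per (month, policy) to show risk reduction over time."""
    trend = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            month = row["timestamp"][:7]      # ISO dates, so "2012-03-15" -> "2012-03"
            trend[(month, row["policy"])] += 1
    return trend

for (month, policy), count in sorted(monthly_trend("incidents.csv").items()):
    print(f"{month}  {policy:<25} {count}")
```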

Quick Wins Process

We previously covered Quick Wins deployments in depth in a dedicated whitepaper, but here is the core of the process:

The differences between a long-term DLP deployment and our “Quick Wins” approach are goals and scope. With a Full Deployment we focus on comprehensive monitoring and protection of very specific data types. We know what we want to protect (at a granular level) and how we want to protect it, and we can focus on comprehensive policies with low false positives and a robust workflow. Every policy violation is reviewed to determine if it’s an incident that requires a response.

In the Quick Wins approach we are concerned less about incident management, and more about gaining a rapid understanding of how information is used within our organization. There are two flavors to this approach – one where we focus on a narrow data type, typically as an early step in a full enforcement process or to support a compliance need, and the other where we cast a wide net to help us understand general data usage to prioritize our efforts. Long-term deployments and Quick Wins are not mutually exclusive – each targets a different goal and both can run concurrently or sequentially, depending on your resources.

Remember: even though we aren’t talking about a full enforcement process, it is absolutely essential that your incident management workflow be ready to go when you encounter violations that demand immediate action!

Choose Your Flavor

The first step is to decide which of two general approaches to take:

  • Single Type: In some organizations the primary driver behind the DLP deployment is protection of a single data type, often due to compliance requirements. This approach focuses only on that data type.
  • Information Usage: This approach casts a wide net to help characterize how the organization uses information, and identify patterns of both legitimate use and abuse. This information is often very useful for prioritizing and informing additional data security efforts.

Choose Your Deployment Architecture

Earlier we had you define your priorities and choose your deployment architecture, which, at this point, you should have implemented. For the Quick Wins process you select one of the main channels (network, storage, or endpoint) as opposed to trying to start with all of them. (This is also true in a Full Deployment).

Network deployments typically provide the most immediate information with the lowest effort, but depending on what tools you have available and your organization’s priorities, it may make sense to start with endpoints or storage.

Define Your Policies

The last step before hitting the “on” switch is to configure your policies to match your deployment flavor.

In a single type deployment, either choose an existing category in your tool that matches the data type, or quickly build your own policy. In our experience, most DLP tools include pre-built categories for the data types that commonly drive a DLP project. Don’t worry about tuning the policy – right now we just want to toss it out there and get as many results as possible. Yes, this is the exact opposite of our recommendations for a traditional, focused DLP deployment.

In an information usage deployment, turn on all the policies or enable promiscuous monitoring mode. Most DLP tools only record activity when there are policy violations, which is why you must enable the policies. A few tools can monitor general activity without relying on a policy trigger (either full content or metadata only). In both cases our goal is to collect as much information as possible to identify usage patterns and potential issues.
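
To make the “don’t tune yet” advice concrete, here is a hedged sketch contrasting a deliberately loose Quick Wins policy with the kind of tuned policy you would build later for enforcement. The structure and field names are illustrative only; every DLP product defines policies in its own console and format.

```python
# Illustrative structures only; real products define policies in their own consoles.
quick_wins_policy = {
    "name": "PCI - Quick Wins",
    "content_rule": "credit_card_number",   # built-in category, untuned
    "threshold": 1,                          # alert on a single match anywhere
    "channels": ["all"],                     # cast the widest possible net
    "action": "monitor",                     # record only, no enforcement
}

tuned_enforcement_policy = {
    "name": "PCI - Enforcement",
    "content_rule": "credit_card_number",
    "threshold": 10,                         # multiple matches required to cut noise
    "validation": "luhn",                    # checksum to drop random digit strings
    "channels": ["email", "web"],            # scoped to the channels that matter
    "exceptions": ["payments-team"],         # groups with a legitimate business need
    "action": "alert_and_quarantine",        # enforcement, with human review
}
```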

Monitor

Now it’s time to turn on your tool and start collecting results.

Don’t be shocked – in both deployment types you will see a lot more information than in a focused deployment, including more potential false positives. Remember, you aren’t concerned with managing every single incident; you want a broad understanding of what’s happening on your network, on endpoints, and in storage.

Analyze

Now we get to the most important part of the process – turning all that data into useful information.

Once we collect enough data, it’s time to start the analysis process. Our goal is to identify broad patterns and spot any major issues. Here are some examples of what to look for (a simple analysis sketch follows the list):

  • A business unit sending out sensitive data unprotected as part of a regularly scheduled job.
  • Which data types broadly trigger the most violations.
  • The volume of usage of certain content or files, which may help identify valuable assets that don’t cleanly match a pre-defined policy.
  • Particular users or business units with higher numbers of violations or unusual usage patterns.
  • False positive patterns, for tuning long-term policies later.
All DLP tools provide some level of reporting and analysis, but ideally your tool will allow you to set flexible criteria to support the analysis.
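
If your tool’s analysis options are limited, a first pass over an exported incident list goes a long way. Here is a minimal sketch, again assuming a hypothetical CSV export (columns: timestamp, policy, business_unit, destination), that surfaces the hot spots and recurring destinations mentioned above.

```python
import csv
from collections import Counter, defaultdict

def hot_spots(path: str, top_n: int = 5) -> None:
    """Print the business units generating the most hits and where the data goes."""
    by_unit = Counter()
    destinations = defaultdict(Counter)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            by_unit[row["business_unit"]] += 1
            destinations[row["business_unit"]][row["destination"]] += 1
    for unit, count in by_unit.most_common(top_n):
        dest, dest_count = destinations[unit].most_common(1)[0]
        # A single destination dominating a unit's hits often points to a
        # regularly scheduled job rather than individual mistakes.
        print(f"{unit}: {count} hits, most often to {dest} ({dest_count})")

hot_spots("quick_wins_incidents.csv")
```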

Full Deployment Process

Even if you start with the Quick Wins process (and we recommend you do) you will always want to move into a Full Deployment. This is the process you will use anytime you add policies or your environment changes (e.g. you add endpoint monitoring to an existing network deployment).

Now, before we get too deep, keep in mind that we are breaking things out very granularly to fit the widest range of organizations. Many of you won’t need to go into this much depth due to your size or the nature of your policies and priorities. Don’t get hung up on our multi-step process; many of you won’t need to move so cautiously and can run through multiple steps in a single day.

The key to success is to think incrementally; all too often we encounter organizations that throw out a new, default policy and then try to start handling all the incidents immediately. While DLP generally doesn’t have a high rate of true false positives, that doesn’t mean you won’t get a lot of hits on a policy before tuning. For example, if you set a credit card policy incorrectly you will alert more on employees buying sock monkeys on Amazon with their personal cards than on major leaks.

In the Full Deployment process you pick a single type of data or information to protect, create a single policy, and slowly roll it out and tune it until you get full coverage. As with everything DLP this might move pretty quickly, or it could take many months to work out the kinks if you have a complex policy or are trying to protect data that’s hard to discern from allowed usage.

At a high level this completely follows the DLP Cycle, but we will go into greater depth.

Define the policy

This maps to your initial priorities. You want to start with a single kind of information to protect that you can define well at a technical level. Some examples include:

  • Credit card numbers
  • Account numbers from a customer database
  • Engineering plans from a particular server/directory
  • Healthcare data
  • Corporate financials

Picking a single type to start with helps reduce management overhead and allows you to tune the policy.

The content analysis policy itself needs to be as specific as possible while reflecting the real-world usage of the data to be protected. For example, if you don’t have a central source where engineering plans are stored it will be hard to protect them properly. You might be able to rely on a keyword, but that tends to result in too many false positives. For customer account numbers you might need to pull directly from a database if, for example, there’s no pattern other than a 7 or 10 digit number (which, if you live in the US, is more than a little problematic, since those lengths also match phone numbers).
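
As a rough illustration of why database matching helps here, the exact-match idea boils down to hashing the real account numbers exported from the customer database and only alerting when a number found in content matches one of those hashes. Commercial tools implement this far more efficiently (usually under the name database fingerprinting or exact data matching); the file name and column below are assumptions.

```python
import csv
import hashlib
import re

ACCOUNT_PATTERN = re.compile(r"\b\d{7}(?:\d{3})?\b")   # 7 or 10 digits

def load_fingerprints(path: str) -> set[str]:
    """Hash real account numbers from a (hypothetical) database export."""
    with open(path, newline="") as f:
        return {
            hashlib.sha256(row["account_number"].encode()).hexdigest()
            for row in csv.DictReader(f)
        }

def match_accounts(text: str, fingerprints: set[str]) -> list[str]:
    """Only flag numbers that correspond to actual customer accounts."""
    return [
        candidate
        for candidate in ACCOUNT_PATTERN.findall(text)
        if hashlib.sha256(candidate.encode()).hexdigest() in fingerprints
    ]
```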

We covered content analysis techniques in our Understanding and Selecting a Data Loss Prevention Solution paper and suggest you review that while determining which content analysis techniques to use. It includes a worksheet to help guide you through the selection.

In most cases your vendor will provide some prebuilt policies and categories that can jump start your own policy development. It’s totally acceptable to start with one of those and evaluate the results.

Deploy to a subset

The next step is to deploy the policy in monitoring mode on a limited subset of your overall coverage goal. This is to keep the number of alerts down and give you time to adjust the policy. For example:

  • In a network deployment, limit yourself to monitoring a smaller range of IP addresses or subnets. Another option is to start with a specific channel, like email, before moving on to web or general network monitoring. If your organization is big enough, you’ll use a combination of both at the start.
  • For endpoints limit yourself to both a subset of systems and a subset of endpoint options. Don’t try and monitor USB usage, cut and paste, and local storage all at once – pick one to start.
  • For storage scanning pick either a single system, or even a subdirectory of that system depending on the overall storage volume involved.

The key is to start small so you don’t get overloaded during the tuning process. It’s a lot easier to grow a smaller deployment than deal with the fallout of a poorly-tuned policy overwhelming you. We stick to monitoring mode so we don’t accidentally break things.
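
For the network option, scoping usually just means telling the tool which subnets or egress points to watch. If you ever need to pre-filter yourself (say, deciding which tapped traffic to forward to an analysis tier), the logic is as simple as this sketch; the subnets are placeholders.

```python
import ipaddress

# Placeholder ranges: substitute the subnets of the business unit or egress
# point you are starting with.
MONITORED_SUBNETS = [
    ipaddress.ip_network("10.10.0.0/16"),
    ipaddress.ip_network("192.168.50.0/24"),
]

def in_scope(source_ip: str) -> bool:
    """True if traffic from this source falls inside the pilot monitoring scope."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in subnet for subnet in MONITORED_SUBNETS)

print(in_scope("10.10.4.27"))   # True: inside the pilot scope
print(in_scope("172.16.9.1"))   # False: ignored until you expand coverage
```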

Analyze and tune

You should start seeing results pretty much the moment you turn the tool on. Hopefully you followed our advice and have your incident response process ready to go since even when you aren’t trying, odds are you will find things that require escalation.

During analysis and tuning you iteratively look at the results and adjust the policy. If you see too many false positives, or real positives that are allowed in that context, you adjust the policy. An example might be refining policies to apply differently to different user groups (executives, accounting, HR, legal, engineering, etc.). Or you might need to toss out your approach to use a different option – such as switching to database fingerprinting/matching from a pattern-based rule due to the data being too close to similar data in regular use.

Another option is available if your tool supports full-network capture, or if you are using DLP in combination with a network forensics tool. In those cases you can collect a bunch of traffic and test policy changes against it immediately, instead of tuning a policy, running it for a few days or weeks to see results, and then tuning again.

You also need to test the policy for false negatives by generating traffic (such as email messages; it doesn’t need to be fancy).
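
A hedged sketch of that kind of false-negative test: seed a few messages with values the policy must catch (well-known test card numbers work for a PCI policy), send them through the monitored mail path, and confirm each one shows up as an incident. The mail server and addresses below are placeholders.

```python
import smtplib
from email.message import EmailMessage

# Bodies seeded with standard test card numbers the policy must catch.
TEST_BODIES = [
    "Test 1: card on file 4111 1111 1111 1111",
    "Test 2: cardholder export attached, sample PAN 5500 0000 0000 0004",
]

def send_test_traffic(smtp_host: str = "mail.example.internal") -> None:
    """Send seeded messages through the monitored path, then check for incidents."""
    with smtplib.SMTP(smtp_host) as smtp:
        for i, body in enumerate(TEST_BODIES, start=1):
            msg = EmailMessage()
            msg["From"] = "dlp-test@example.com"
            msg["To"] = "dlp-test-external@example.org"
            msg["Subject"] = f"DLP false-negative test {i}"
            msg.set_content(body)
            smtp.send_message(msg)

send_test_traffic()
```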

The goal is to align results with your expectations and objectives during the limited deployment.

Manage incidents and expand scope

Once the policy is tuned you can switch into full incident-handling mode. This doesn’t yet include preventative controls like blocking, but it does mean fully investigating and handling incidents. At this point you should start generating user-visible alerts and working with business units and individual employees to change habits. Some organizations falsely believe it’s better not to tell employees they are being monitored, or when they violate policies, on the theory that they won’t try to circumvent security and malicious activity will be easier to catch. This is backwards: in every DLP deployment we are aware of, the vast majority of risk comes from employee mistakes or poorly managed business processes, not malicious activity. The evidence from the DLP deployments we’ve seen clearly shows that educating users when they make mistakes dramatically reduces the overall number of incidents.

Because user education so effectively reduces the overall number of incidents, we suggest taking the time to work within the limited initial deployment scope; this, in turn, lowers the overhead as you expand.

As you see the results you want, slowly expand scope by adding additional network, storage, or endpoint coverage, such as additional network egress points/IP ranges, additional storage servers, or more endpoints.

As you expand bit by bit you continue to enforce and tune the policy and handle incidents. This allows you to adapt the policies to meet the needs of different business units and avoid being overwhelmed in situations where there are a lot of violations. (At this point it’s more an issue of real violations than false positives).

If you are a smaller organization or don’t experience too many violations with a policy you can mostly skip this step, but even if it’s only for a day we suggest starting small.

Protect iteratively

At this point you will be dealing with a smaller number of incidents, if any. If you want to implement automatic enforcement, like network filtering or USB blocking, now is a good time.

Some organizations prefer to wait a year or more before moving into enforcement, and there’s nothing wrong with this. What you don’t want to do is implement preventative controls on too wide a scale. As with monitoring, we suggest you start iteratively, which gives you time to deal with the support calls and to make sure blocking works as you expect.

Add component/channel coverage

At this point you should have a reliable policy deployed at wide scale, (potentially) blocking policy violations. The next step is to expand by adding additional component coverage (such as adding endpoint to a network deployment) or expanding channels within a component (additional network channels like email or web gateway integration, or additional endpoint functionality). This, again, gives you time to tune the policies to best fit the conditions.

As we said earlier, many organizations will be able to blast through some basic policies pretty quickly without being overloaded by the results. But it’s still a good idea to keep this more incremental process in mind in case you need it. If you started with Quick Wins you’ll have a good idea of the effort needed to tune your policies before you ever start.

—Rich
