Extending Data Loss Prevention through ACM

Two things that always interest me are new uses for ACM, and how to relate ACM to other tools in the market (usually ECM and BPM). As I was thinking along those lines (on a very long flight to California) it occurred to me that I think about process management in graph theory terms. For me, process management is about managing data and the flow of data between the actors in a process. That led me to the following (and I know this is not 100% complete, but bear with me for a second):

BPM (Business Process Management) – The focus is on structured data (forms) and structured flow.

ECM (Enterprise Content Management) – The focus is on unstructured data (documents) and structured flow.

DCM (Dynamic Case Management) – The focus is on structured+unstructured data (forms and documents) and semi-structured flow.

ACM – The focus is on structured+unstructured data (forms and documents) and unstructured flow.

I think one of the reasons we continually get into arguments about what is x vs. y from the above list is that the difference is mainly one of focus. That isn’t a trivial difference (especially in real-world processes), but it does make it hard to truly differentiate the tools and approaches on the basis of features alone. In the field, that difference in focus makes one tool much better than another for certain processes – but on paper it seems like just nuances and features.

For example – take a BPMS, add some provisions for unstructured data and unstructured flows and voila – you have an ACM tool. In the real world it really doesn’t work that way – those “nuances” add up into a big difference, and choosing the right tool can make all the difference in the end result for the customer. That’s why a process automation project needs to take into account the specifics of the process management tool being used, and not be treated as a purely theoretical exercise.

In my world (ACM) the actors in a process are people who use and modify forms and documents, and the flow of data between actors is not predefined, but evolves as a result of the interactions between those actors. So actors are nodes in a graph, and the flows of data between actors are the edges.
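As a minimal sketch of that mental model (all names here are hypothetical, not any product's API), a case can be represented as a graph that grows one hand-off at a time: actors are nodes, and every time one actor sends a form or document to another, an edge is added.

```python
# Hypothetical sketch: an ACM case as a graph that evolves with each interaction.
# Actors are nodes; every hand-off of a form or document adds an edge.
from collections import defaultdict

class CaseGraph:
    def __init__(self):
        # actor -> set of actors they have sent data to
        self.edges = defaultdict(set)

    def record_handoff(self, sender, receiver, artifact):
        """The flow is not predefined; it emerges one hand-off at a time."""
        self.edges[sender].add(receiver)

    def actors(self):
        """Everyone who has touched the case so far."""
        touched = set(self.edges)
        for receivers in self.edges.values():
            touched |= receivers
        return touched

case = CaseGraph()
case.record_handoff("alice", "bob", "claim-form.pdf")
case.record_handoff("bob", "carol", "supporting-docs.zip")
print(sorted(case.actors()))  # ['alice', 'bob', 'carol']
```

The point of the sketch is that nothing about the graph is declared up front – it is just a record of the interactions that actually happened.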

So how does this help us understand where ACM can make a difference in an existing business process? It provides a simple framework for thinking about a process in ACM terms:

  1. Is it a process mostly between people?

(a) Is it a data-driven process that would benefit from taking into account process context?

(b) Is it a process that would benefit from paying closer attention to the “human” data being generated and used?

This sometimes leads me in strange directions. The latest is ACM for Data Loss Prevention (DLP). For those of you who have never heard the term DLP, here is the Wikipedia definition – “Data Loss Prevention (DLP) is a computer security term referring to systems that identify, monitor, and protect data in use (e.g., endpoint actions), data in motion (e.g., network actions), and data at rest (e.g., data storage). The systems are designed to detect and prevent the unauthorized use and transmission of confidential information.” Data loss can be either accidental or malicious.

Now if you look at DLP tools – they are very data oriented (inspecting the data itself to determine how to protect it). That is the only approach that works if your focus is stopping people from maliciously misusing data in any context. But having talked to some friends in the area – it turns out that a much more common scenario is accidental data loss: sending sensitive data to the wrong people.

The reason that people are sending data around in the first place is because that data is used within the context of a business process. By looking only at the data – DLP tools are ignoring the process context, making (1a) relevant. Which led me to wonder – What if DLP tools were process aware?

In that case you could extend DLP tools and solve some of the problems that plague DLP today – especially false positives. By knowing the process context – it is much easier to understand whether the data is being used “legally” and whether only the right people have access to it. For example – an ACM system could make it harder to send data outside of a “white list” of people that are relevant to the process. It could also lock down the data to ensure that only people who are part of those processes can see it, even “at rest”. So if you send sensitive data to the people that are naturally part of the process – no problem. Outside that list – well then the system should make sure that you really mean it and that you understand the potential consequences. There are many other benefits to making DLP process aware – but this post is getting too long as it is.
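A process-aware check along those lines could be sketched like this (the function name and policy values are my assumptions, not a description of any vendor's product): the white list is simply the set of people already participating in the case, and a recipient outside it triggers a confirmation step rather than a silent block or a false positive.

```python
# Hypothetical sketch of a process-aware DLP decision.
# The "white list" is not configured by hand – it is derived from the
# case itself: the people who are naturally part of the process.

def check_send(case_participants, recipient):
    """Return 'allow' for process participants, 'confirm' for anyone else.

    Sending to someone who is part of the process is no problem;
    outside that list, the sender must confirm they really mean it
    and understand the potential consequences.
    """
    if recipient in case_participants:
        return "allow"
    return "confirm"

participants = {"alice", "bob", "carol"}
print(check_send(participants, "bob"))      # allow
print(check_send(participants, "mallory"))  # confirm
```

Note the design choice: the out-of-list case returns "confirm" rather than "deny", matching the accidental-loss scenario – the goal is to catch honest mistakes, not to assume malice.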

This also brings me back to my first point – could you use BPM instead of ACM for this? Theoretically I guess you could, but in the real world it just wouldn’t work – BPM is too heavyweight for the kind of everyday (email-based) processes that benefit from DLP. For ACM to work in this context, the ACM process must be as easy for the participants as email, and people must see a benefit in using ACM instead of email.
