Thursday 27 March 2014

Privacy Checklists and a Study in Ontario

Not that particular study in Ontario, but another one, concerning the Surgical Safety Checklist and its supposed "ineffectiveness". The study was rebutted by many, including Atul Gawande, whose response ended with this quote:

Perhaps, however, this study will prompt greater attention to a fundamentally important question for health care reform broadly: how you implement an even simple change in systems that reduces errors and mortality – like a checklist. For there is one thing we know for sure: if you don’t use it, it doesn’t work.

Relating this back to my experience of deploying and using checklists for privacy: THEY ARE NOT A TOOL FOR IMPROVING PRIVACY DIRECTLY but a TOOL for organising your workflows and your actions, and for ensuring that all members of a team are actively cross-checking each other; and even then this is just a small part of the overall effect. Let us for a moment rewrite Gawande's statement a little:

Perhaps, however, this study will prompt greater attention to a fundamentally important question for privacy engineering: how you implement an even simple change in systems that reduces errors and non-compliance – like a checklist. For there is one thing we know for sure: if you don’t use it, it doesn’t work.

In the paper [1] (emphasis mine):
The checklist approach to privacy protection has been debated.[24] Checklists have become important safety elements in airplane and medical procedures and are quite common in security auditing. However, their utility for privacy remains questionable. It might be possible to design privacy checklists for frequent and standardized use cases, but the breadth of potential projects makes a standard checklist for everything an unlikely tool. 

[24]  Ian Oliver, “Safety Systems – Defining Moments” Available at http://ijosblog.blogspot.com/2013/07/systems-safety-defining-moments.html

Indeed, the paper's two paragraphs on checklists and privacy impact assessments fail to properly understand the former, and compare it against the latter, which is a different kind of tool altogether. In fact, a PIA should be done anyway, and this would be ensured, or at least prompted, by including it as a point on a privacy checklist.

Indeed, no mention was made, nor has ever been made, of any "standardised checklist". In fact, there is a capitalised statement at the bottom of the checklist:
THIS CHECKLIST IS NOT INTENDED TO BE COMPREHENSIVE. ADDITIONS AND MODIFICATIONS TO FIT LOCAL PRACTICE ARE ENCOURAGED.
More can be read about this in an article published back in February.

The point here, in both the surgical and the privacy engineering cases, is that the checklist needs to be accompanied by procedural and "societal" change for it to be successful. One only needs to read about Pronovost's work with a simple checklist, and the changes surrounding it, to understand how checklists work in practice; that, and the other experiences presented in Gawande's excellent book on the subject, The Checklist Manifesto. Our own experiences can be read about in the presentation Flying Planes, Surgery and Privacy.

* * *

References:


Sunday 23 March 2014

Not quite as naive about privacy as before

A while back I wrote an article on how we were being naive about privacy: we're all talking about the subject, but no-one seems to be actually asking the question of what it is.

In order to answer this we've* taken an ontological approach, decomposing concepts into things such as information type, security class, jurisdiction, purpose, usage, provenance etc. – all those concepts which make sense to the engineers who have to build information systems.

*Ora Lassila (he of RDF fame) has had a pretty big (huge!) hand in all of this too. Hey! We even got demonstration and prototype implementations working!
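To make that concrete, here is a minimal sketch, in Python, of what such a decomposition might look like. All the names and categories here are illustrative inventions, not the actual ontology:

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative categories only -- the real ontologies are far richer.
class InformationType(Enum):
    LOCATION = "location"
    IDENTIFIER = "identifier"
    TIME = "time"

class SecurityClass(Enum):
    PUBLIC = "public"
    CONFIDENTIAL = "confidential"
    SECRET = "secret"

@dataclass
class DataAnnotation:
    """One piece of data, decomposed along the ontological axes."""
    info_type: InformationType
    security: SecurityClass
    jurisdiction: str   # e.g. "EU", "US"
    purpose: str        # why the data was collected
    provenance: str     # where the data came from

# A user's home address, annotated:
home_address = DataAnnotation(
    info_type=InformationType.LOCATION,
    security=SecurityClass.CONFIDENTIAL,
    jurisdiction="EU",
    purpose="delivery",
    provenance="user-supplied",
)
```

The point is not the particular fields but that each axis is a first-class, machine-checkable concept rather than prose in a policy document.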

No work like this is ever done in isolation: ontological approaches aren't new, and certainly security, privacy, risk management etc. have been tackled in one way or another; Solove and Schneier, just to name two big names, along with a host of other researchers too.

Now this is where I have a lot of hope: there is quite a bit of work in this area, that is, formalising concepts of privacy, and in particular risk and risk avoidance, in this ontological manner. There's even work on matching ontologies together. We are starting to see the real, fundamental structure of privacy and its conceptual basis.

What this means in the long term (and even the short!) is that a common terminological and semantic framework, spanning from lawyers to programmers, is coming into place.

We're missing some parts of course: how do all these ontologies fit together?  Can we unify the notions of consent used by lawyers with the [boolean] data types used by programmers?

"Your privacy is important to us"


bool optedIn = false;  // sensible default
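As a sketch of the gap being pointed at here: a lawyer's notion of consent carries purpose, time and revocability that a bare boolean throws away. Something like the following (all names invented for illustration) begins to close that gap:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Consent:
    """Consent as a lawyer might see it, not as a single bit."""
    granted: bool
    purpose: str                       # what, exactly, was consented to
    given_at: Optional[datetime]       # when consent was given
    revoked_at: Optional[datetime] = None

    def is_valid(self) -> bool:
        # Consent only counts if it was granted and has not been revoked.
        return self.granted and self.revoked_at is None

opted_in = Consent(granted=True,
                   purpose="marketing email",
                   given_at=datetime(2014, 3, 1))
```

Note that revocation flips `is_valid()` without rewriting history: the original grant and its timestamp survive, which a boolean cannot express.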

Actually, we do in part: Ora and I developed quite a nice unification framework to link the ontologies together, link them with the idea of information, link that with the notions of database tables, CSV structures, classes etc., and even link it with how systems such as Hadoop process data.
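A minimal illustrative sketch of that kind of linking, with invented column names and annotations rather than the actual framework:

```python
# Hypothetical: tie concrete CSV columns back to ontology concepts,
# so that questions like "which columns hold location data?" become queryable.
CSV_COLUMN_ANNOTATIONS = {
    # column name -> (information type, security class)
    "user_id":      ("identifier", "confidential"),
    "home_address": ("location",   "confidential"),
    "signup_time":  ("time",       "public"),
}

def columns_of_type(info_type):
    """Return the CSV columns annotated with a given information type."""
    return [col for col, (t, _) in CSV_COLUMN_ANNOTATIONS.items()
            if t == info_type]
```

The same trick extends to class fields or database schemas: the data structure carries its ontological annotations with it, rather than leaving them in a policy document.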

So this gets me to a few places:
  1. There is work being done on this: various groups are developing ontologies to express relevant concepts about information and aspects of information
  2. Some groups are unifying those and drawing out subtle semantic differences
  3. Some groups are applying these to more abstract areas, such as the notions of consent and notice, and how these may be made more meaningful to machines, and I hope humans too

References


Cena, Dokoohaki, Matskin (2011) Forging Trust and Privacy with User Modeling Frameworks: An Ontological Analysis. STICS 2011 (Draft)

Anya Kim, Jim Luo and Myong Kang (2005) Security Ontology for Annotating Resources. Naval Research Laboratory, NRL Memorandum Report, 51 pp.

Kost, Freytag, Kargl, Kung. Privacy Verification using Ontologies

Golnaz Elahi, Eric Yu, Nicola Zannone (2009) A Modeling Ontology for Integrating Vulnerabilities into Security Requirements Conceptual Foundations

Tuesday 18 March 2014

A Particle Physics Approach to Privacy

A while ago I read on the LtU programming language blog a discussion about the future of programming language research: is there going to be any? Haven't we metaphorically illuminated all there is to see about programming languages? Surely the future discoveries and developments in this area are going to be very small and very deep?

One reply caught my eye:
"When you've searched around the lamp, trying looking underneath it."
And so I feel the same about privacy... we're spending huge amounts of time looking at its effects and very little looking at what it is.

What are the fundamental structures of privacy, and how do these manifest themselves upon the World at large?

Should we take a highly deconstructive approach to privacy? Break it apart into its constituent blocks, its fundamental atomic and sub-atomic structure?

In the same way as the LHC breaks apart subatomic particles to reveal the inner structure of the Universe, should we take a similar approach to privacy?

What are the subatomic particles, the quarks, the bosons, the fermions of privacy? Does it have a metaphorical Higgs-boson and related field which gives privacy its "mass"?

Monday 17 March 2014

Structuring Privacy Requirements pt 1

One of the toughest problems I'm having to solve, not just for my book on privacy engineering but in my daily job as well, is formulating a set of privacy requirements for the software engineers and the R&D teams.

Actually, it isn't that the privacy requirements themselves are hard: we have plenty at the policy level, and extrapolating these down into the more functional and security-related requirements at the implementation level is manageable (OK, it is difficult, but there are harder things in this area).

Placing all these in a structure that ties together the various classifications and aspects of information, data flow modelling, requirements and risk management has been interesting and fiendishly difficult, to say the least. Getting this structure into a state that supports all of these, and getting the semantics of the kinds of things it is structuring right, is critical to understanding how all of this privacy engineering works.

We assume that we understand the classification systems: for example, the almost traditional Secret-Confidential-Public style security classifications and the Location, Time etc. classifications of information type, as well as the other aspects such as purpose, usage, provenance, identity etc. Each of these has its own set of classification elements, hierarchy and other ontological structure. For example, for information type:

Information Type Hierarchy
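As an illustrative sketch (the hierarchy here is invented, not the full ontology), an information-type hierarchy with an is-a relation can be modelled as a simple parent map:

```python
# Hypothetical fragment of an information-type hierarchy.
# Each entry maps a type to its parent; the root has no entry.
PARENT = {
    "home address":   "location",
    "gps coordinate": "location",
    "location":       "information",
    "time":           "information",
}

def is_a(child: str, ancestor: str) -> bool:
    """Walk up the hierarchy, checking whether child is-a ancestor."""
    node = child
    while node is not None:
        if node == ancestor:
            return True
        node = PARENT.get(node)
    return False
```

So `is_a("home address", "location")` holds, which is exactly the relation the requirement lookup later in this post relies upon.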
We also assume we understand data flow modelling with its processes, flows, sources, sinks and logical partitionings. We can also already see the link between the elements here (as shown below) and the classification systems above.
Example Data Flow with Annotations from Previously Described Ontologies/Classification Systems
Now the structure of our requirements needs to take into consideration the various elements from the classification systems, the aspect of the requirement we want to describe (more on this below) and the level of requirement detail relevant to the stage in the software process. This gives us the structure below:



So if we wish to find the requirements for, say, User's Home Address Policy for Local Storage, then we take the point corresponding to those coordinates. If there happens to be nothing there, then we can use the classification systems' hierarchies to look for the requirement corresponding to a parent; a "user's home address" is-a "Location":
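A minimal sketch of this coordinate lookup with fallback to the parent classification; the hierarchy and requirement text here are invented for illustration:

```python
from typing import Optional

# Hypothetical fragment of the information-type hierarchy.
PARENT = {"home address": "location", "location": "information"}

# Requirements indexed by (information type, aspect, lifecycle stage).
REQUIREMENTS = {
    ("location", "policy", "local storage"):
        "Encrypt at rest; retain for at most 30 days.",
}

def lookup(info_type: str, aspect: str, stage: str) -> Optional[str]:
    """Find the requirement at the given coordinates, climbing the
    is-a hierarchy when nothing exists at the exact point."""
    node = info_type
    while node is not None:
        req = REQUIREMENTS.get((node, aspect, stage))
        if req is not None:
            return req
        node = PARENT.get(node)  # fall back to the parent type
    return None
```

A query for ("home address", "policy", "local storage") finds nothing at the exact point, climbs to "location", and returns that requirement; a query with no requirement anywhere on the path returns nothing.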

So if we take the example data flow from earlier, then for each of the flows, storages and processes we can construct a set of requirements simply by reading off the corresponding requirements from the structure above.

This leads to an interesting situation where it is possible to construct a set of requirements which are over-constraining. That is, simply, that we cannot build a system that can support everything; for example, one data point might trigger a secret classification, with key management, encrypted content etc.

We then need to weaken the requirements such that a system can be constructed according to some economic(!) model. As we weaken, or retrench, our requirements we are introducing risk into the system.

Aside: Retrenchment - here's a good reference: Banach, Poppleton. Controlling Control Systems: An Application of Evolving Retrenchment. ZB 2002: Formal Specification and Development in Z and B, Lecture Notes in Computer Science, Volume 2272, 2002, pp. 42-61.

This gives us our link to risk management and deciding whether each "weakness" we introduce is worth the risk. And as risk can be quantified and we can perform further tasks such as failure mode and effects analysis then we obtain a rigorous method to predict failures and make informed decisions about these. I'll describe more on this in later articles.