Identity Process Utopia

A light-hearted look at some idiosyncrasies of naming processes in Identity & Access

A few of you may know that process definition in general, and Identity & Access processes in particular, have been a special object of study for me for several years now. As a small indication of how seriously I take this self-imposed duty, the formation of the standardisation initiative GenericIAM.org may serve.

But before I impertinently demand that you confront the insights and results of more than a decade of intellectual effort, a more light-weight menu awaits you here.

It all started, when I stumbled across a process designated the “Rejoiner Process”.

With utter dismay I had already witnessed the surging popularity of the Joiner, Mover and Leaver processes during recent years.

Processes should be named according to their essential property. This is trivial at first and easily accepted. Essential business processes transform an initial state into a target state, a source material into a desired result; they maintain (create, change or eliminate) an object – in computer science, an information object.

Consequently, they should carry exactly that essence in their name: "Achieve target state", "Create result" or "Maintain object" – i.e. a verb that characterises the transformation and a noun that designates the object to be transformed or which emerges from the transformation. This is how canonical process designations are created.

Designations like Joiner, Mover & Leaver hint more at the actors who perform the activities than at the activities themselves. Moreover, the complete process chain which encompasses the ‘onboarding’ of an individual into a corporation pertains to typical traditional HR processes. While the mere notion of ‘Human Resources’ is so yesterday, and an approach addressing a corporation’s total workforce would be more appropriate, we have to accept that Identity Management usually starts only after the old-fashioned HR processes have had their lengthy run. And Access processes only start thereafter. So a closer look reveals a more complex picture anyway.

Nevertheless, despite all fruitless complaining, the Joiner, Mover & Leaver found their way into process reality. I fear, we henceforth have to live with them. Realising this undeniable truth, I finally found my peace of mind.

But then the Rejoiner suddenly popped up in a low-profile and low-quality conceptual corporate paper. The rationale behind that game-changing invention was that new hires in one of the group’s companies who were once employed by (or had some other relationship with) another or the same of the group’s members should be given special treatment to reflect this continuity – as if this pre-employment / pre-relationship check shouldn’t be part of the regular onboarding anyway.

Driven by a strong inventive spirit and unlimited creativity, the team soon gave birth to a zoo of more exciting process variants. Yes, they come in all shapes, flavours and colours.

Among the artefacts which sprang from the mad scientists’ minds were: the Multiple Joiner, the First Mover, the Final Leaver, the Releaver (or Reliever?). Obviously, the Believer would be welcome too. And what about the Rejoicer? In times of mass layoffs the Remover process would certainly make ultimate sense.

In the end – and after sustainably sobering up – we came to the common conclusion that it would be best to hit the undo button, roll back to the state we were in prior to the creative explosion, and – after passing through the age of the great process extinction – purge the Identity & Access process Utopia, the Rejoiner included.

To diffidently voice my very personal concern: the genie, however, may have left the bottle irreversibly.

Take this short story as a hint to stay tuned, as more about the results of the longstanding GenericIAM effort will soon be presented here. Unfortunately, however, it will represent heavier stuff than this tiny contribution.

Meanwhile all of you may enjoy the coming year end festivities.


GDPR & Digital Transformation - What do they have in common?

At first sight nothing – you would say – except perhaps that both of them, the General Data Protection Regulation and the change imperative of digital transformation, are currently hot topics in the public professional debate. And I would even agree – at least at first sight.

When digging a bit deeper into the very nature of both concepts, the necessary preconditions, the resulting effects, we might feel compelled to paint a different picture. There might even be a common layer of overarching or underlying principles both concepts need to follow in order to be successfully implemented.

Digital Transformation

Much has been written about this fashionable term – not least by myself. So I will spare you elaborating at length and in depth about this topic. Let’s just focus on some characteristics to be further discussed in the course of this article.

Here we define digital transformation as the transformation of a business, aiming at a competitive advantage in its market by profoundly making use of the latest digital technology.

By latest technology we mean technology which has matured sufficiently to be seriously considered, at acceptable risk, as a foundation for the newly transformed business.

As in the past, this approach rarely results in totally re-inventing the business; more often than not it boils down to the automation of processes previously done manually.

Nevertheless, some change has meanwhile occurred – some kind of the often-cited transition from quantity to quality:
  • Artificial intelligence, belittled for many years as a lab-only technology, has grown up,
  • Advanced analytics is now mature enough for in-process decision taking,
  • Connecting ordinary “things” to the internet broadens the range of processes to automate,
  • and some more
… have meanwhile evolved into powerful tools.

By automating most of the operational layer, making most of the management layer obsolete, adding a new breed of change agents instead, and requiring a much more technology-aware strategy process, the entire corporation may nevertheless undergo a fundamental transformation.


The General Data Protection Regulation (GDPR) apparently is quite a different story.

The GDPR intends to strengthen and unify data protection for individuals within the European Union. It also addresses the export of personal data outside the EU. Citizens and residents benefit by getting back control over their personal data. For international business the unification of the regulation within the EU is a welcome side effect as it simplifies the regulatory environment.

The GDPR is driven by some major underlying principles relating to the processing of personal data, as expressed in its Article 5: lawfulness, fairness and transparency, purpose limitation, data minimisation, accuracy, storage limitation, integrity and confidentiality, and accountability.

While this sounds fine, and most of us might intuitively agree with it, for enterprises there is reason to be concerned, as the regulation opens a new compliance frontier. Some of its requirements represent rather new concepts, like 'privacy by design' and 'privacy by default', the right to data portability at the request of the data subject, explicit consent, data minimisation, or the right to be forgotten, just to name a few.

Hence, complying with the regulation will require changes and enhancements deep in the practised processes and implemented data structures. In addition, regular risk assessments, called Data Protection Impact Assessments (DPIA) in the GDPR, will become mandatory once you deal with ‘high risks’, e.g. sensitive personal data. Doubts are justified that both can be achieved within the few months left. Rather, it may need years of maturing, at least when starting from a low level of process maturity – which can safely be assumed in the majority of cases.

The volume of the resulting activities may not be negligible either: a recent Oliver Wyman survey of 1,500 British consumers revealed that as many as half of the respondents said they were already leaning toward reclaiming their information.

Regarding the requirement to report a data breach to the supervisory authority within 72 hours, a recent survey illustrates the point: it found that only 2% of responding companies actually appeared to be compliant, although almost half (48%) of the respondents reported that they were.

In most cases this discrepancy is not due to unwillingness but due to severe deficits in the mere underpinnings. Most often no data encryption is applied by default, be it structure-retaining (pseudonymisation or tokenisation) or not. No company-wide and cross-process identity concept is implemented, no role-based or attribute-based access management, no executable security policies are in place.

From the regulator’s perspective these are all elements of ordinary housekeeping which have to be in place to comply with the GDPR. And they are just as much a necessary precondition for any digital transformation.

GDPR may drive digital transformation. Why so? Let’s take one of the requirements, randomly, as a small but important example: as mentioned above, the GDPR obliges companies to report data violations within 72 hours. If they cannot prove that the data were encrypted and the private keys sufficiently protected, they will face a severe fine. As reliable end-to-end data encryption, whether “at rest” or “in flight”, was traditionally difficult to achieve and rather costly, new solutions need to be put in place: new processes, new software and most probably even new, specialised hardware. This might further drive the move towards cloud solutions, which in the end may turn out to offer higher security than in-house solutions.
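To make the encryption point a little more concrete, here is a minimal sketch of structure-retaining pseudonymisation – a keyed hash replacing the personal identifier with a deterministic token. The key handling and the token format below are purely illustrative assumptions, not a production recipe.

```python
import hmac
import hashlib

# Hypothetical secret; in practice it would live in an HSM or key vault.
# Whoever controls this key can link tokens back to persons.
SECRET_KEY = b"replace-with-key-from-your-key-store"

def pseudonymise(value: str) -> str:
    """Replace a personal identifier with a deterministic token.

    Deterministic, so the same person maps to the same token across data
    sets (structure retaining), yet not reversible without the key.
    """
    digest = hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]  # shortened token, for readability only

# The same input always yields the same token ...
assert pseudonymise("jane.doe@example.com") == pseudonymise("jane.doe@example.com")
# ... while different identities remain distinguishable.
assert pseudonymise("jane.doe@example.com") != pseudonymise("john.doe@example.com")
```

Deleting the key then effectively anonymises all tokens at once – one reason why key protection is the regulator’s concern.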

Thus we have here an example of the GDPR paving the way for further digital transformation, as vulnerabilities due to insufficient IT security measures are the major concern holding back the transformation towards truly digital corporations.

Data portability and the right to be forgotten are also examples where the data architecture has to follow a holistic identity concept. It has to include all kinds of stakeholders – customers, vendors and all parts of the workforce, not just employees – hereby inflating the data volume by several orders of magnitude.

Additionally, the relationship to planned, on-going and past business activities and to legal obligations must be reflected here, to be able to determine the purpose for which the data are actually held and to effortlessly decide if they can be safely deleted.

The necessary defragmentation of the underlying data architecture, and the explicit expression of relationships which to date are often only implicitly stated in unrelated documents, can also be welcomed as an enabler for further automation.


With only a few months to go, the GDPR seems far more urgent to take seriously than any digital transformation. This impression is strongly supported by the looming penalties of up to 4% of annual global turnover or €20 million (whichever is greater).

Lagging behind the competition, however, is not much less of a threat. Market dynamics have increased considerably. While in the recent past it took about 20 years for a company to reach sufficient size for considerable market visibility, today this can well happen after one year. Meanwhile, the average corporate life span has shrunk to about 12 years. These numbers give the impression that missing the train in the realm of digital transformation might come with penalties of a similar order of magnitude.

There is definitely no time to lose. The good news, however, is: doing both is not exactly double the work. There are several commonalities and reasons to assume substantial synergies when addressing both of them.

And by the way: Both have to be done anyway.

Further readings and references …


Just the compliance issues …

During a long professional career a lot is bound to happen along the intricate windings of our mortal life, and hence some folklore piles up to draw from when telling our grandchildren about our adventures.

Once upon a time, for example, there was a large corporation from the financial sector I was working for for a while. One day the board of directors was confronted with some nasty audit findings which would prevent them from being compliant with a considerable range of regulations.

As most of the findings were more or less related to IT security, the order to get things right immediately and at no cost quickly trickled down the long command chain (well, as I told you, it was a large and prestigious corporation) until it finally pounded the desk of the chief information security officer, also known as the CISO in corporate jargon.

For those who are not familiar with the tribal rites of large corporations, I would like to reveal a common habit. If new and challenging problems arise on the corporate level which can neither be ignored nor annihilated through a one-time bold & swift strike by top management, but need long and tedious work on several levels of the hierarchy, we tend to assign the task to a new responsibility. By this mechanism special corporate functions like the Quality Manager (although “quality is everybody’s job”), the Risk Manager (although conscious risk taking is the prime entrepreneurial task of top management) and finally the IT Security Manager were born.

No one – and this even for good reason – dared to bestow the CISO with sufficient power to really mitigate the root causes of the reported security holes: he might bring the business operations to a grinding halt – secured, however. Moreover, some responsibilities are loaded onto the shoulders of this poor creature which should not necessarily be included in his role.

Identity & Access is quite a good example. Management of identities certainly is not an IT function, and even less an IT security topic. Rather, it must be considered a necessary general organisational infrastructure for any organisation interacting with human (and even non-human) actors. However, a functioning Identity Management is a necessary prerequisite for achieving a sufficient security level (as it is for fine-grained cost control, process automation, digitalisation …). Even the Access part is Janus-headed, with one face towards providing access (e.g. for automation) and one face towards preventing access (hence the IT security part). Like nearly all business tasks, however, its implementation eventually needs heavy IT involvement.

The fatal Todo

So, after setting the scene for the drama to unfold, let’s continue with the story. The CISO was told to deal with the issues and come back with a detailed plan to be presented at the next board meeting (with, however, a one-week notice period for each intermediate management level). As the CISO was an honest man, his proposal was quickly refused by the board as totally unrealistic and amended with the advice to his line manager to straighten out that strange security guy and educate him on how best to serve the company’s interests.

Ok folks, hear the news. The board is not amused. The situation is serious. It’s not the right time to present your wish list of what you always wanted to address. They are not stupid up there. They have known all these tricks for long. So, no gold plating, don’t try to boil the ocean – just the security issues and nothing else.

Poor CISO – he desperately tried to explain that IT security is not an add-on, but that it is deeply rooted in the IT processes, and even more in the whole organisational framework, which he was not in the least mandated to address.

Listen to his plea:

Look, compliance is just the result of a long-term effort. That’s the bad news. The good news is that after investing all this effort it comes as a by-product. I am tempted to borrow Philip Crosby’s quote: “quality is free”.

The compliance pyramid

1. Identity & Access depends on Business

Representing an organisational infrastructure layer, Identity & Access processes depend on a sufficient maturity of the underlying business processes which they are meant to support. A major part of these business processes is represented by the workforce management processes (aka Human Resources).

Roles are the business

To get a grip on the inherent complexity of a large organisation, it has become commonly accepted practice to express a person’s tasks in roles (if they can be considered sufficiently static) or business rules (in a more dynamic environment). Jointly with some other dimensions or constraints – like location, authorisation amount, contract type, organisational unit and the like – they determine the necessary, and hence maximum, access to corporate resources. These business roles, as well as the business rules, have – the naming gives a strong indication already – to be defined in business terms by business-literate staff. Only after that is done should they be underpinned by low-level permissions (aka entitlements, privileges, access rights, …). Ideally the job description is already linked to a set of business roles / rules.
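The idea that roles plus constraints determine the necessary – and hence maximum – access can be sketched in a few lines of Python. All role names, permissions and the contractor constraint below are invented for illustration only.

```python
# Business roles map to low-level permissions; constraints such as
# contract type narrow the result. Illustrative names throughout.
ROLE_PERMISSIONS = {
    "accounts-payable-clerk": {"invoice.read", "invoice.approve"},
    "hr-assistant": {"personnel-file.read"},
}

def effective_permissions(roles, constraints):
    """Union of the role permissions, filtered by simple constraints."""
    granted = set()
    for role in roles:
        granted |= ROLE_PERMISSIONS.get(role, set())
    # Example constraint: contractors never receive approval permissions.
    if constraints.get("contract_type") == "contractor":
        granted = {p for p in granted if not p.endswith(".approve")}
    return granted

# A contractor in the clerk role may read invoices but not approve them.
assert effective_permissions(
    ["accounts-payable-clerk"], {"contract_type": "contractor"}
) == {"invoice.read"}
```

Note that the mapping itself is business content; the code merely evaluates what business-literate staff have defined.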

Workforce Management Policies

Moreover, policies should be in place to provide the Identity & Access domain with the necessary guidance. Policies with some influence on Identity & Access may pertain to the scope of the workforce (e.g. are contractors, trainees, apprentices, interns and temporary staff included?), the automation of time & attendance tracking, the automation of employee / manager self-service functions, deputy procedures in case of planned / unplanned absence, the formalisation of a flexible, remote and mobile working strategy, retention times for personal information / digital identities, and finally the standardisation of processes & policies itself.

When taking digitisation seriously, the Identity & Access processes should be automated in all standard cases – relying, however, on timely and meaningful triggers fired by workforce management.

So there is a lot of solid ground which can and should be provided by the business side to support a rock-solid Identity & Access layer. If, however, workforce management is not rigorously enough defined and only loosely coupled to Identity & Access, no one should be surprised if the latter remains shaky and unreliable.

On the other side – the assets

Let’s remember: the Access part of Identity & Access is about the relationship of two major objects: the digital identity (most often representing a person) and the asset to be protected. So, not surprisingly, not just the person has to be well known and properly embedded in workforce processes – the assets have to be too. First, there must be a registry or repository of all assets. The assets documented therein must be sufficiently characterised, a responsible owner has to be assigned, and – most importantly – each asset must be assigned to a sensitivity class after undergoing a thorough sensitivity analysis.
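Such a registry entry can be sketched in a few lines; the field names and the sensitivity scale are illustrative assumptions, not a standard.

```python
from dataclasses import dataclass

# Hypothetical four-step sensitivity scale; real classifications vary.
SENSITIVITY_CLASSES = ("public", "internal", "confidential", "strictly-confidential")

@dataclass
class Asset:
    """Minimal registry entry: characterisation, owner, sensitivity class."""
    name: str
    description: str
    owner: str
    sensitivity: str

    def __post_init__(self):
        # An asset without a valid sensitivity class must not enter the registry.
        if self.sensitivity not in SENSITIVITY_CLASSES:
            raise ValueError(f"unknown sensitivity class: {self.sensitivity}")

payroll = Asset("payroll-db", "salary records", "head-of-hr", "strictly-confidential")
assert payroll.owner == "head-of-hr"
```

The point of the validation is that no asset gets registered without having passed the sensitivity analysis first.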

The enterprise model

As I mentioned above, roles are the business, and it is necessary to express a person’s tasks in roles. But where do roles come from? They are not invented on the fly during the recruiting process. Nor do they emerge from thin air. They are to be populated with business functions from an enterprise model – and this model should better be at hand. Where such models exist, they are most often functional enterprise models, hierarchically structured and named canonically and via aliases. Canonical naming is required for methodological rigour and to easily spot commonalities; aliases comfort the business by mirroring the folkloric designations from their business as usual. Functional models are often well suited, as regulations requiring e.g. Segregation of Duties (SoD) are overwhelmingly expressed in terms of functions to be assigned to different actors. Even more helpful would be the use of an object-oriented enterprise model.

2. The Management layer feeds Governance

Governance is defined as giving direction to and exerting oversight over the underlying Management processes in the focus area.


We have talked about giving direction already. Good governance here has to craft and publish a domain strategy, closely in line with the overall enterprise strategy. Its results should be fed into policies for business as usual, or into actions for the defined change activities. The role of corporate policies can’t be overemphasised here.


It is obvious that knowing what’s going on in the domain of your responsibility is a key requisite of all governance efforts – and it is just as obviously difficult to achieve. The mechanisms of how to exert oversight are already laid out in a bit more detail elsewhere.

For the sake of clarity, and to provide a good fit to the next layer – the compliance layer – it is advisable to compile a list of control objectives and to implement each of them in one or more management controls. Even in the absence of compliance requirements, good guidance abounds in several standardisation or management models like COBIT 5, the ISO 27000 series and more.

It should not be concealed here that gaining the necessary overview may require massive use of technology, like advanced analytics or even big data.

3. Governance feeds Compliance

As implied by the illustration with its pyramidal appearance, this chapter should be even shorter than the one before, which in turn was shorter than the first one. The major amount of work should indeed have been done in the lower levels, so that the compliance layer becomes a cheap one.

This doesn’t mean that no more work is involved. As I mentioned here before, Thomson Reuters once counted no fewer than ~100 minor or major regulatory changes per day to be taken into account, most of them in the financial sector, many of them IT-security related. This sheer number, which is even expected to rise, justifies assigning the responsibility of watching out for new / changed regulations, assessing their relevance, operationalising them as controls, matching them with existing ones and, if necessary, initiating change activities to get them implemented, to a dedicated function.

So once we have done all our homework – each part of which is an element of good conduct anyway – compliance does not need to be artfully crafted. Rather, it just bubbles up from the layers below – nearly for free.

But you can’t fool a strong leader

Hours later, after patiently waiting and pretending to listen carefully to the CISO’s lengthy, while still not exhaustive, elaboration, the line manager replied:

Dear colleague, I really don’t understand what you are saying. First you presented a huge bill to us, containing lots of items we all would have to pay for dearly; besides that, you threatened us with a huge effort and a year-long duration. Now you tell me it comes for free. Isn’t that a bit strange – to say the least? I don’t want to repeat myself. Hopefully you listened carefully to the message from above. I strongly recommend: just the compliance issues – and just do it!

Having said that, the line manager left in a good mood. Didn’t he just demonstrate strong leadership, after all?

You would like to know how the story ended – whether it had a happy end? Well, I think you may not really want to know. You most probably can already quite easily sense it ...

So this could become one of those oft-repeated tales from the past. However, as I am not blessed with grandchildren yet, the public fell victim to my insatiable talkativeness.


Challenges ahead for a digital transformation agenda

In last week's contribution (From ‘oversight’ to the algorithm-driven company) I contemplated the necessary underpinnings of a digitally transformed corporation and gave some justification why it is so hard to answer the apparently simple question at the core of any oversight: who has (had) access to which resources? And I mentioned how oversight is executed according to the state of the art. In this third and final post I will discuss current trends and – with the help of professional analysts – try to look ahead.

What does Pythia foresee for us?

Despite Mark Twain’s (among others) warning that “It is difficult to make predictions, especially about the future”, in 2015 the modern-day Pythia, the Gartner Group, predicted that “By 2020, 70 percent of enterprises will use ABAC as the dominant mechanism to protect critical assets, up from less than 5 percent today.”

Why do they come up with such a radical opinion, and what are the driving forces behind it? Well, unfortunately, after all these years in business I am still not capable of reading the mind of a Gartner analyst. However, there are some evident trends which even I stumbled upon. And so might have those augurs.

ABAC stands for Attribute Based Access Control, as opposed to RBAC (Role Based Access Control). It is a policy-based approach, where machine-executable policies (executable business rules) act on certain attributes (well, parameters, as a programmer would say). If they are invoked at runtime, a highly dynamic and responsive authorisation infrastructure can be created this way.
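As a toy illustration of what such machine-executable policies might look like, here is a sketch in Python. The attribute names and the two policies are invented for this example; a real ABAC engine would be far richer.

```python
from datetime import time

# A policy here is simply a predicate over subject, resource and
# environment attributes. Both policies below are illustrative.
def within_business_hours(attrs):
    return time(8, 0) <= attrs["env"]["time"] <= time(18, 0)

def same_department(attrs):
    return attrs["subject"]["department"] == attrs["resource"]["department"]

POLICIES = [within_business_hours, same_department]

def authorise(attrs) -> bool:
    """Grant access only if every applicable policy evaluates to True."""
    return all(policy(attrs) for policy in POLICIES)

request = {
    "subject": {"department": "finance"},
    "resource": {"department": "finance"},
    "env": {"time": time(10, 30)},
}
assert authorise(request)  # same department, inside business hours
```

Because the environment attributes are evaluated at request time, the same subject can be granted access at 10:30 and denied at 20:00 – the dynamic behaviour that static role assignments cannot express.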

And this is exactly the point. Agility is required not only on the project level, but on the corporate level as well. It goes without saying that the board of directors alone being agile will not be sufficient. Its rulings need to take immediate effect, without trickling down the organisation over the following years.

Dealing with compliance for example has become more complex and costly than ever before.

Thomson Reuters once counted no fewer than ~100 minor or major regulatory changes per day to be taken into account, most of them in the financial sector. You certainly need to be fast in order not to be breathlessly chasing after their implementation, only to find them already outdated – but instead to get into the driver’s seat again and take advantage of market opportunities.

A policy-based approach means a centralisation of management with executable policies as its key element. As the totality of interacting policies on all levels can be considered the central governance processing machine, direction & oversight will be executed by running these governance programs.

Also decision making can be centralised and implemented in a redundancy-free way – decluttered.


Combining RBAC and ABAC

I took the two four-letter acronyms RBAC and ABAC as antagonists for the (old) static and the (new) dynamic world of “real-time enterprises”. Well, static is not all bad, and the world is not black and white. Static structures will remain, and they will do so to the benefit of corporations.

My statement here is: roles are just the result of rules applied to the access space – however, most often without those rules being documented. Implement those rules directly, and RBAC will appear to you as a special case of static ABAC. This striking similarity has been recognised by the “inventors” of ABAC too. NIST proposes three different ways to take advantage of both worlds by extending the model.
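The claim that RBAC is a special case of static ABAC can be sketched in a few lines: treat the role as just another subject attribute and the classic role check as a static rule over it. Purely illustrative:

```python
# RBAC recast as ABAC: the role is just one more attribute, and the
# traditional role check becomes a static rule over that attribute.
def rbac_as_abac_rule(required_role):
    """Return an ABAC rule that reproduces a plain RBAC check."""
    return lambda attrs: required_role in attrs["subject"]["roles"]

can_approve = rbac_as_abac_rule("approver")
assert can_approve({"subject": {"roles": {"approver", "clerk"}}})
assert not can_approve({"subject": {"roles": {"clerk"}}})
```

The rule never consults time, location or any other attribute – which is exactly what makes it "static" ABAC, i.e. RBAC.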

First of all, roles were already capable of being parametrised. This easily overlooked little, yet powerful, feature was initially designed to cope with non-functional attributes and dynamic decisions based on attributes.

Some attributes, however, are independent of roles. A combined model was therefore sought. NIST came up with a threefold proposal. Note: all three variants can even be combined and used within one single access model.

Dynamic roles

Dynamic attributes like time or date are used to determine the subject's role, hereby retaining a conventional role structure but changing role sets dynamically. For further reading I refer to R. Fernandez, Enterprise Dynamic Access Control Version 2 Overview.
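The dynamic-role idea can be sketched as role sets derived from a time attribute at request time, while the role structure itself stays conventional. The shift boundaries and role names below are invented:

```python
from datetime import time

def active_roles(static_roles, now: time):
    """Derive the subject's role set from attributes at request time.

    The role structure is conventional; only membership is dynamic.
    """
    roles = set(static_roles)
    # Illustrative rule: the 'night-operator' role exists only outside
    # office hours (20:00 - 06:00).
    if now >= time(20, 0) or now < time(6, 0):
        roles.add("night-operator")
    return roles

assert "night-operator" in active_roles({"operator"}, time(23, 0))
assert "night-operator" not in active_roles({"operator"}, time(10, 0))
```

Downstream, permission assignment can stay pure RBAC – only the role-activation step has become attribute driven.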


Attribute-centric

In this second variant a role name is just one of many attributes – without any fine structure. The role is no longer a collection of permissions as in conventional RBAC.

The main drawback is the rapid loss of RBAC's administrative simplicity as more attributes are added (IEEE Computer, vol. 43, no. 6 (June 2010), pp. 79–81). In this approach you may have problems determining the risk exposure of an employee's position.

This 2nd scenario could serve as a good approach for a rapid start, generating early results of automatic entitlement assignment - without deep knowledge of the job function.


Role-centric

In this third variant attributes are added to constrain RBAC. Constraints can only reduce the permissions available to the user, never expand them. Some of ABAC's flexibility may get lost, because access is still granted via a (constrained) role. On the other hand, the system retains the RBAC capability to statically determine the maximum set of user-obtainable permissions.

The RBAC model of 1992 was explicitly designed to apply additional constraints to roles. This approach is the one envisioned as the natural RBAC approach by KuppingerCole.
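The role-centric variant can be sketched as follows: the role defines the statically determinable maximum, and attribute rules only ever subtract from it. Role, permissions and the branch constraint are illustrative:

```python
# Role-centric combination: the role defines the maximum permission set;
# attribute constraints may only remove permissions, never add any.
ROLE_MAX = {"teller": {"account.read", "cash.withdraw"}}

def constrained_permissions(role, attrs):
    perms = set(ROLE_MAX.get(role, set()))
    # Illustrative constraint: withdrawals only from the home branch.
    if attrs.get("branch") != attrs.get("home_branch"):
        perms.discard("cash.withdraw")
    return perms

# The static maximum stays determinable without any attributes:
assert ROLE_MAX["teller"] == {"account.read", "cash.withdraw"}
# At a foreign branch only the reduced set remains:
assert constrained_permissions(
    "teller", {"branch": "B2", "home_branch": "B1"}
) == {"account.read"}
```

This is why the role-centric variant keeps audits simple: the worst case per role is known in advance, whatever the attributes do.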


Governance in a flexible RBAC & ABAC world

A question remains to be answered: How to do recertification if there are no static entitlements? We remember that re-certification is one of the traditional key elements within the detective controls of the oversight part of Identity & Access Governance.

First of all, don't leave rules unrelated. Provide a traceable deduction from business- or regulatory requirements, e.g.:

  • Regulations (external) → Policies (internal) → Rules (executable, atomic) → Authorisations (operational)
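Such a traceable deduction chain can be sketched as rules that carry their own provenance; every identifier below is hypothetical:

```python
# Each executable rule keeps a reference to the policy it implements,
# and each policy to the regulation it derives from. All IDs invented.
RULES = {
    "rule-042": {
        "expression": "subject.department == resource.department",
        "policy": "POL-DataAccess-7",
    },
}
POLICIES = {"POL-DataAccess-7": {"regulation": "GDPR Art. 5(1)(f)"}}

def provenance(rule_id):
    """Walk the chain backwards: rule -> policy -> regulation."""
    policy_id = RULES[rule_id]["policy"]
    return (POLICIES[policy_id]["regulation"], policy_id, rule_id)

assert provenance("rule-042") == ("GDPR Art. 5(1)(f)", "POL-DataAccess-7", "rule-042")
```

An auditor asking "why does this rule exist?" then gets a mechanical answer instead of an archaeology project.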

Second, attributes must be provided on demand at runtime, during invocation of the authorisation sub-system, by calling an attribute server – e.g. an operational data warehouse – which in turn collects them from various corporate or external sources.

  • However, some limitations may remain: in the end there is no static answer to the “who-has-access-to-what” question in a dynamic environment.

Third, there is no way around enumerating the same rules for reporting & audit which are used for the authorisation act itself. And maybe the auditor's questions have to be altered and specified more explicitly too.

  • Re-certification of dynamic entitlements will feel more like debugging JavaScript code than ticking off long entitlement lists twice a year.


Requirements to I&A technology

So what will the requirements to the supporting technology be? As I mentioned, IAM, IAG & IAI are by no means isolated disciplines. They operate on highly fragmented yet massively overlapping information in arbitrary formats, following different retention policies.

If different tools are used for specific sub-tasks, the underlying data have to be kept in tight sync. Here, single-duty services operating in an SOA fashion are to be preferred over all-encompassing monolithic suites.

In addition, in attestation runs business line representatives reassess past business decisions. Information hence needs to be expressed and presented to them in business terms.

Finally, information security demands a holistic approach. Entitlement information and operational access information have to span all relevant layers of the IT stack (applications, middleware, operating systems, hardware and – of course – physical access).

For forensic investigations, assessments have to be performed back in time. Past entitlement situations hence need to be stored in a normalised structure, reaching sufficiently far back and easy to query in their historic context (aka ‘temporal’ functionality).
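The ‘temporal’ functionality can be sketched as validity-dated grants queried for an arbitrary past date; the schema and data are illustrative:

```python
from datetime import date

# Historised entitlements: each grant carries a validity interval, so
# "who had access to what" can be asked for any past date.
GRANTS = [
    ("alice", "payroll-db", date(2015, 1, 1), date(2016, 6, 30)),
    ("alice", "crm", date(2015, 1, 1), None),  # still valid today
]

def entitlements_at(user, as_of: date):
    """Resources the user was entitled to on the given date."""
    return {res for (u, res, start, end) in GRANTS
            if u == user and start <= as_of and (end is None or as_of <= end)}

assert entitlements_at("alice", date(2016, 1, 1)) == {"payroll-db", "crm"}
assert entitlements_at("alice", date(2017, 1, 1)) == {"crm"}
```

The crucial design decision is that grants are never deleted, only closed with an end date – otherwise the historic answer is lost.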

Deciding on the implementation of appropriate activities, however, needs a solid foundation. Data analytics applied to I&A provides the equivalent of switching on the light before cleaning up a mess. The resulting architecture hence should be layered into at least the following:
  • A Business Layer
  • A Technical Layer and 
  • A Data Layer.

Each layer itself may be expressed in its own Business-, Technical or Data-architecture.

Based on a sufficiently rich set of data, the compilation of the most basic I&A health indicators allows for directing effort into the most promising IAM and / or IAG activities. Hence IAI should be the first of the three disciplines to invest in. Identity & Access Governance needs to be built on top of a powerful data warehouse. Discovery & warehousing hence take centre stage in I&A Governance.
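Two of the most basic health indicators – orphan accounts and dormant accounts – can be sketched over a warehouse extract like this; field names, data and the dormancy threshold are assumptions for illustration:

```python
# Illustrative warehouse extract: one record per account.
accounts = [
    {"id": "a1", "owner": "alice", "days_since_login": 12},
    {"id": "a2", "owner": None, "days_since_login": 400},   # orphan
    {"id": "a3", "owner": "bob", "days_since_login": 200},  # dormant
]

def health_indicators(accounts, dormant_after=180):
    """Basic I&A indicators: accounts without an owning identity,
    and accounts unused for longer than the threshold."""
    orphans = [a["id"] for a in accounts if a["owner"] is None]
    dormant = [a["id"] for a in accounts
               if a["days_since_login"] > dormant_after]
    return {"orphan_accounts": orphans, "dormant_accounts": dormant}

report = health_indicators(accounts)
assert report["orphan_accounts"] == ["a2"]
assert report["dormant_accounts"] == ["a2", "a3"]
```

Even such trivial counts already tell you where the first clean-up effort pays off – before investing in heavier IAM or IAG machinery.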

A caveat should be mentioned however: in addition to I&A knowledge this approach requires sound data analytics skills – usually not found in I&A but rather in marketing or product QA departments.


Outlook – dynamics blend into the static approach

Although powerful technology needs to be invoked in order to keep the complexity at a manageable level, the transformation may not need to be performed in one revolutionary big-bang step. Rather, an agile, evolutionary approach will lead to faster and better results and a higher degree of user (i.e. management level) acceptance.

The changes to be expected are a move away from today's …
  • All privilege determining parameters expressed as static roles
  • Complex roles
  • All access expressed as roles
  • Manual processes
  • Recertification campaigns
  • Necessity for management interaction
  • Easy to re-certify static entitlements
… towards tomorrow's …
  • Roles augmented by rules / attributes
  • Reduced role complexity
  • Roles complemented by rules / attributes
  • Automated access assignment and removal
  • Policy driven entitlement assignment
  • Risk driven on-demand re-certification
  • Real-time analytics

To summarize all the sections in three (although lengthy) sentences:

  1. In essence it thus turns out that after undergoing a digital transformation not only the business operations will be automated.

  2. Management of these operational processes as well as the overarching governance will need to follow that automation trend too.

  3. This is certainly still a long way to go.


From ‘oversight’ to the algorithm driven company

In last week's contribution (Identity & Access Governance in the age of digital transformation) I outlined the general picture: answering the question what governance is after all and what it means when applied to Identity & Access, emphasizing the need to look at identity and access separately, and finally breaking ’direction' down, following the downstream path from strategy to executable rules. Today I will deal with how to make policies & guidelines actionable.

About the necessary underpinnings of a digital transformed corporation

When considering the quality of everyday management decisions in major corporations, the well-known Nobel laureate Daniel Kahneman found himself not exactly awed: “You look at large organizations that are supposed to be optimal, rational. And the amount of folly in the way these places are run… is actually fairly troubling.”

Even more worrying was the insight that this routine poor decision-making did not correlate with experience, training or other factors usually considered to have a positive effect. Rather, the less encouraging conclusion was that this nearly unavoidable “noise” is an effect of human nature itself – the traps and biases we run into during our daily life, whether on the job or in business.

And the cure? Well, Kahneman's advice is “algorithms”. Let algorithms run the company? Yes, that's what he meant. As radical as this advice sounds, it is not an entirely new view. We have had algorithms for a long time: policies & guidelines, procedures & standards, and specifications & work instructions, each representing a layer of abstraction.

However, these business rules are meant to be processed by humans – not by machines. They still need some degree of translation, interpretation and situational judgement. And even worse, they usually don't provide a complete set of guidance even for the majority of the “business as usual” cases.

While it still might take a while until we see governance performed by robots (although in some companies it already might look like that), the operational layer of the traditional corporate pyramid can well be, and quite often already is, run in an automated way. The next target is the management layer, where decisions are taken less frequently to keep operations within the pre-defined channel of policies & guidelines. This will be the battlefield where the success of the digital transformation, which many companies lately decided to head for, will be achieved – or not.
Nevertheless, giving ‘direction’ needs to be expressed in a formal way. And it is still a good start for many corporations to fill the voids in the document pyramid, as shown in fig. 1.

It might be a disturbing idea which Kahneman conveys when he expects systems powered by artificial intelligence (AI) one day to be able to exercise professional judgement even better than humans. For now, however, laying the necessary foundation for a (more) digitized corporation will already be enough of a task for most of us.
So let's do our homework first.

Oversight starts with a simple question

Oversight starts with a simple question: Who has (had) access to which Resources?
Simple question – simple answer? Yes? No! Rather, only few corporations are currently able to provide sufficient evidence of their access situation, as outlined below.


Let's first look at the ‘who’: usually you may think of (fixed-term) employees. And indeed, providing them with the appropriate access to corporate resources causes headache enough and keeps hordes of colleagues, consultants, system integrators and auditors busy. However, the subject behind the ’who' needs to be looked at in a more fine-grained way. It can be other staff, like contractors, or those with elevated rights, like admins. It could be suppliers or customers and even their respective administrators, in case some limited delegated administration is implemented. Increasingly, non-human actors like other systems interact via more or less controlled APIs and need to be included in the access control focus. And finally the IoT age is dawning, bringing new challenges to the table, be it the sheer number, the often external nature or the limited capabilities of those ’things'.

Has (had)

The innocent word ’has' can be broken down into sufficiently complex cases too. It is not just about listing all resources any digital identity has access to – now. Not just listing them by resource, by digital identity, by system, by content authorisation level, or by context exclusion rule. It must also be immediately traceable why this privilege exists, who (person or policy) granted it and when it was last checked. For audit purposes or forensic investigations these answers have to be given for any chosen period of time which legal and corporate retention rules permit.
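One way to make every privilege back-traceable is to store its provenance alongside the grant. The dataclass below is a minimal sketch; all field names are assumptions for illustration:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class PrivilegeAssignment:
    """A privilege together with the provenance the text demands:
    why it exists, who (person or policy) granted it, last check."""
    identity: str
    resource: str
    operation: str
    granted_by: str              # person or policy identifier
    grant_reason: str            # e.g. ticket number or role membership
    granted_on: date
    last_attested: Optional[date] = None  # None = never re-checked

    def is_traceable(self) -> bool:
        """Both grantor and reason must be on record."""
        return bool(self.granted_by and self.grant_reason)

p = PrivilegeAssignment("alice", "HR-DB", "read", "policy:hr-baseline",
                        "member of role HR-Clerk", date(2016, 3, 1))
assert p.is_traceable()
```

With provenance attached to each assignment, the audit questions "who granted it, why, and when was it last checked" become simple record lookups instead of forensic reconstruction.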

Access to

What about the ’access'? Is it uniform? What a stupid question. No, it is not. Next to the trivial CRUD access (Create, Read, Update, and Delete), there are risk-mitigating content-based access limitations in place, restricting access according to pre-defined authorisation levels: “You are allowed to close contracts up to 1 million US dollars.” Next to content, the context might add to the sensitivity, like: “Well, you might close that contract, but not during your vacation, from a nightclub in Shanghai, during (local) night-time, using your private smartphone, which hasn't been updated to the latest security patch level.” The last example could even contain several policy violations. A third restriction is process-based and prevents a digital identity from running a complete business process on his/her own. Also known as Segregation of Duties (SoD), this risk-minimizing step can be performed at administration time (static SoD) or at run time (dynamic SoD). Privileged access finally is quite a different breed and should be handled completely differently, e.g. via fully monitored and recorded session-based access.
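Static SoD, as described above, can be enforced at administration time by checking a grant request against a catalogue of toxic entitlement combinations. A minimal sketch, where the entitlement names and pairs are purely illustrative:

```python
# Toxic combinations of entitlements (static SoD), checked at
# administration time before a new grant is saved.
TOXIC_PAIRS = {
    frozenset({"create-vendor", "approve-payment"}),
    frozenset({"request-access", "approve-access"}),
}

def violates_sod(existing: set, new_entitlement: str) -> bool:
    """Would granting new_entitlement complete a forbidden combination?"""
    return any(pair <= existing | {new_entitlement}
               for pair in TOXIC_PAIRS)

held = {"create-vendor"}
assert violates_sod(held, "approve-payment")   # would be blocked
assert not violates_sod(held, "read-reports")  # harmless grant
```

Dynamic SoD would apply the same pair check at run time against the operations actually performed within one process instance, rather than against the administered entitlements.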

Which Resources

After talking about the subject of the access act, what about the object, the corporate ’resource'? The sensitive corporate resources which need to be protected are not the ERP, CRM or HR systems but the underlying information objects: the employees, customers, contracts, payments, … They should be well known, classified by their sensitivity, assigned to areas of responsibility and expressed in a formal model. As information objects don't interact on their own and are unable to protect themselves, access to them goes through a whole stack of systems, which are usually the object of access control in lieu of them. This IT stack comprises, but is not limited to, applications as the most obvious part, but also middleware, operating systems, networks, telco systems and physical assets, e.g. premises, as well. There are no logical – only practical – reasons why the entry of humans into buildings is handled by independent PACS (physical access control systems) and not by the access control systems which shield digital resources.

Executing oversight for I&A Governance

When it comes to implementation of Governance usually 3 types of controls are considered:
  • Preventive controls
  • Detective controls and
  • Corrective controls
There is no question that it would be optimal to prevent any deviation from our policies and hence fully rely on preventive controls. This however would require that the ’direction' part of I&A Governance has gained sufficient traction to rely on it. It further means that you have to declutter your architecture, raise your administrative processes to a high level of maturity and – as we learned from the introduction above – automate them to a high degree.
As these prerequisites are rarely fulfilled, we have to rely on the second best set of controls, the detective ones, which belong to the oversight part of I&A Governance.

A few standard implementations of detective controls are required by major regulatory bodies and hence found wide acceptance. Detective controls therefore dominate the IAG processes. They should be gradually reduced in favour of preventive controls once the necessary preconditions are given.
The three top-level detective controls in use today are:
  • Reconciliation - Does the implementation reflect the intended state?
    This daily health check is only necessary if the access definition is done at a different location (Policy Administration Point or PAP) than the policy decision (Policy Decision Point or PDP) and the policy execution (Policy Enforcement Point or PEP), and the target systems still maintain their native administration interface. In an architecture where there is (at least logically) just a single policy store, there is no need for this control; in reality however it is quite often needed.
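At its core, reconciliation is a set comparison between the intended access state and the state actually found in the target system. A minimal sketch, with entitlements modelled as (identity, system, permission) tuples:

```python
def reconcile(intended: set, actual: set) -> dict:
    """Daily health check: compare the intended access state (from the
    PAP) with what the target system actually holds natively."""
    return {
        "missing": intended - actual,  # granted centrally, absent in target
        "rogue":   actual - intended,  # present in target, unknown centrally
    }

intended = {("alice", "CRM", "read"), ("bob", "CRM", "admin")}
actual   = {("alice", "CRM", "read"), ("eve", "CRM", "admin")}
delta = reconcile(intended, actual)
# delta["rogue"] surfaces eve's natively created admin account
```

The "rogue" set is the interesting one: it surfaces accounts and permissions created through the target system's native administration interface, bypassing the central policy store.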

  • Attestation - Is our decision still valid?
    Also known as re-certification, this regular (quarterly to biannual) check on validity just reconfirms the decision once taken during the initial grant of the privilege in question. This check becomes necessary (and hence is required by regulatory bodies) as we don't trust our administrative processes to properly, immediately and automatically react to change events in the real world and reflect them in the access structure accordingly.

  • Expiration - To limit risks for domains outside your own control.
    Expiration of once-granted privileges is a widely underestimated and thus underutilised detective control. Its use is evident for granting access in the context of limited endeavours, like task forces or projects, and also in environments outside of direct control, like vendor employees authorised via delegated administration, whose leaving or changing positions would otherwise go undetected. But also for regular employees on BAU (Business As Usual) tasks it would be beneficial and could even replace attestation. Prerequisites however are a proper implementation of time-out dates and powerful workflow support.
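A minimal expiration sweep could look like the sketch below, assuming each grant carries an optional time-out date; the field names are illustrative:

```python
from datetime import date

def expired_grants(grants: list, today: date) -> list:
    """Daily sweep: any grant past its time-out date is returned for
    revocation (or escalation to a renewal workflow)."""
    return [g for g in grants
            if g["expires"] is not None and g["expires"] < today]

grants = [
    {"who": "vendor-1", "what": "VPN", "expires": date(2017, 3, 31)},
    {"who": "alice",    "what": "CRM", "expires": None},  # no time-out set
]
to_revoke = expired_grants(grants, date(2017, 6, 1))
# only the vendor grant has run out
```

In practice the sweep would feed a workflow that either revokes silently or asks the responsible manager to renew – which is exactly what makes expiration a cheap substitute for broad attestation campaigns.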
One important point to mention is that I&A Governance is by no means an IT task. It is rather purely organisational. Therefore all decisions must be well understood and taken by representatives of the business side. As this can only be expected when all access objects like roles, rules, privileges, or information objects are named and described in business terms, it is only a minor step from here to find and implement the appropriate business rules (Kahneman calls them algorithms) to drive the process henceforth.

In these two postings I described the current status of what is expected of corporations to have implemented today. In my third and last part next week I will focus on the challenges lying ahead and what they will mean for us.


Identity & Access Governance in the age of digital transformation

Identity & Access Governance obviously is a difficult task. Many major corporations struggle to meet their various compliance criteria, which could be expected as a natural by-product of good governance. But having hardly completed this job, the next one, innocently called “digital transformation” knocks at the door.

Will governance thus become even harder by then? At least I was asked that question recently. OK, let me quickly give an introduction to the whole topic, go into a little more detail where it appears appropriate to me, and eventually come up with a couple of brave conclusions.

You might have heard of the new esoteric trend “Declutter your life”. A very similar recipe I would prescribe to the majority of today's companies: “Declutter your infrastructure (before going to digitize it)!” So, with every right, you can expect a decluttered contribution too, dear reader. The text nevertheless has become slightly lengthy. I will therefore publish it in three parts – one per week:
  1. Governance and Identity & Access
  2. From ‘oversight’ to the algorithm driven company
  3. Challenges ahead for a digital transformation agenda

What is Governance after all?

The term governance was coined and defined during the last years of the previous century. However, even before that time some form of ’governance' – i.e. oversight, strategic change & direction – was always expected from high-ranking positions like non-executive directors.

In the beginning it was all about corporate governance, as senior management first had to be convinced of the usefulness of handling this new discipline explicitly – before it was applied to sub categories, like e.g. Identity & Access. By now it is accepted that a governance layer should reside on top of each management layer.

In case you want to get an in-depth introduction into Corporate Governance, its Principles, Policies and Practices I recommend the voluminous authoritative guide by the 'father of corporate governance', Bob Tricker, surprisingly named, 'Corporate Governance'.

Identity & Access Governance

So, how did we discover Governance in the I&A world?

Historically we started with the attempt to manage Identity & Access – as it became time to do so. This task alone turned out not to be easy going. While back then I expected the corporate world to do its homework within a timeframe of 3 to 5 years, even today it isn't achieved to a sufficient degree. And more challenges are looming around the corner, not least the digital transformation.

But even when companies succeeded with the introduction of I&A Management, the questions arose: Are we doing the things right? Are we doing the right things? Therefore, and as any management layer needs a governance layer on top of it to stay healthy, I&A Governance appeared out of the dark.

But IAG itself turned out not to be an easy task. The sufficiently powerful equipment for data analytics was missing and, more often than not, is still missing today. Thus I&A Intelligence was born – the application of data analytics to the domain of Identity & Access.


Separating into Identity and into Access

While working hard on making Identity & Access Management (IAM) become reality some fine structure was discovered in what had been reluctantly lumped together into one discipline. The equation hence became: IAM = Identity Management (IM) + Access Management (AM).

Identity management, being a genuine management discipline of its own, is the necessary organisational foundation for many corporate necessities like business automation, fine-grained cost controlling, classical disciplines like human resources management and – of course – access management. So access needs identity as a solid foundation – but not the other way round.

Hence one can imagine 6 distinct disciplines, as for identity all 3 layers (operations, management and governance) have to be performed, and likewise for access.

Direction – we need a strategy

Remembering the definition of governance as ’direction & oversight', let me quickly have a look at the first half of that pair: direction. Certainly you have to follow a strategy while directing a whole business towards its future.

This insight is not entirely new and so the procedure of defining a strategy is pretty well understood by now. Strategy development is merely a high level planning process, leading from the current state to some assumed future state. To do this with sufficient rigour, some prerequisites need to be fulfilled.

First you need a meaningful mission. Just as “earning tons of money” might not be a good enough driving mission for a corporation, “securing the business” would not suffice for Identity & Access. The good news is that nearly every company started out with a clear mission. Over time it may however need some adjustment or even re-invention – enough in each case to keep top management busy for a while.

Second, you should know your current “as is” status, as ”if you don't know where you are, every direction might be the right one”. As trivial as this “know thyself” sounds, given the complexity of today's major institutions you can easily run into the “analysis paralysis” trap.

And thirdly you should have an idea of what lurks around the corner, the future drivers, influences, trends, new technologies, which may have an impact on your business.

Hence “strategy development” can be understood in a narrow and in a broad sense, depending on whether the necessary foundation is laid already, or the entire work still lies ahead.

Strategy development - a cyclic process

Strategies often bear the stigma of being fuzzy, general, overambitious or even outright unrealistic. At least they are blamed to talk about a distant future in abstract terms. This perception is not completely wrong and not entirely right. Strategy development follows a cyclic process. And as its goal is to transform an organization from a defined here-and-now state to a specific future state, during this process it deals with abstract and far-off future issues, just to come back to the here-and-now, the cruel dirty world, with change items to be implemented tomorrow.

Expressing it as guidance

Having perhaps too generously spent 356 words on a well-known corporate discipline like strategy development, I cannot afford the luxury of doing the same for the subsequently necessary change activities. Let's assume however that one fine day the projects will have come to an end, yielding new corporate processes – and altered corporate guidance.
The pyramid of corporate regulatory documents traditionally looks like this:
  1. Strategic level: Policies & Guidelines


    Policies are binding corporate documents, usually issued by top management. They express goals, principles, focal areas and responsibilities. They represent the top level of the documentation pyramid.


    Guidelines like policies are of a high level of abstraction. However they don't come with a binding character.

  2. Managerial level: Procedures & Standards


    Procedures lay out all management controls for a defined problem domain on an essential level. They contain (static) functions & responsibilities and (dynamic) processes.


    Standards state generic minimum requirements, a choice of good practice examples or a bandwidth of tolerable quality parameters.

  3. Operational level: Specification & work instructions


    Specifications:

    The implementation of controls on a physical level is laid down in operational specifications and workflows. Techniques, configurations of solutions and organisational processes are documented on this level.

    Work instructions:

    Based on the defining procedures work instructions specify the volatile details like configuration parameters or physical techniques.

Traditionally these documents on each level are written as some kind of narrative to be read and followed by its target group. This group evidently is meant to be made of humans. Automated processors usually are not in scope – however they increasingly need to be.

To let process definitions seamlessly translate into executable workflows, to automatically check human and automated activities against corporate policies, and to authorise digital identities (human 'users' or automated processors) dynamically and context-aware, with access expressed as rules and attributes (ABAC), much more rigour has to be applied to the definition of regulatory documents.
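To make the idea of machine-processable guidance concrete, here is a sketch of one narrative work instruction ("contracts above 1 million need a second approver") rewritten as an executable rule; the attribute names are assumptions:

```python
def contract_rule(request: dict) -> bool:
    """Executable form of a narrative instruction: contracts above
    1,000,000 require a second approver. Returns True if compliant."""
    if request["amount"] <= 1_000_000:
        return True
    return request.get("second_approver") is not None

# The rule can now be checked automatically, instead of relying on a
# human reading and interpreting a policy document.
assert contract_rule({"amount": 500_000})
assert not contract_rule({"amount": 2_000_000})
assert contract_rule({"amount": 2_000_000, "second_approver": "bob"})
```

The point of the rigour demanded above is exactly this: once the regulatory text is precise enough to survive such a translation, it can be enforced unattended.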

As those documents become the central code, whose rules are executed in an unattended manner they need to be considered as the sensitive core of the entire organisation – and hence protected accordingly against failure, inadvertent or malicious alteration and creeping degradation.

OK, that's enough for now. Next week I will outline how to make policies & guidelines actionable. So please stay tuned.


RBAC first – ABAC next, or what?


Recently, at a customer’s site, I heard an enlightening response to a simple and straightforward question.

The question was: “Why do we implement our access management system according to the old fashioned RBAC model and don’t follow the modern ABAC approach instead?”

The answer came quickly and was as simple: “As we are a large organisation, to go for ABAC would be a step too big for the start. So first let’s implement according to RBAC and later we go on to the ABAC model.”

Figure 1: The (perceived) Evolution of Access control
The answer left me wondering whether there is a logical sequence in which to implement an access model: first RBAC, then followed by ABAC? Indeed, while researching the literature I found some proposed or perceived evolution paths, like the one illustrated in the graphic above – some of them also conveying the promising news that XACML supports all of these models.

However, why couldn’t it be the other way round? Or can’t we have both at the same time? Roles are a good idea, but the dynamic access control which often comes in the wake of attributes might be very beneficial as well. What about a blended model, having the best of both worlds?

What is RBAC?

Role based access control (RBAC), as defined in the US standard ANSI/INCITS 359-2004, Information Technology, controls all access through roles assigned to users. Each role assigns a collection of permissions to users.

RBAC assumes that, in most applications, the permissions needed for an organization’s roles change slowly over time, while users may enter, leave, and change roles rapidly. RBAC meanwhile is a mature and widely used model for controlling access to corporate information.
To cope with its early limitations, inheritance mechanisms have been introduced, allowing roles to be structured hierarchically. So some roles may inherit permissions from others.
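A minimal sketch of such hierarchical RBAC – roles carrying permission sets and inheriting from parent roles – could look like this; the role and permission names are purely illustrative:

```python
# Minimal hierarchical RBAC: each role owns permissions and may
# inherit from parent roles, so permissions accumulate up the chain.
ROLES = {
    "employee":   {"perms": {"read-intranet"},        "parents": []},
    "teller":     {"perms": {"post-transaction"},     "parents": ["employee"]},
    "supervisor": {"perms": {"approve-transaction"},  "parents": ["teller"]},
}

def effective_permissions(role: str) -> set:
    """Own permissions plus everything inherited from parent roles."""
    perms = set(ROLES[role]["perms"])
    for parent in ROLES[role]["parents"]:
        perms |= effective_permissions(parent)
    return perms

assert "read-intranet" in effective_permissions("supervisor")
```

Inheritance is what keeps the role count manageable: the supervisor role only declares what it adds, not the full permission set.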

Intuitively roles are understood as functions - or bundles thereof - to be performed within a corporation. Not surprisingly they offer a natural approach to express segregation-of-duty requirements, where no single individual may be assigned all functions for critical operations such as expenditure of funds.

It is evident, that roles are global to a given context by their very nature. Proper operation of RBAC hence requires that roles fall under a single administrative domain or have a consistent definition across multiple domains. In contrast employing distributed role definitions may become challenging.

But not all permission-determining dimensions are functional. What about location, legal entity, customer group, cost centre and the like? Those ‘attributes’, in conjunction with the job function, span a multidimensional vector space. A point in this space defines the package of permissions.

Figure 2: A simple (static) role meta model
The separation of functions & constraints pays off even without complex rules
  • In the (simplest) role meta model …
  • Roles express the function
  • Parameters are used as constraints
  • They combine to several business roles
  • Business roles are defined in pure business terms
  • Business roles must be mapped to entitlements.
  • Entitlements are operations on objects
  • Business roles may be statically generated.
  • They may be determined dynamically at run time.

Of course, if the only tool you have at hand is a hammer, the whole world may look to you like a nail – or a variety of them. And so the inevitable happened: roles, with their functional nature, were abused for the assignment of any bundle of permissions, quickly leading to the well-known role explosion.

Also the static nature of roles is increasingly felt as a severe limitation - in some cases. Where does agility enter the game? Well the context is to blame – and requires dynamic constraints. To make this cryptic statement a bit clearer, let’s take some examples …
  • Device
    The device in use might limit, what someone is allowed to do.
    Some devices like tablets or smartphones might be considered to be less secure than others.
  • Location
    The location the identity is at, when performing an action. Mobile, remote use might be considered less secure than access from within the headquarters.
  • System health status
    The current status of a system based on security scans, update status, and other “health” information, reflecting the attack surface and risk.
  • Authentication strength
    The strength, reliability, trustworthiness of authentications. You might require a certain level of authentication strength to apply. Otherwise you might want to restrict access in a certain way.
  • Mandatory absence
    Traders may not be allowed to trade in their vacation. Mandatory time Away (MTA) is commonly used as a detective / preventive control for sensitive business tasks.
  • Many more
It is evident, that static role models cope badly with such dynamic requirements.
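Such dynamic constraints can be sketched as a run-time check over a request context; all attribute names and thresholds below are illustrative assumptions, mirroring the examples above:

```python
def context_permits(ctx: dict) -> bool:
    """Deny if any dynamic constraint is violated. The static role
    grants the maximum; context can only restrict it further."""
    if ctx.get("device") == "private-smartphone":
        return False          # device considered less secure
    if ctx.get("on_vacation") and ctx.get("action") == "trade":
        return False          # mandatory absence (MTA) for traders
    if ctx.get("auth_strength", 0) < 2:
        return False          # require at least two-factor strength
    return True

assert context_permits({"device": "managed-laptop", "auth_strength": 2})
assert not context_permits({"device": "managed-laptop", "auth_strength": 2,
                            "on_vacation": True, "action": "trade"})
```

Because the context is only known at request time, no amount of static role modelling can pre-compute these decisions – which is exactly the limitation the text describes.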

What is ABAC?

To avoid “role explosions” and to provide for higher agility, several attempts have been made. Likewise, recent interest in attribute-based access control (ABAC) suggests that attributes and rules could either replace RBAC or make it simpler and more flexible.

The attribute-based access control (ABAC) model to date is not a rigorously defined approach. Its central idea is that access can be determined based on various attributes presented by a subject. These contemplations can be traced back to A.H. Karp, H. Haury, and M.H. Davis, “From ABAC to ZBAC: the Evolution of Access Control Models”, tech. report HPL-2009-30, HP Labs, 21 Feb. 2009.

Hereby rules specify conditions under which access is granted or denied.

For example ...
  • A bank might allow access if the subject is a teller, working during the hours from 7:30 am to 5:00 pm, or the subject is a supervisor or auditor working those same hours who also has management authorization.
  • This approach at first sight appears more flexible than RBAC because it does not require separate roles for relevant sets of subject attributes, and rules can be implemented quickly to accommodate changing needs. The trade-off for this flexibility is the complexity introduced by the high number of cases that must be considered.
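The bank example above can be sketched as an ABAC rule evaluated over subject attributes and time of day; the attribute names are assumptions:

```python
from datetime import time

def bank_access(subject: dict, now: time) -> bool:
    """ABAC rule for the bank example: access depends on the subject's
    attributes and the current time, not on an administered role."""
    in_hours = time(7, 30) <= now <= time(17, 0)
    if subject.get("position") == "teller":
        return in_hours
    if subject.get("position") in {"supervisor", "auditor"}:
        return in_hours and subject.get("mgmt_authorization", False)
    return False

assert bank_access({"position": "teller"}, time(9, 0))
assert not bank_access({"position": "supervisor"}, time(9, 0))  # no authorization
assert bank_access({"position": "auditor", "mgmt_authorization": True}, time(16, 0))
```

Note how the rule handles three subject populations without any role administration – and how quickly the case count would grow if more conditions were added, which is the flexibility-versus-complexity trade-off described above.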
Providing attributes from various disparate sources is a further task. Attributes may stem from different sources with different reliability, resulting in different levels of trust we place in them.

ABAC (sometimes referred to as Policy-Based Access Control, PBAC, or Claims-Based Access Control, CBAC) was proposed as a solution to these new issues.

As it evolved, it was also called Risk-Adaptive Access Control (RAdAC). As the model and its application are still emerging, there is currently no widely accepted ABAC model as there is for DAC, MAC and RBAC. Although considerable literature has been published, there is no agreement on what ABAC exactly means.

It is however safe to state that attributes and policies (made of rules) are used in ABAC to derive access decisions. As those attributes may change at run time, the ability to take these policy decisions at run time is assumed too.
Figure 3: Agility creeps into the model from the increasingly dynamic corporate context


As discussed above, both approaches RBAC and ABAC have their specific advantages and disadvantages. RBAC trades up-front role structuring effort for ease of administration and user permission review, while ABAC makes the reverse trade-off: it is easy to set up, but analysing user permissions can be problematic.

This insight comes less as a surprise if we start viewing both models as incomplete fragments – projections of some richer model. And in fact, rarely does any organization use a pure RBAC or ABAC approach. Typically attributes augment or parameterise roles. They can also be used independently to assign basic resources to digital identities which are not linked to functions performed within the organization.

This view supports the strong impression that the discussions of RBAC vs. ABAC tend to search for the right answers to the wrong question. It would rather make sense to ask how much R vs. A should go into our xBAC model to best serve our needs.


Coming to the above conclusion, it is not surprising to find that the ‘inventors’ of RBAC themselves came up with no fewer than three different suggestions for how to combine RBAC and ABAC.

As D. Richard Kuhn, Edward J. Coyne and Timothy R. Weil present in their article “Adding Attributes to Role-Based Access Control” (IEEE Computer, vol. 43, no. 6, June 2010, pp. 79-81), there are three approaches to handle the relationship between roles and attributes, all retaining some of the administrative and user-permission-review advantages of RBAC while allowing the access control system to work in a rapidly changing environment:
Figure 4: There are 3 basic models  discussed here to combine RBAC and ABAC
  1. Dynamic roles. Attributes such as time of day are used by a front-end module to determine the subject’s role, retaining a conventional role structure but changing role sets dynamically (R. Fernandez, Enterprise Dynamic Access Control Version 2 Overview, US Space and Naval Warfare Systems Centre, 1 Jan. 2006; http://csrc.nist.gov/rbac/EDACv2overview.pdf). Some implementations of dynamic roles might let the user’s role be fully determined by the front-end attribute engine, while others might use the front end only to select from among a predetermined set of authorized roles.
  2. Attribute-centric. A role name is just one of many attributes. In contrast with conventional RBAC, the role is not a collection of permissions but the name of an attribute called role. This approach’s main drawback is the rapid loss of RBAC’s administrative simplicity as more attributes are added. It also suffers from ABAC’s potential problems when determining the risk exposure of a particular employee position. However, this 2nd scenario could serve as a good approach for a rapid start, generating early results by automating the assignment of all those permissions which can be granted without deeper knowledge of the digital identity’s job function.
  3. Role-centric. Attributes are added to constrain RBAC. Constraint rules that incorporate attributes can only reduce permissions available to the user, not expand them. Some of ABAC’s flexibility is lost because permission sets are still constrained by role, but the system retains the RBAC capability to determine the maximum set of user-obtainable permissions. As an aside, developers explicitly designed the formal model for RBAC, introduced in 1992, to accommodate additional constraints being placed on a role. This approach, by the way, is the one envisioned as the natural approach by KuppingerCole.
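The role-centric approach can be sketched as follows: the role yields the maximum permission set, and attribute-based constraint rules only subtract from it; the role, permission and context names are illustrative assumptions:

```python
# Role-centric combination: the role defines the maximum permission
# set; attribute rules can only reduce it, never expand it.
ROLE_PERMS = {"trader": {"view-positions", "close-contract"}}

def constrained_permissions(role: str, ctx: dict) -> set:
    """Start from the role's maximum set and subtract per constraint."""
    perms = set(ROLE_PERMS[role])
    if ctx.get("on_vacation"):            # mandatory-absence constraint
        perms.discard("close-contract")
    if ctx.get("auth_strength", 0) < 2:   # weak authentication
        perms.discard("close-contract")
    return perms

assert constrained_permissions("trader", {"auth_strength": 2}) == \
    {"view-positions", "close-contract"}
assert constrained_permissions("trader",
    {"on_vacation": True, "auth_strength": 2}) == {"view-positions"}
```

Because constraints only subtract, the role still bounds the maximum obtainable permissions – which is exactly the reviewability property this third approach preserves.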


In a dynamic role meta model …
  • Roles can be created at runtime
  • So can constraints
  • They are rule / attribute pairs
  • Roles & constraints can be deployed dynamically too.
  • Dynamicity is propagated from constraints and/or from functional roles to business roles and authorisations
  • Entitlements and identities, however, remain static.
Figure 5: Roles and constraints may be created and / or used dynamically
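A minimal sketch may illustrate the dynamic-role idea: a front-end module evaluates rule / attribute pairs at request time to select from a predetermined set of authorised roles. All names and rules below are assumptions for illustration:

```python
# Hypothetical sketch of dynamic roles: the role structure stays
# conventional, but which role applies is determined at run time
# from the subject's attributes.

AUTHORISED_ROLES = {"clerk", "supervisor"}   # static set, assumed here

# Constraints as rule / attribute pairs: (attribute, predicate) -> role
DYNAMIC_ROLE_RULES = [
    (("seniority", lambda v: v >= 5), "supervisor"),
    (("seniority", lambda v: v < 5), "clerk"),
]

def resolve_role(identity_attributes):
    """Pick the subject's current role from its attributes; the role set
    itself stays static while the assignment is decided dynamically."""
    for (attr, predicate), role in DYNAMIC_ROLE_RULES:
        if predicate(identity_attributes.get(attr, 0)) and role in AUTHORISED_ROLES:
            return role
    return None

print(resolve_role({"seniority": 7}))   # -> supervisor
```

New rules could equally be appended to `DYNAMIC_ROLE_RULES` at run time, which is the dynamicity the meta model above describes.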

Having said all this, we can safely conclude that sticking with one model in its pure form will limit our expressive power and lead to suboptimal results. The question should be less whether we prefer the A or the R in the respective xBAC model, but rather how much of each to introduce at what point in time. Even a combination of the three approaches mentioned in the chapter above need not lead to model degeneration; rather, it has the potential to lead to an optimal model.


  1. D.F. Ferraiolo and D.R. Kuhn (1992) "Role Based Access Control", 15th National Computer Security Conference, October, 1992
  2. M. Blaze, J. Feigenbaum, J. Ioannidis, “The KeyNote Trust-Management System Version 2”, IETF RFC 2704, September 1999
  3. K. Brown, “Exploring Claims-Based Identity”
  4. A. Pimlott and O. Kiselyov, “Soutei, a Logic-Based Trust-Management System”, FLOPS 2006, 8th International Symposium on Functional and Logic Programming, Fuji-Susono, Japan, April 24-26, 2006. Also in Springer’s Lecture Notes in Computer Science 3945/2006, pp. 130-145.
  5. D. Richard Kuhn, Edward J. Coyne, Timothy R. Weil, “Adding Attributes to Role-Based Access Control”, IEEE Computer, vol. 43, no. 6 (June, 2010), pp. 79-81

More readings

  1. Anderson, A. 2004. XACML Profile for Role Based Access Control (RBAC).
  2. Anderson, R.J. 2001. Security Engineering: A Guide to Building Dependable Distributed Systems. New York: Wiley Computer Publishing.
  3. Barkley, J.F. (no date). “Workflow Management Employing Role-Based Access Control.” U.S. Patent #6,088,679.
  4. Barkley, J.F. 1995a. “Application Engineering in Health Care.” Second Annual CHIN Summit. Chicago, IL..
  5. Barkley, J.F. 1995b. “Implementing Role-based Access Control Using Object Technology.” First ACM Workshop on Role-Based Access Control.
  6. Barkley, J.F., and A.V. Cincotta. 1998. “Managing Role/Permission Relationships Using Object Access Types.” Third ACM Workshop on Role-Based Access Control, Fairfax, VA.
  7. Barkley, J.F., and A.V. Cincotta. 2001. “Implementation of Role/Group Permission Association Using Object Access Type.” U.S. Patent No. 6,202,066.
  8. Barkley, J.F., A.V. Cincotta, D.F. Ferraiolo, S. Gavrila, and D.R. Kuhn. 1997. “Role-based Access Control for the World Wide Web.” 20th National Computer Security Conference.
  9. Barkley, J.F., D.R. Kuhn, Rosenthal, Skall, and A.V. Cincotta. 1998. “Role-Based Access Control for the Web.” CALS Expo International & 21st Century Commerce 1998: Global Business Solutions for the New Millennium.
  10. Bednarz, J. 2005. “Compliance: Thinking Outside the Sarbox.” Network World. Accessed 10/31/2008.
  11. Bertino, E. and R. Sandhu. 2005. “Database Security—Concepts, Approaches, and Challenges.” IEEE Transactions on Dependable and Secure Computing 2(1): 2-19.
  12. Bokhari, Z. 2009. Standard & Poor’s Industry Surveys, Computers: Software (April 23); company Web sites; U.S. Code 44 (2006), Information Security, § 3532 (b)(1). Accessed February 5, 2009.
  13. Bureau of Economic Analysis. 2009. “National Income and Product Accounts: Table 5.3.5. Private Fixed Investment by Type.” Accessed April 14, 2009.
  14. Byrnes, C., Vice-President: Services and Systems Management, The META Group. June 13, 1997. “Security Administration Grows Up.” An analyst report produced for Tivoli, an IBM company.


Authorisation – what does it mean after all?

In the field of Identity- & Access Management, terms like authentication and authorisation are well understood, frequently used, and everyone knows what they mean. Really?

Well, Identity Management is about managing identities, e.g. of employees. Access Management consequently deals with access, e.g. to information objects. And it is quite obvious that, before you may access any protected information object, you first have to be authenticated (are you the one you claim to be?) and second you need to be authorised (are you allowed to perform that particular activity on a specific information object?).

In a contemporary architecture, which may be considered as such when it is ‘service oriented’, there would hence be an authentication service, taking care of the authentication task, and an authorisation service involved. Both perform run-time activities on an operational level, rather than administrative tasks on a management level.
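The two-step gate can be sketched as two tiny services called in sequence. All names, the credential store and the grant table below are assumptions made purely for illustration:

```python
# Minimal sketch of the two run-time services: access to a protected
# object is granted only after authentication, then authorisation.

CREDENTIALS = {"alice": "s3cret"}                 # illustrative store
GRANTS = {("alice", "read", "report")}            # illustrative grants

def authentication_service(user, password):
    """Are you the one you claim to be?"""
    return CREDENTIALS.get(user) == password

def authorisation_service(user, action, obj):
    """Are you allowed to perform that activity on that object?"""
    return (user, action, obj) in GRANTS

def access(user, password, action, obj):
    if not authentication_service(user, password):
        return "not authenticated"
    if not authorisation_service(user, action, obj):
        return "not authorised"
    return "access granted"

print(access("alice", "s3cret", "read", "report"))   # -> access granted
```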

So it’s clear now, isn’t it?

But what does authorisation mean? When is a digital identity authorised to access a protected information object in a defined way? Is it done 1) when the privilege is assigned to her / him (logically at administration time) or 2) when this authorisation is enforced (physically at runtime)?


There might even be two warring factions, and I have been a member of each of them, one at a time. In the essential world (http://genericiam.blogspot.de/2010/08/modelling-fundamentals.html) of course 1) applies, because once the roles / attributes are assigned, nothing more is left to be done (http://genericiam.blogspot.de/2012/02/apply-approve.html). For the SOA people, who live in the real, physical, world, it might rather be 2), as here you may easily design a single-tasked service, an equivalent to an authentication service.


These topics might not appear worth discussing here. But I encountered this discussion once at one of my customers. The good news, however, is that we are not the first and only ones to be confronted with this schism.
And I think the XACML people (http://xml.coverpages.org/XACML-v30-HierarchicalResourceProfile-WD7.pdf) have done quite a good job. You may remember that with PRP, PIP, PAP, PDP & PEP they defined five fundamental processors.

They perform the following tasks …
  1. The PRP does the policy retrieval,
  2. The PIP does the policy information,
  3. The PAP does the policy administration,
  4. The PDP does the policy decision and finally
  5. The PEP does the policy enforcement.
The second P at the end of each acronym obviously means ‘point’. In process notation the five processors do …
  1. Retrieve policy
  2. Inform about policy
  3. Administer policy
  4. Decide policy and
  5. Enforce policy.
All of them may be seen as processes from the authorisation ecosystem.
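How these processors cooperate at run time can be sketched in a few lines. This is an assumed, heavily simplified interplay, not the XACML request/response format: the PEP intercepts the request, the PDP decides it against policies fetched via the PRP and attributes supplied by the PIP, while the PAP merely maintains the policy store beforehand:

```python
# Hypothetical sketch of the five XACML processors cooperating.

POLICY_STORE = {}                                  # maintained by the PAP

def pap_administer(policy_id, rule):
    POLICY_STORE[policy_id] = rule                 # administer policy

def prp_retrieve():
    return POLICY_STORE.values()                   # retrieve policy

def pip_inform(subject):
    # inform about policy-relevant attributes (illustrative data)
    return {"department": "finance"} if subject == "alice" else {}

def pdp_decide(subject, action, resource):
    attrs = pip_inform(subject)                    # decide policy
    return all(rule(attrs, action, resource) for rule in prp_retrieve())

def pep_enforce(subject, action, resource):
    """Enforce policy: permit or deny the intercepted request."""
    return "Permit" if pdp_decide(subject, action, resource) else "Deny"

# Administration time (the PAP), well before any request arrives:
pap_administer("finance-read",
               lambda attrs, act, res: attrs.get("department") == "finance")

print(pep_enforce("alice", "read", "ledger"))      # -> Permit
```

The split between the one administration-time call and the four run-time functions mirrors exactly the type 1) / type 2) distinction discussed above.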

As ‘policy retrieval’ and ‘policy information’ can be matched with the well-known directory service and / or database, where the ingredients for the subsequent activities are stored, these activities can well be seen as lying outside of the core authorisation.

‘Administer policy’ however is the type 1) essential activity from above.

Perhaps the illustrations created by Axiomatics may help here: see www.axiomatics.com

The remaining two activities ‘decide policy’ and ‘enforce policy’ are performed at run-time and they would be part of the type 2) authorisation activity of the SOA people.

The confusion is also related to the role based (RBAC) vs. attribute based (ABAC) access control discussion.
  • In (static) RBAC thinking an identity is assigned at least one role (the R in RBAC), and this role brings along the elementary entitlements dangling from it. On the essential level, everything needed to authorise this identity is thereby done. The entity containing this assignment can well be called an ‘authorisation’.
  • In the (dynamic) ABAC approach, rules operate on attributes (the A in ABAC), which in turn are associated with the identity. In case the attributes used here can be considered static, i.e. they stay unchanged until the next policy administration, authorisation on the essential level would happen, as in the RBAC world, when the rules are put in place. However, as rules might be complicated and are not directly assigned to an identity, this case is less obvious and reveals its truth only after closer examination.
If however attributes (not to mention rules) may change from one policy decision to the next, the policy decision would be the authorisation step.
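The two readings of ‘authorisation’ can be contrasted in a short sketch. All names and thresholds below are assumptions for illustration:

```python
# Hypothetical sketch: 'authorisation' as a persisted assignment made at
# administration time vs. 'authorisation' as a decision taken at run time.

assignments = {}                       # the essential 'authorisation' entity

def authorise_static(identity, role):
    """Administration time: after this call nothing is left to be done."""
    assignments.setdefault(identity, set()).add(role)

def decide_dynamic(identity_attrs, rule):
    """Run time: the answer may differ from one decision to the next,
    because the attributes it operates on may change in between."""
    return rule(identity_attrs)

authorise_static("bob", "approver")
print("approver" in assignments["bob"])                                    # -> True
print(decide_dynamic({"risk_score": 80}, lambda a: a["risk_score"] < 50))  # -> False
```

In the static case the authorisation exists as a record; in the dynamic case it only materialises, and may evaporate again, at each policy decision.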

For real-world static RBAC authorisations you would anyway need roles and rules in combination. So exchanging the R for an A makes less of a difference than the increase in dynamicity does.

I think I will adapt my essential processes to reflect this thinking. And the time has come anyway to amend them with a ‘physical ring’ in order to cover the physical run-time processes as well.