
This is an archived copy. View the original at https://www.f5.com/labs/articles/cisotociso/a-model-for-leveraging-the-complexity-of-identities

A Model for Leveraging the Complexity of Identities
August 8, 2023
Sam Bisbee

Introduction

Identity is one of those bedrock concepts in security that seems simple and intuitive when we use it in our daily lives (“Hi Bob!”), about ourselves (“I'm a chef!”), and in personal (“You’re my friend!”) and intimate (“I love you!”) contexts. Yet when we build or deploy systems that rely on identity, whether a customer-facing application, an internal IT system, or Single Sign-On (SSO), we rapidly fall back to the coarsest forms of identity, like a username or email address. We ignore the complexity instead of embracing it.

Identity

Identity is multifaceted, changes over time, and is far greater than a username or email address. It crosses system and organizational boundaries, directly affects system usability, and has privacy and regulatory implications. Identity is like data in its ubiquity and its inextricable link to risk. This necessitates a holistic view across system implementation, architecture, threat modeling, incident response, training curriculum, and many other areas.

Identity is seeing some evolution as security programs and their vendors iterate on the identity-centric Zero Trust security model, but few-to-no commercial solutions go beyond tying a device and its security posture to the authenticated user on that device. This is despite the principles of continuous assessment and verification in Zero Trust, which many vendors speak to in sales pitches (“it’s like a credit score for your users and systems”) but rarely implement. Most commercially available Zero Trust solutions are little more than user-credentials-to-cryptographic-certificate workflow automation for initial authentication and authorization.

One successful evolution in identity has been the inclusion of non-human identities like servers, cloud workloads, laptops, and mobile devices. Identity applies to both human (employee, third party, customer, etc.) and non-human actors, as they both share roles and responsibilities in a system. This is especially critical since non-human identities are increasing in population and privileged access faster than human identities as companies rapidly integrate systems and data internally and externally.[1]

Identity as a Layered Model

An identity’s facets layer on top of each other (see diagram below), relying on the integrity of their supporting layers. Facets will change over time as employees transfer between teams and departments, new applications are onboarded, processes change, and organizational structures evolve.

Layered Identity Model (top layer to bottom)

  Behavioral Identity: Observed Behavior; Expected vs Actual Behavior
  Configured Identity: Human Attributes (Permissions & User Metadata); Endpoint/Workload Attributes (Permissions, Configuration, & Supply Chain)
  Cryptographic Identity: Public Key Infrastructure (PKI); Proving with Math and Process

We will walk these layers bottom-up to understand their objectives, challenges, likely available data sources, and how they interact with the other layers.

Cryptographic Identity

Objective: Prove identity with well-established cryptographic methods that take the human out of the loop to reduce error rates and enable automation. Commonly implemented as a public-key cryptosystem.

Challenges: Key management discipline and PKI are notoriously difficult to get right and scale, requiring expertise and regular maintenance to balance confidentiality, integrity, and availability. Modern KMS and secrets management systems have solved many operational concerns but are still fragile when bootstrapping remote identities, like when onboarding a new employee or launching infrastructure in a public cloud, which may cause an untrustworthy identity to become trusted.

Likely Data Sources: Your PKI ecosystem, such as Key Management Systems (KMS) and Certificate Authorities (CAs).
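To make the objective concrete, here is a minimal Python sketch of a challenge-response proof of possession using an Ed25519 key pair and the third-party pyca/cryptography library. The names and the flow are illustrative, not a prescribed design.

    import os
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # Enrollment: the identity generates a key pair; only the public key
    # is registered with the verifier (e.g., your CA or directory).
    identity_key = Ed25519PrivateKey.generate()
    registered_public_key = identity_key.public_key()

    # Later, the verifier issues a random challenge and the identity signs it,
    # proving possession of the private key with no human in the loop.
    challenge = os.urandom(32)
    signature = identity_key.sign(challenge)

    try:
        registered_public_key.verify(signature, challenge)
        print("cryptographic identity proven for this challenge")
    except InvalidSignature:
        print("proof failed; do not trust this identity")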

History is littered with failed cryptosystems running afoul of technical, usability, social, and political issues, and they are a phenomenal example of how security is a multi-disciplinary specialty. Specific to identity, security issues often arise from misinterpreting what the possession of a Cryptographic Identity like a public key means.

Possession of a Cryptographic Identity typically asserts the successful completion of some attestation process. For a human identity, that process could be employee onboarding by Human Resources (HR) or a Know Your Customer (KYC) check for a new account; NIST SP 800-63A, Digital Identity Guidelines: Enrollment and Identity Proofing Requirements, is a great resource to begin learning about these processes. For a non-human identity, the Secure Production Identity Framework For Everyone (SPIFFE) Project is a reasonable approach with growing adoption that includes aspects of a workload’s Configured Identity in its assertions.
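As a rough illustration of consuming such an attestation, the Python sketch below checks whether a presented SPIFFE ID belongs to the expected trust domain and maps to a known workload. The trust domain, workload paths, and function name are assumptions for illustration; a real deployment would rely on SPIFFE/SPIRE tooling to verify the underlying SVID.

    from urllib.parse import urlparse

    # Illustrative values; a real deployment gets these from its SPIFFE/SPIRE configuration.
    EXPECTED_TRUST_DOMAIN = "prod.example.com"
    KNOWN_WORKLOAD_PATHS = {"/ns/payments/sa/customer-api", "/ns/payments/sa/billing-worker"}

    def is_acceptable_spiffe_id(spiffe_id: str) -> bool:
        """Accept only SPIFFE IDs from our trust domain that map to a known workload."""
        parsed = urlparse(spiffe_id)
        if parsed.scheme != "spiffe":
            return False
        if parsed.netloc != EXPECTED_TRUST_DOMAIN:
            return False
        return parsed.path in KNOWN_WORKLOAD_PATHS

    print(is_acceptable_spiffe_id("spiffe://prod.example.com/ns/payments/sa/customer-api"))  # True
    print(is_acceptable_spiffe_id("spiffe://attacker.example/ns/payments/sa/customer-api"))  # False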

Possession of a Cryptographic Identity is like an office badge for physical access in that both assume they weren’t shared with a coworker or stolen. Theft in this context often means malware running on an endpoint and exfiltrating cryptographic material found in common locations, like VPN or public cloud provider keys, or an engineer accidentally uploading source code to a public repository with secrets embedded in the code or its configuration. See MITRE ATT&CK T1649 Steal or Forge Authentication Certificates for more information, including forgery.
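A lightweight way to reduce the accidental-upload variant of this risk is to scan code and configuration for obvious credential patterns before it leaves the laptop. The Python sketch below uses two well-known patterns; the pattern list and file handling are illustrative and not a replacement for a dedicated secret scanning tool.

    import re
    import sys
    from pathlib import Path

    # Illustrative patterns; real scanners ship far larger, tuned rule sets.
    SECRET_PATTERNS = {
        "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
        "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    }

    def scan_file(path: Path) -> list[str]:
        findings = []
        text = path.read_text(errors="ignore")
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(text):
                findings.append(f"{path}: possible {name}")
        return findings

    if __name__ == "__main__":
        root = Path(sys.argv[1] if len(sys.argv) > 1 else ".")
        for file_path in root.rglob("*"):
            if file_path.is_file():
                for finding in scan_file(file_path):
                    print(finding)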

Possession of a Cryptographic Identity is also separate from presence. Even if the secret material is safely stored against theft, like in the device’s TPM, this does not protect against the employee leaving their device unattended and unlocked. While malicious use of an unlocked device by another employee or outright device theft are reasonable examples, the less nefarious case of the employee’s child or roommate using the device and unintentionally introducing malware is much more likely. Mitigations like requiring a PIN or some other memorized secret to step up access privileges may be reasonable.
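The following Python sketch separates possession from presence by requiring a recent memorized-secret verification before a privileged action; the timeout value and function names are assumptions for illustration.

    import time

    # Illustrative policy: possession of the device credential alone is not enough
    # for privileged actions; the user must also have re-entered a PIN recently.
    STEP_UP_MAX_AGE_SECONDS = 5 * 60

    def allow_privileged_action(has_valid_device_credential: bool,
                                last_pin_verification_epoch: float) -> bool:
        if not has_valid_device_credential:
            return False  # possession check failed
        pin_age = time.time() - last_pin_verification_epoch
        return pin_age <= STEP_UP_MAX_AGE_SECONDS  # presence check

    # Example: the credential is present, but the PIN was last verified an hour ago.
    print(allow_privileged_action(True, time.time() - 3600))  # False: require step-up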

None of this is to say that Cryptographic Identities shouldn’t be trusted. They are extremely practical and powerful, particularly to increase security in communications and data storage without burdening the end user, as has been seen with the adoption of TLS on commercial websites.

Configured Identity

Objective: A positive security model that enumerates the identity in terms of what it is and what it is allowed to do, so that every disallowed action does not have to be enumerated.

Challenges: First, integrating systems to keep this data fresh and relevant can introduce new types of risk, such as concerns over what HR Information System data is exposed to non-HR systems and personnel. Second, while permissions tooling has gotten better for teams with programming capabilities (e.g., Open Policy Agent and Rego), it is still a notoriously error-prone and labor-intensive task.

Likely Data Sources: Human: Authentication and authorization systems (SSO, IAM, directories, etc.), and Finance and HR Information Systems after discussion with stakeholders like Legal, HR, and Privacy.

Endpoint/Workload: Configuration management, infrastructure metadata, asset inventory, CI/CD output, software composition analysis (SCA), vulnerability management, and a configuration management database (CMDB).

The Configured Identity is where the organization enhances the identity with its context and expectations. For example, both human and non-human identities will typically be added to a Role Based Access Control (RBAC) system to grant privileges, like giving the CI/CD system the role to deploy a release artifact but not push code, and an engineer the role to push code but not deploy.
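A minimal Python sketch of that separation of duties encodes the two example roles as an allow-list and checks actions against it; the role and action names are illustrative.

    # Positive security model: roles enumerate what an identity may do;
    # everything not listed is denied by default.
    ROLE_PERMISSIONS = {
        "ci-cd-system": {"deploy-release-artifact"},
        "engineer": {"push-code"},
    }

    def is_allowed(role: str, action: str) -> bool:
        return action in ROLE_PERMISSIONS.get(role, set())

    print(is_allowed("ci-cd-system", "deploy-release-artifact"))  # True
    print(is_allowed("ci-cd-system", "push-code"))                # False
    print(is_allowed("engineer", "push-code"))                    # True
    print(is_allowed("engineer", "deploy-release-artifact"))      # False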

The metadata added to the Configured Identity is often the most interesting for making authorization decisions and for informing detection programs, leading to Attribute Based Access Control (ABAC). For example, an employee may be allowed to grant themselves just-in-time access to a production system if they are in the right department, are in the on-call rotation, and have completed the necessary security awareness training for production access. Depending on the environment’s availability risk, the self-granted access could simply be logged, or only permitted if there is also an active availability incident.
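A minimal Python sketch of that ABAC decision might look like the following; the attribute names, department value, and the availability-incident condition are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class EmployeeAttributes:
        department: str
        on_call: bool
        completed_production_training: bool

    def may_self_grant_production_access(employee: EmployeeAttributes,
                                         active_availability_incident: bool,
                                         require_incident: bool) -> bool:
        """Attribute-based check for just-in-time production access."""
        if employee.department != "platform-engineering":
            return False
        if not employee.on_call or not employee.completed_production_training:
            return False
        # In higher-risk environments the grant is only permitted during an incident;
        # otherwise it is simply allowed and logged for review.
        if require_incident and not active_availability_incident:
            return False
        return True

    alice = EmployeeAttributes("platform-engineering", on_call=True, completed_production_training=True)
    print(may_self_grant_production_access(alice, active_availability_incident=False, require_incident=True))   # False
    print(may_self_grant_production_access(alice, active_availability_incident=False, require_incident=False))  # True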

Non-human endpoint and workload identities often have more information in their Configured Identities than human identities do, because configuration and asset management machine data is already available. This is especially straightforward for companies that have invested in Infrastructure as Code and automation. For example, HashiCorp Terraform and Chef configurations will explicitly declare how the operators expect the environment to behave: “it’s a multi-geo stateless API service that runs X version of Y app on the company’s standard machine image, accepts HTTPS on 443/TCP from the API gateways, and persists data to Z database as a service from the cloud provider.” This is gold for security teams and machine policy generation.
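Here is a hedged Python sketch of turning that kind of declaration into machine-readable expectations; the field names and values are fabricated to loosely mirror what Infrastructure as Code output could provide, not a real Terraform or Chef schema.

    # Expected behavior derived from Infrastructure as Code metadata (illustrative).
    customer_api_configured_identity = {
        "service": "customer-api",
        "stateless": True,
        "regions": ["us-east-1", "eu-west-1"],
        "machine_image": "company-standard-2023.07",
        "ingress": [{"protocol": "HTTPS", "port": 443, "from": "api-gateways"}],
        "egress": [{"protocol": "TLS", "port": 5432, "to": "managed-database-z"}],
    }

    def expected_listening_ports(identity: dict) -> set[int]:
        """Derive the allow-list of listening ports for policy generation and monitoring."""
        return {rule["port"] for rule in identity["ingress"]}

    print(expected_listening_ports(customer_api_configured_identity))  # {443}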

Similarly, the spike in investment in software composition analysis (SCA) and software bills of materials (SBOMs) should supply inherited traits for an application’s identity if the SBOM is a fully recursive enumeration of all internal and external dependencies. There are two ways of looking at this (a comparison sketch follows the list):

  1. If the same version of a release artifact (“customer-api-1.0.0”) is running on multiple systems and the SBOMs differ across instances, then this suggests either that the build system is not consistent, which will create operational debugging issues, or that some of the release artifacts may have been injected with malicious code.
  2. If different versions of a release artifact are running (“customer-api-1.0.0” and “customer-api-1.1.0”), then internal systems may decide to only allow traffic from certain versions. For example, to ensure both employees and systems demonstrate a “constantly upgrade and roll forward” behavior, systems storing sensitive data may stop accepting traffic from releases that aren’t the latest version after the organization’s release window or security patching SLA has passed (often mitigating contractual risk).

    Similarly, some critical systems may deny traffic from a version of “customer-api” that sits on the Internet (e.g., a public API) if it has a known and trivially exploitable remote vector vulnerability in its SBOM. The risk of that workload already being compromised, or being compromised in the future after accidentally storing some sensitive data, like in a debug or log file, may be unacceptable.
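The promised sketch covers the first comparison: flagging instances whose SBOM differs from the fleet consensus for the same release version. The instance names and component lists are fabricated for illustration.

    from collections import Counter

    # SBOMs observed per running instance of customer-api-1.0.0 (illustrative data).
    observed_sboms = {
        "instance-a": frozenset({"openssl-3.0.9", "libxml2-2.10.4", "customer-api-1.0.0"}),
        "instance-b": frozenset({"openssl-3.0.9", "libxml2-2.10.4", "customer-api-1.0.0"}),
        "instance-c": frozenset({"openssl-3.0.9", "libxml2-2.10.4", "customer-api-1.0.0", "unexpected-miner-0.1"}),
    }

    def instances_diverging_from_consensus(sboms: dict) -> list[str]:
        """Return instances whose SBOM differs from the most common SBOM for this release."""
        consensus, _count = Counter(sboms.values()).most_common(1)[0]
        return [name for name, sbom in sboms.items() if sbom != consensus]

    # A non-empty result suggests an inconsistent build system or a tampered artifact.
    print(instances_diverging_from_consensus(observed_sboms))  # ['instance-c']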

It is easy to look at some of these more advanced examples, especially those that weigh system availability against confidentiality and integrity, as futuristic or unobtainable. It’s true that many security teams are contending with much earlier maturity concerns for Configured Identity, like a clean directory structure and ubiquitous SSO.

These later maturity targets are shared because they show how Configured Identity extends beyond just authentication and authorization. Knowing what you have in your environment allows you to build processes, training, feedback loops, and automation. It allows you to monitor the environment to understand how those Configured Identities drift over time, measuring the gap between the Configured Identity and its ground truth behavior.

Behavioral Identity

Objective: Continuously monitor expected versus actual behavior, both for threat detection and to grow the team’s understanding of new behaviors in their environment.

Challenges: Observability is still a challenge for many organizations. While the technology has gotten easier and more ubiquitous, operationalizing the data to turn telemetry into knowledge and action remains difficult despite massive expenditures on technology.

Likely Data Sources: Security and operational monitoring systems like your SIEM, EDR, centralized logging, and network monitoring. Dedicated systems and data collection may not be needed if the organization has historically prioritized investment in observability, treating Behavioral Identity as an output or feedback loop from that investment.

Behavioral Identity can be the largest mental hurdle since it goes beyond the traditional view of identity to include an identity’s actions. We regularly see this in our day-to-day interactions: “That work goes to Bob, but make sure you document and follow up because he has a habit of not following through on his commitments.” Bob’s identity is more than his role and responsibilities; it’s also how he behaves.

For example, network intrusion detection systems (NIDS) traditionally tried to infer a system’s Configured Identity from its Behavioral Identity, typically by fingerprinting network traffic: observing HTTP versus SQL traffic suggests a web server rather than a database server. This kind of inference historically had significantly lower confidence and a higher false positive rate than the NIDS simply being told the Configured Identity by the system, which subsequent Infrastructure as Code developments have made easier.

This helps to highlight the difference between expected and actual behavior (a drift-check sketch follows the list):

  1. The expected behavior is the Configured Identity. A workload is configured to be a web server, or a remote employee is configured to be a non-privileged user. Operators have certain expectations about how a Configured Identity will behave, like a web server only performing HTTP traffic or a non-privileged user never taking privileged actions or accessing sensitive data.
  2. The actual behavior is the ground truth that occurs whether you observe it or not. When actual behavior drifts from expected behavior, it’s usually due to a system change, like an OS upgrade or unexpected employee international travel, which should usually be reflected in the Configured Identity or monitoring. Less often, this drift is due to an unaccepted risk like “shadow IT” or a security incident like a remote threat actor.
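The drift check referenced above could look like this minimal Python sketch, comparing observed network behavior against the expectations recorded in the Configured Identity; the data shapes are illustrative, and a real program would source them from the monitoring systems listed earlier.

    # Expected behavior from the Configured Identity (illustrative).
    expected = {"web-01": {("HTTPS", 443)}}

    # Actual behavior observed by network monitoring (illustrative).
    observed = {"web-01": {("HTTPS", 443), ("SSH", 22)}}

    def behavioral_drift(expected_flows: dict, observed_flows: dict) -> dict:
        """Flows seen in the environment that the Configured Identity does not explain."""
        return {
            host: flows - expected_flows.get(host, set())
            for host, flows in observed_flows.items()
            if flows - expected_flows.get(host, set())
        }

    # Drift feeds detection and the feedback loop that updates the Configured Identity.
    print(behavioral_drift(expected, observed))  # {'web-01': {('SSH', 22)}}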

These divergences of expected vs actual behavior must be embraced by teams as they learn how their system behaves, feeding that knowledge back into the rest of their program.

Zooming Back Out

This identity model takes a systems view across an entire organization, not limited to authentication and authorization. Technologies like PKI and SSO are important for supportability and usability, but the material leverage comes from the architecture, processes, and feedback loops that connect an identity’s layers. Even extremely resource-constrained teams working in risk-laden or archaic environments that will never see a software update, let alone a Zero Trust architecture, will benefit from simply changing how they talk about and work with identity.

Footnotes

[ 1 ] A stark example of non-human identity is the major cloud providers (Infrastructure as a Service) including their compute services in their access layer, like AWS IAM. This allows workloads to automate the “rewiring” of the cloud with the same APIs and permissions a human operator uses, extending the human’s privileged access threat model to those compute services. This pattern is the new normal as APIs proliferate, companies automate more of their business, and current-generation generative AI companies launch natural-language-to-API bridges.