Digital Transformation Starts with a Strong Identity

When we set out to build the new identity platform, we started with the overall requirements below:

  • Support for multiple types of users and applications, plus a cleanup of our various domains to make routing easier
  • Authentication for all our clients: APIs, people who purchase our products, the 3rd parties that service our products, 3rd parties that integrate our products, and our own internal applications
  • As much continuity as possible, to avoid major disruptions to our existing systems
  • Extensibility to additional factors of authentication
  • Scalable authentication that can be used by backend services as well as mobile and web applications
  • Applications can make decisions about expiry and factor requirements
  • Consolidation of the 5 existing authentication systems
  • A decoupled microservice that supports other sites and services

Domain-based Login

We started by designing the overall login experience in concert with our domain strategy and a person-based architecture. Each domain runs multiple applications for a specific “Realm” of users. Using independent realms allowed us to solve authentication needs based on the type of person logging in. For example, our Internal realm authenticates PPLSI employees with federated authentication through Okta, while our Member realm allows more general username and password authentication. Cookies are issued in the root domain based on realm, and each application that serves the users of that realm then runs on a subdomain of that root domain.
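To make that concrete, here is a minimal sketch of how a login site might map its root domain to a realm. The domain names, realm keys, and fallback behavior are illustrative assumptions, not our production configuration.

using System.Collections.Generic;

// Illustrative domain-to-realm mapping; domain names here are placeholders.
static class RealmResolver
{
    static readonly Dictionary<string, string> RealmsByRootDomain = new()
    {
        ["internal-root-domain.com"] = "internal", // PPLSI employees, federated through Okta
        ["member-root-domain.com"]   = "member",   // people who purchase our products
        ["lawyer-root-domain.com"]   = "lawyer"    // provider law firms
    };

    public static string ResolveRealm(string loginHost)
    {
        // login.<root-domain> -> <root-domain>; the realm's cookies are issued at this root.
        var rootDomain = loginHost.StartsWith("login.") ? loginHost["login.".Length..] : loginHost;
        return RealmsByRootDomain.TryGetValue(rootDomain, out var realm)
            ? realm
            : throw new KeyNotFoundException($"No realm configured for {rootDomain}");
    }
}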

The backend Identity Service was built with .NET Core 3.1 (and later moved to .NET 5 and 6) and PostgreSQL, and deployed into our standard Kubernetes clusters. Our Login microsite, running on login.<root-domain>.com, was built with React and a .NET webserver that is responsible for authenticated calls to the identity service and for issuing cookies in the various root domains. Our Login site is also responsible for the front-end OAuth and other 3rd party federated authentications. The identity service validates these federated tokens and exchanges the credentials for authentication cookies in our domains.

Our Login application was set up to handle authentication with the identity service based on the root domain. Login runs on the login subdomain of every root domain of the realm. All navigations to the Login site pass application and path parameters to it. These are stored so we can redirect the person back once authentication is successful. For instance, when our employees attempt to access the portals where they manage their work, we redirect them to the appropriate login domain with the correct app and path parameters. The Login site recognizes the domain, stores the app and path temporarily, and pushes our employees through the Okta login process. Once the OAuth flow with Okta is complete, the Login application calls the identity service with the Internal realm. Identity then exchanges the Okta token for a common JWT that the rest of our architecture can work with.

Our Login application takes back over once it receives this token, issues cookies, and redirects the person back to their desired source. This redirect is of the form “https://<app>.<rootdomain><:port>/<path>”. An HTTP-only, secure cookie is created with the full JWT, while a second secure cookie is created with basic information from the JWT. This second cookie is where we store the name of the person who authenticated and the identity ID of the authorized person. With our cookies ready to go, we can allow access to the domain and let the various services responsible for authorization handle things like permissions and resource access.
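As a rough illustration, the cookie issuance step might look something like the following in ASP.NET Core terms. The cookie names, the JwtInfo shape, and the SameSite choice are assumptions for the sketch, not our exact production values.

using System.Text.Json;
using Microsoft.AspNetCore.Http;

// Illustrative shape of the non-sensitive data written to the second cookie.
record JwtInfo(string Name, string IdentityId);

static class AuthCookies
{
    public static void Issue(HttpResponse response, string jwt, JwtInfo info, string rootDomain)
    {
        // Full JWT: HTTP-only so browser scripts cannot read it, sent only over HTTPS.
        response.Cookies.Append("identity_token", jwt, new CookieOptions
        {
            HttpOnly = true,
            Secure = true,
            Domain = rootDomain,        // issued at the root so every subdomain app receives it
            SameSite = SameSiteMode.Lax
        });

        // Second cookie: basic, non-sensitive claims for the front end.
        var basics = JsonSerializer.Serialize(new { info.Name, info.IdentityId });
        response.Cookies.Append("identity_info", basics, new CookieOptions
        {
            Secure = true,
            Domain = rootDomain,
            SameSite = SameSiteMode.Lax
        });
    }
}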

It’s all about the JWT

Before we started implementing the new microservice and microsite, we needed to determine what access tokens would look like. Researching well-known JWT designs, reviewing our existing implementations, and revisiting our requirements led us to a reasonable design. I won’t exhaust the full list of claims, but we designed the JWT with a standard header and body structure and chose RS256 (RSA with SHA-256) as the signing algorithm for public / private key signature verification.

We kept the standard claims for subject data, issued at, ID, and issuer, but this is where we stopped with the standards. We added claims for refresh time, a key-value pair for factors that includes the factor of authentication and its time of issuance, the username at login, the machine key (OAuth followers would know this as the ID for an API client), and a multifactor-enabled claim to indicate whether the person has required MFA.
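For illustration, a payload built from those claims might look roughly like the snippet below. The non-standard claim names are assumptions chosen to match the description above, not our exact schema:

{
  "sub": "<identity id of the person>",
  "jti": "<token id>",
  "iss": "<issuer>",
  "iat": 1650000000,
  "refreshTime": 1650001800,
  "factors": { "password": 1650000000 },
  "username": "<username at login>",
  "machineKey": "<api client id>",
  "multifactorEnabled": true
}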

One of the more common claims we did not add is expiry. For our standard authentication checks, the identity service checks the issued-at of the JWT against a hard time limit derived from a variable we internally call the Impact. Impact is set by the caller when validating the cookie or JWT with the identity service and is strictly a Low, Medium, or High enumeration. By passing in the token and the expected authentication impact of whatever action is being taken, the auth service returns a result informing the caller whether the token is still valid. The cool feature this allows is variable authentication with hard time limits: a low-impact operation can be allowed an older token (e.g., getting the list of entitlements for a person), while a high-impact operation (e.g., changing a password) can require a much newer token.
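A minimal sketch of that check is below; the enum mirrors the description above, but the specific per-impact time limits are illustrative assumptions.

using System;
using System.Collections.Generic;

// Impact-based token age check; the limits shown are placeholders, not our real policy.
enum Impact { Low, Medium, High }

static class TokenAgePolicy
{
    static readonly Dictionary<Impact, TimeSpan> MaxTokenAge = new()
    {
        [Impact.Low]    = TimeSpan.FromHours(24),   // e.g. listing a person's entitlements
        [Impact.Medium] = TimeSpan.FromHours(1),
        [Impact.High]   = TimeSpan.FromMinutes(5)   // e.g. changing a password
    };

    public static bool IsStillValid(DateTimeOffset issuedAt, Impact impact) =>
        DateTimeOffset.UtcNow - issuedAt <= MaxTokenAge[impact];
}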

With the core JWT designed, we set off to dive into the factors of authentication.

Exchanging credentials for an access token

From the original set of requirements, we knew that we needed to handle multiple factors of authentication. Of the 5 different factor types, we chose two to start with:

  • Knowledge: something you know, such as a password or security question
  • Possession: something you possess, such as a Yubikey, authenticator app, or mobile device

We started with a focus on knowledge factors. Our first versions of the new identity microservice needed to support Okta and a password knowledge factor.

We wanted a single API endpoint to exchange credentials and receive a signed JWT back from the service. To support the various types of login and the different social providers, we devised a simple JSON body that could be posted:

{
  "realm": "lawyer",
  "username": "<matching a username / email from the PostgreSQL db>",
  "password": "<when password auth>",
  "federatedAuthType": "<enum of the various 3rd party providers>",
  "federatedAuthId": "<id from the 3rd party provider>",
  "federatedAuthToken": "<token from the 3rd party provider>",
  "factorId": "<id for the factor if a possession factor>",
  "factorCode": "<code for the factor id to validate>"
}

The realm for the request is decided by the domain of the Login site. Login uses the domain and maps it to a realm. The request for a token is then decorated with this property, and the data relevant to the token request is sent to the identity service. Identity first does an authentication check to ensure the request comes from the Login site, then runs through the normal cryptographic checks of the password, factor codes, and 3rd party token validation, and returns a signed JWT. Our Login application then creates the cookies in the domain and redirects to the calling application. With this flow complete, the cookies can be used in a shared capacity across the rest of the architecture, app and service alike.
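Sketched in C#, the Login backend's side of this exchange might look roughly like the following. The TokenRequest shape mirrors the JSON above, but the route and type names are assumptions for illustration only.

using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

// Illustrative request shape matching the JSON body above.
record TokenRequest(string? Realm, string? Username, string? Password,
                    string? FederatedAuthType, string? FederatedAuthId,
                    string? FederatedAuthToken, string? FactorId, string? FactorCode);

static class TokenExchange
{
    public static async Task<string> ExchangeForJwtAsync(
        HttpClient identityClient, string loginHost, TokenRequest request)
    {
        // Decorate the request with the realm resolved from the Login site's domain.
        var decorated = request with { Realm = RealmResolver.ResolveRealm(loginHost) };

        var response = await identityClient.PostAsJsonAsync("/api/tokens", decorated); // hypothetical route
        response.EnsureSuccessStatusCode();

        return await response.Content.ReadAsStringAsync();   // the signed JWT
    }
}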

Validating tokens

In order to drive adoption of the new service and march toward a truly identity-driven architecture, we turned our sights to our various technical stacks and built libraries to validate tokens against the identity service.

Validation of the token requires loading details about the key that was used to sign the JWT itself. Once the basic details of the public key are loaded, we can decode the JWT and ensure its validity. We used standard RSA crypto libraries in each language to complete the validation. Public keys are available from the identity service in JWK format and are used to validate the JWTs.
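In .NET terms, a hedged sketch of that signature check using the standard Microsoft JWT libraries might look like this. Issuer and audience policy are omitted from the sketch, and lifetime validation is disabled because expiry is governed by the impact check rather than an exp claim.

using System.IdentityModel.Tokens.Jwt;
using System.Security.Claims;
using Microsoft.IdentityModel.Tokens;

static class SignatureValidator
{
    public static ClaimsPrincipal ValidateSignature(string token, string jwkJson)
    {
        // jwkJson is one key from the JWK set published by the identity service.
        var signingKey = new JsonWebKey(jwkJson);

        var parameters = new TokenValidationParameters
        {
            IssuerSigningKey = signingKey,
            ValidateIssuer = false,      // issuer / audience policy omitted in this sketch
            ValidateAudience = false,
            ValidateLifetime = false     // no exp claim; impact checks happen separately
        };

        // Verifies the RS256 signature and returns the decoded claims.
        return new JwtSecurityTokenHandler().ValidateToken(token, parameters, out _);
    }
}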

A version of the authentication validation was implemented for our current and legacy technical stacks: .NET, Node, Java, Ruby on Rails, and Python. This has been extremely useful and powerful: we’ve been able to build net-new systems on top of the new architecture, and it has allowed us to move our legacy web systems over too.

Each implementation follows the same general data flow: pull the token from the cookie or the Authorization header, decode the JWT into an object, validate it against the public / private key pair, and check that the issued-at and refreshed-at timestamps are within the hard upper limits we set based on the impact of the action. If validation fails, the applications and services can fail the action, return a 401, or route people to log back in.
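Condensed into ASP.NET Core terms and reusing the earlier sketches, that flow might look roughly like the following; the cookie, header, and claim names are illustrative assumptions.

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.IdentityModel.Tokens;

static class RequestValidation
{
    public static async Task Validate(HttpContext context, Func<Task> next, Impact impact, string jwkJson)
    {
        // 1. Pull the token from the cookie or the Authorization header.
        var token = context.Request.Cookies["identity_token"]
            ?? context.Request.Headers.Authorization.ToString().Replace("Bearer ", "");

        if (string.IsNullOrEmpty(token))
        {
            context.Response.StatusCode = StatusCodes.Status401Unauthorized;
            return;
        }

        try
        {
            // 2-3. Decode the JWT and verify it against the public key (sketch above).
            var principal = SignatureValidator.ValidateSignature(token, jwkJson);

            // 4. Check issued-at against the hard limit for this impact level.
            var iat = principal.FindFirst("iat")?.Value ?? "0";
            var issuedAt = DateTimeOffset.FromUnixTimeSeconds(long.Parse(iat));
            if (!TokenAgePolicy.IsStillValid(issuedAt, impact))
                throw new SecurityTokenException("Token is too old for this impact level.");

            await next();
        }
        catch (SecurityTokenException)
        {
            // Fail the action: return a 401, or route the person back to Login.
            context.Response.StatusCode = StatusCodes.Status401Unauthorized;
        }
    }
}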

Using the subject and arriving at an identity centric model

As we migrate, create, and manage the various objects in the platform, we leverage this identity of the user. The use of this identity architecture is not limited to our websites; our backend microservices use it equally. When a resource is created, the owner of the resource is assigned as the identity ID, i.e., the subject ID from the validated JWT. For each major resource, we can load data for the specific verified identity. Each microservice implements one of the authentication libraries and validates the subject ID against the resource being loaded. In this way, we can ensure each service has a strong identity implementation and that each resource is managed only by its owner.
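A small sketch of that ownership rule, with an illustrative Resource type (not our actual data model):

using System;
using System.Security.Claims;

record Resource(Guid Id, Guid OwnerIdentityId);

static class Ownership
{
    public static bool CanManage(ClaimsPrincipal principal, Resource resource)
    {
        // The subject claim carries the identity ID of the authenticated person.
        var subject = principal.FindFirst("sub")?.Value;
        return Guid.TryParse(subject, out var identityId)
            && identityId == resource.OwnerIdentityId;
    }
}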

Welcome

Welcome to the PPLSI Blog. At PPLSI, we started a transformational technical journey a couple of years ago. We’ve had some great success, made some mistakes, but always learned along the way. We decided to start this blog to share some of these learnings.

As a first post, we’ll give a little background on where we started and how this journey began.

PPLSI is a 50-year-old company founded by Harland Stonecipher. After finding himself in a car accident, Harland realized that access to equal legal protection was something all people could use, which led to the creation of LegalShield.

This was in 1972 and was well before the advent of modern-day computers. Outside of a few rare exceptions, people didn’t have personal computers in their homes or businesses. And certainly, Harland’s accident wasn’t caused by someone using a mobile device.

Naturally, the company integrated technology over the years. In the early 1990s, the company adopted a popular platform based upon IBM DB2 running on an AS400 iSeries.  This was the backbone of the company for several decades.

Over the years, more modern technology was introduced through mobile and web applications, but the core system continued to run on the IBM workhorse.

An attempt was made to replace this system with a more modern cloud-based architecture in 2018, but this project was struggling.

The team had taken an “all or nothing” approach to the design and deployment. When dealing with a 30-year-old platform, this is an extremely risky bet. There were multiple skeletons in multiple closets in the old system, and discovering and unpacking these would have taken years.

Equally problematic, the team was designing the new system as a monolith. There was one database, one middleware API, and one front end. While this may work well for a small team on a small to medium-sized project, it wouldn’t scale well from a development or deployment point of view.

In 2020, the company decided to head in a different direction. Instead of an all-or-nothing approach to a monolithic architecture, the team would take a more modern approach, leveraging a common architectural pattern built on microservices. Smaller, “bite-sized” backend components would be built and adapted individually.

On the front end, a similar decision was made. Instead of one monolithic website, smaller websites would be built that were specially purposed.  Logging in, accessing an account, and using individual products could all be smaller web applications.  A similar pattern could be adopted for other tools and services used by employees and partners. We call these individual small web applications “microsites”.

Our first decision was to standardize the platforms for building all of this: one for the backend and another for the frontend.

For the backend, there were many choices. The company had experience with Java, .NET, Python, PHP, and Ruby. Our requirements narrowed things down quickly: we wanted a language that was strongly typed, and we wanted something a bit more mainstream to widen the talent pool. We settled on .NET as our backend platform of choice. Java was a close second, but in the end, we felt .NET required fewer “decisions” for us to make.

For the frontend, the team had a great deal of experience with Angular, but little in the way of standardization, shared code, or patterns. Based on the experience of our newly hired personnel, we went with React on the web and React Native on mobile. Since we were starting a lot of our work from scratch, this was an easy decision.

Once we picked the technology platform, we needed to figure out where to start. And since the core of any platform is identity, we started there. Users needed to sign in across web applications and access backends, and a strong notion of identity would be the glue for the entire system.

The identity architecture itself is simple and powerful. Users would sign in using a username/password. As part of this process, we would write a cookie as a JSON Web Token (JWT) into our root domain.  Individual microsites and microservices would receive this JWT, verify the token, and allow access as appropriate. JWTs are a standard, with many tools and libraries for creating, parsing, and validating them.

To build the identity system, we needed a backend to store the data and verify users and a frontend application for the user to sign in. We established a pattern and tooling to build backend microservices to make this rapid and easy. To build frontend applications, we created a library of React Controls and other UX Components to allow rapid but consistent development.

The next two posts will get into the detail of the frontend and backend architectures built to support this and future initiatives.