
Using HashiCorp Vault

Why a Vault?

A vault stores the application's secrets in a secure location so that only trusted clients of the Vault (web apps, services, etc.) can access them.

Without a vault, the dev team tends to check secrets like database credentials and AWS credentials into source control, leaving them accessible to anyone who gets access to the source code.

There are plenty of managed vaults out there from AWS, Azure, and others, but this system is open source and gives us full control over the installation and its security, which piqued my interest and prompted this exploration.

Dangers of exposing Secrets

  1. In the wrong hands, a leaked secret can be disastrous; the more privileges the secret carries, the deeper the damage
  2. Developers might be tempted to try out changes with these credentials. A while back, a colleague of mine, confident in his production database access, ran a query against the production database but forgot to append the WHERE clause. It is easy to err; what matters is to learn and move forward
  3. Secrets exposed to hackers can lead to leakage or loss of critical / sensitive data. I always keep an eye out for the mails from "Have I Been Pwned"

Benefits

  1. Central Storage of Secrets
  2. Easy to scale, and usage is easy to audit
  3. Easy to rotate keys; apps do not need to update their config or worry about the change to secrets
  4. Easy to revoke access for a compromised application: just revoke its token / user access to Vault and the exposure is contained

My Experiment

To get a feel for how Vault works, I set it up on an EC2 server within a VPC. You can find a good deal of detail on how to set up Vault on a server through this link.
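For context, here is a minimal install sketch for a Debian / Ubuntu EC2 instance, following HashiCorp's apt instructions (the repository URL is theirs; adjust for your distribution):

# Add HashiCorp's apt repository and install Vault
curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
sudo apt-get update && sudo apt-get install vault

# Start a dev-mode server (in-memory and auto-unsealed; for experiments only)
vault server -dev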

Once installed, I enabled the UI so that I could work through it. In real deployments, these steps would run only through scripts and tools such as Terraform that create and configure the vault as part of the DevOps pipeline flows.
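For illustration, a minimal server configuration with the UI enabled might look like the sketch below; the file path and the file storage backend are assumptions for this PoC, and TLS is disabled only because this is a throwaway experiment:

# /etc/vault.d/vault.hcl (assumed path) - PoC configuration
ui = true

storage "file" {
  path = "/opt/vault/data"
}

listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = 1   # PoC only; always enable TLS outside experiments
}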

Tokens

  1. Admin / Root
    1. This is the superuser of the vault; this user gets to enable the secrets engines (KV / cubbyhole etc.) - see the CLI sketch after this list
    2. Create the keys / secrets if they are known beforehand
    3. Create access policies (the rules that govern who gets to access what). Each policy decides which parts of the system are accessible and how deep the granted access goes (Ex: admins can create, update, and delete, whereas an end user can only read)
    4. Create tokens by choosing the policies to attach
    5. Tokens can be renewed later at any point in time; a renewal window of every 45 to 90 days should be good and safe
  2. End User
    1. Consume the secrets that the policy grants access to
    2. Renew tokens
    3. Access the Vault through the UI for easy use
    4. Use the tokens to access Vault through the HTTP API / CLI (command-line interface)
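To make these roles concrete, here is a rough CLI sketch; the kv mount path and the secret name are my choices, picked to match the policy shown later, and the key / value pairs are hypothetical:

# Admin: enable a KV v2 secrets engine at the path "kv"
vault secrets enable -path=kv -version=2 kv

# Admin: store a secret (hypothetical keys and values)
vault kv put kv/secret/clients-integration api_key=example-key api_secret=example-secret

# End user: renew the locally authenticated token
vault token renew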

Policy

Below is the sample policy that I created. It took me a while to arrive at the right policy, but with help from the Vault support team it became very easy. My sincere thanks to the Vault Support Team :)

Sample Policy

# Allow tokens to look up their own properties
path "auth/token/lookup-self" {
  capabilities = ["read"]
}

# Allow tokens to renew themselves
path "auth/token/renew-self" {
  capabilities = ["update"]
}

# Allow a token to look up its own capabilities on a path
path "sys/capabilities-self" {
  capabilities = ["update"]
}

# <-- SNIP -->
# Allow a token to look up its resultant ACL from all policies. This is useful
# for UIs. It is an internal path because the format may change at any time
# based on how the internal ACL features and capabilities change.
path "sys/internal/ui/resultant-acl" {
  capabilities = ["read"]
}
# <-- SNIP -->

# Grant permissions on the entire KV engine
path "kv/*" {
  capabilities = ["read", "list", "create", "update"]
}

# Grant list permission on the metadata for the secrets inside the KV engine
path "kv/metadata/secret/*" {
  capabilities = ["list"]
}

# Grant list, create, and update permissions for managing the secret data.
# Note that this path addresses the secret data, while the one above
# addresses its metadata.
path "kv/data/secret/*" {
  capabilities = ["list", "create", "update"]
}

# Grant list and read permissions for one specific KV secret.
# Because the most specific matching path wins, a token with this policy
# can only list and read this secret; it cannot tamper with or delete it.
path "kv/data/secret/clients-integration" {
  capabilities = ["list", "read"]
}
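Assuming the policy above is saved as readonly-policy.hcl (my file name), it can be registered with Vault as follows:

vault policy write readonly-policy readonly-policy.hcl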

Creating and consuming tokens

To test the policy and create a token for an end user, I used a command like the one below.

vault token create -policy=readonly-policy -policy=default -display-name=svc-app-readonly
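Once the token is generated, it can be consumed straight from the CLI; a quick sketch, with <generated-token> standing in for the value printed by the command above:

VAULT_TOKEN=<generated-token> vault kv get kv/secret/clients-integration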

The token can also be used directly in Postman to fetch the secrets over the HTTP API. The screenshot below shows how to set up the header and get the value from Vault with the right token.

[Screenshot: using the token to access secrets from Vault via the HTTP API]
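For reference, an equivalent call with curl, assuming Vault listens at http://127.0.0.1:8200 and the token is exported as VAULT_TOKEN:

curl --header "X-Vault-Token: $VAULT_TOKEN" \
     http://127.0.0.1:8200/v1/kv/data/secret/clients-integration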

Conclusion

As always, getting started with Vault was a bit tough, but with perseverance and support from the Vault team, I was able to get this sample PoC working.

Note: This post illustrates a sample PoC that I tried out; it is not suitable for production environments, so please exercise caution.

Hope this helps anyone exploring HashiCorp Vault.
