Securing Application Secrets with EC2 Parameter Store


When developing a non-trivial application, an important early step is to decide what to do with your application secrets.

These can be API keys, database passwords, or other special configuration values your application needs to function, but that you don’t want everyone to have access to. Very often, developers wind up taking an insecure or difficult-to-manage (or both) approach to application secret storage, either due to time constraints or uncertainty around best practices.

In this article, we’ll learn about the best way to secure your application secrets — EC2 Parameter Store. But first, let’s take a look at a few less secure methods for managing application secrets that are still commonly used.

Storing plaintext secrets in source control

While this may seem like a convenient way to manage secrets, its crucial flaw is that even in a closed, private project, it exposes your application secrets to anybody with read access to the repository. This approach would certainly not pass an audit, and the security implications are considerable even if your project doesn’t have any formal compliance requirements.

Storing plaintext secrets on the server

Another approach is to store secrets on the server where your application is running. This can be in the form of an application configuration file that doesn’t get committed to source control. In the case of a web application, you can store secrets in the web server configuration to be injected into your application’s runtime as environment variables.
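
For example, an Apache virtual host might inject a secret into the application’s environment like this (all names and values here are hypothetical):

```apache
<VirtualHost *:80>
    ServerName myapp.example.com
    DocumentRoot /var/www/myapp

    # The plaintext secret lives in the server config, outside source
    # control, and surfaces to the application as an environment variable.
    SetEnv APPLICATION_SECRET_API_KEY "my-plaintext-api-key"
</VirtualHost>
```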

This is a safer approach than having plaintext secrets in source control, as it limits the number of people who would have access to them, but it has a big weakness that the previous approach doesn’t: it’s inconvenient.

Let’s say you have a dozen servers running the same application. How are you going to modify existing application secrets or create new ones? Logging in to each server individually is tedious and error-prone. You could introduce some automation around configuration management, which would address how your application accesses the secrets it needs and how you keep them up to date, but it doesn’t address where you’re going to store those secrets in the first place.

In other words, how is your automation going to know what secrets it needs to deposit on each server?

We already covered why you shouldn’t store them in your source control repository, and it wouldn’t make sense to only store them at their destination. So where would you store them? Somewhere else on your computer? That’s not necessarily secure either, but even if it were, how would other members of the team who should have access to those secrets get them? Perhaps everyone could get a copy when they change, but then how do you distribute them securely and make sure they’re all in sync? There are a lot of logistical problems with this approach.

So, what can you do?

Storing encrypted secrets in source control

If the problem is the plaintext, then let’s start encrypting things, right? Well, yes, but it’s not that simple. If you use a modern, secure algorithm, you alleviate a lot of the problems with storing plaintext secrets in source control, but you encounter another problem: how do you decrypt the secrets? To decrypt an encrypted string, you’ll need the right key, but the key itself is also a secret that needs to be kept safe!
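
As a sketch of the idea (file names and contents are hypothetical), you could encrypt a secrets file with a standard tool like OpenSSL, but notice how the key file immediately becomes a new secret to protect:

```shell
# A plaintext env-style secrets file (contents are just an example)
echo 'API_KEY=example-value' > secrets.env

# Generate a random key; this key file is itself a secret now
openssl rand -hex -out secret.key 32

# Encrypt the secrets file using the key file as the passphrase source
openssl enc -aes-256-cbc -pbkdf2 -salt \
    -in secrets.env -out secrets.env.enc -pass file:./secret.key

# Decrypting requires that very same key file
openssl enc -d -aes-256-cbc -pbkdf2 \
    -in secrets.env.enc -pass file:./secret.key
```

The encrypted secrets.env.enc could safely live in source control, but you’re left with the question of where secret.key goes.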

You can’t store the key in your source control for the same reason you can’t store the secrets themselves in source control. If you store the key on your servers, then you encounter the same maintenance complications you’d see if you were to store the secrets on the server. Even if you did have an approach that allowed you to store and easily maintain the decryption key (or the secrets themselves) on your servers, there’s still a problem — what if there are no servers?

Lambda, EMR, CodeDeploy, and similar offerings in the AWS portfolio all run your code without a permanent server that you manage yourself. With none of them is there a long-lived host that you would access in order to store your encrypted application secrets or the keys that unlock them. If your application stack involves any of these services, then none of the approaches to application secret management we’ve covered so far is going to work.

Managing secrets with Amazon EC2 Parameter Store

Now that we’ve gotten a few suboptimal approaches out of the way, let’s dive into the best approach: the Amazon EC2 Parameter Store. It provides a centralized store for managing your configuration data, whether that’s plaintext data such as database connection strings or secrets such as passwords, encrypted through AWS Key Management Service.

For this tutorial, let’s assume we have a web application that interacts with an external API endpoint, which requires an API key for use. The API key is meant to be secret and should not be shared, so you’re the only one who knows it right now. It needs to be stored securely, updated efficiently, and kept easily accessible to your application.

Getting started

Secrets managed by the EC2 Parameter Store are encrypted using AWS Key Management Service (KMS) keys, so the first thing we need to do is create a key. We could use the default aws/ssm key that AWS would create automatically for us, but creating our own key specific to the application that will be using it gives us more granular control over who can manage and access it. It also allows larger organizations with multiple applications to more easily limit visibility and management of application secrets to those teams responsible for those particular applications.

We’ll be adhering to AWS best practices and using IAM Roles on our EC2 instances to interface with the AWS API, so we need to create a new IAM Role with a custom IAM Policy attached. We could simply attach the managed AmazonSSMReadOnlyAccess IAM Policy instead, but doing so would give the IAM Role access to read all SSM-related items, not just Parameters.

To start creating this custom IAM Policy, open the AWS Console and navigate to IAM -> Policies and click Create policy. Next, select Create Your Own Policy, and use the following Policy Document:
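
A least-privilege policy along these lines fits the setup described here; the parameter name is illustrative, and the region and account number placeholders must be replaced with your own values (permission to decrypt with the KMS key is granted separately, via the key’s own policy, in a later step):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ssm:GetParameters",
            "Resource": [
                "arn:aws:ssm:YOUR_REGION_HERE:YOUR_ACCOUNT_NUMBER_HERE:parameter/APPLICATION_SECRET_API_KEY"
            ]
        }
    ]
}
```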

If you have additional parameters that you want to grant access to in the future, you can add their ARNs to the Resource list, or prefix all of their names with your application name. You can then specify arn:aws:ssm:YOUR_REGION_HERE:YOUR_ACCOUNT_NUMBER_HERE:parameter/MyApplicationName-* as the ARN. Otherwise, enter MyApplicationPolicy for the Policy Name and click Create Policy.

Now, let’s create the IAM Role. Head over to IAM -> Roles and click Create new role, then choose Amazon EC2 for the AWS Service Role.

Select Role Type

Choose Customer Managed as your policy Filter and select MyApplicationPolicy as the IAM Policy to attach, then click Next step.

Attach Policy

Finally, name your role MyApplicationRole and click Create role.

Now it’s time to create our own key in KMS. Navigate to IAM -> Encryption Keys and click Create Key.

Create Key

Next, enter an Alias for your key and an optional Description. For Key Material Origin, leave the default KMS selected.

Create Alias

Add some tags, if needed. For this example, I’ll tag this key with the application name and the environment in which it would be used.

Add Tags

Now we need to assign one or more key administrators. The key administrator is an IAM User or Role who is allowed to make changes to the key itself or grant others access to use and administer it. For this example, I’ve chosen my own user, but I could have also created an IAM Role that my IAM User could assume in order to execute administrative functions on the key. Depending on your organization, this might make management of key administrators easier, but for this example we’ll keep it simple and just use my IAM User. I’ve also left the Key Deletion option checked, which will allow me to delete the key in the future.

Define Key

Next, we’ll Define Key Usage Permissions so that we can actually use our key. Here, I’ve selected the MyApplicationRole IAM Role as an authorized key user, which is the role I’ll be assigning to the EC2 instances my application is going to run on. I’ve also selected myself as an authorized user, as by default, the key administrator permissions do not include permissions to actually use the key. If desired, we could also give other AWS accounts permissions to access our key, which those account administrators could grant to individual IAM users and roles on their end.

Define Usage

The next page will display a preview of your KMS key policy. Review it, and once everything looks correct, click Finish.

You will now be returned to the Encryption Keys page and find that the key was successfully created.

Successful Creation

Encrypting our secret

Now that we have a KMS key that we can use to encrypt our secrets, let’s go ahead and actually create one!

You can do this by either using the EC2 -> Parameter Store interface in the AWS Console:


Or by using the AWS Command Line Interface:
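
A command along these lines stores and encrypts the secret (the parameter name, key alias, and value are illustrative, and running it requires configured AWS credentials):

```shell
aws ssm put-parameter \
    --name "APPLICATION_SECRET_API_KEY" \
    --type "SecureString" \
    --key-id "alias/MyApplicationKey" \
    --value "my-plaintext-api-key"
```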

Note that if you already have an encrypted application secret with the selected name, the command will fail. If that’s the case (as it will be in the future whenever you need to change the value of the secret), you must add the --overwrite flag.

This command only displays output if there was an error, so if you see nothing after running it, you’ve successfully encrypted your first application secret!

Retrieving the plaintext value

Viewing our secret’s plaintext value in the AWS Console is quite easy. First, navigate to the EC2 Parameter Store dashboard and select the parameter.

Create Parameter

Then, just click Show to expose the plaintext value.

Show Parameter

Despite how easy it is to reveal the plaintext value, this information is only available to users and roles who have permission to use MyApplicationKey to decrypt values. If a user who doesn’t have the appropriate key permissions tries to view the plaintext value, the AWS Console will not reveal it to them.

EC2 Parameters

Do note that there are some IAM managed policies, such as AdministratorAccess and PowerUserAccess, that automatically grant access to view EC2 Parameters and use the decryption keys. You can prevent PowerUserAccess users and roles from doing so by adding an Inline Policy to their assigned permissions that specifically denies access to the EC2 Parameters, the KMS key, or both. This is not possible for AdministratorAccess, as that access level by definition also allows such users to remove the restriction from themselves. As such, AdministratorAccess should be assigned judiciously.

Application use

The AWS Console makes it easy for humans to view a secret’s plaintext value, but what about an application? Let’s take a look at a quick example using the AWS SDK for PHP.

This example assumes that your AWS default profile credentials on your workstation have been correctly configured, or that it’s running on an EC2 instance with an IAM Role assigned to it that has access to the encrypted secret. You will also need to have the AWS SDK for PHP installed. Refer to the User Guide for more information on getting started.
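
A minimal sketch of such a retrieval, using version 3 of the SDK, might look like the following (the region here is an assumption; adjust it, and the parameter name, to match your environment):

```php
<?php
require 'vendor/autoload.php';

use Aws\Ssm\SsmClient;

// The region is an assumption; use the region your parameter lives in.
$client = new SsmClient([
    'version' => 'latest',
    'region'  => 'us-east-1',
]);

// Retrieve and decrypt the parameter in a single call. The credentials
// in use must have ssm:GetParameters on this parameter and be an
// authorized user of the KMS key.
$result = $client->getParameters([
    'Names'          => ['APPLICATION_SECRET_API_KEY'],
    'WithDecryption' => true,
]);

$apiKey = $result['Parameters'][0]['Value'];
```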

Executing this sample code will retrieve the APPLICATION_SECRET_API_KEY parameter from the EC2 Parameter Store, decrypt it using the associated KMS key, and return the plaintext string to you. In a real production application, we recommend caching these values in a local configuration file on the application server that can be refreshed as part of your code deployment process. Doing so eliminates the added initialization time spent retrieving the parameters from the EC2 Parameter Store at the start of a request. This ends up yielding the same benefits as the storing plaintext secrets on the server approach without the inconveniences.
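
One way to implement that caching is to pull the decrypted value into a local file as a deployment step. The path below is hypothetical, and the command requires configured AWS credentials on the instance:

```shell
# Cache the decrypted parameter locally at deploy time
aws ssm get-parameters \
    --names "APPLICATION_SECRET_API_KEY" \
    --with-decryption \
    --query "Parameters[0].Value" \
    --output text > /etc/myapp/api_key

# Restrict read access to the application's user
chmod 600 /etc/myapp/api_key
```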

If you don’t use PHP, there are AWS SDKs for a variety of other programming languages. Visit the AWS Developer Tools documentation for more information.

Naturally, you can also use the AWS Command Line Interface to retrieve plaintext values, much like it can be used to store them in the first place.
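
For example (same illustrative parameter name as above; requires configured AWS credentials):

```shell
aws ssm get-parameters \
    --names "APPLICATION_SECRET_API_KEY" \
    --with-decryption
```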

This command will output the key in the following format, and can be used to retrieve multiple keys simultaneously:
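
The exact fields can vary by CLI version, but the response looks roughly like this, with any names that couldn’t be found listed under InvalidParameters:

```json
{
    "Parameters": [
        {
            "Name": "APPLICATION_SECRET_API_KEY",
            "Type": "SecureString",
            "Value": "my-plaintext-api-key"
        }
    ],
    "InvalidParameters": []
}
```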

Visually, the workflow looks like this:

EC2 Storage Workflow


In this article, we covered some of the common, less-than-ideal ways to manage application secrets. We then learned how to do so in a secure, easily manageable way with EC2 Parameter Store, and how to make this information available to our applications. Now you should be ready to head over to the AWS Console, fire up your favorite code editor, and get started on integrating this approach into your applications. Good luck!

Do you have additional questions about EC2 Parameter Store or other AWS features and services? Visit Rackspace to find out more about how our AWS-certified experts are helping businesses get the most out of AWS.

Michael Moussa is a solutions architect on the Rackspace Fanatical Support for AWS team. Prior to joining Rackspace, he was a web application developer for over 17 years. Michael has extensive knowledge of every aspect of software projects, ranging from infrastructure to frontend, and he currently spends his days helping customers solve their technical challenges and migrate their workloads to the AWS cloud. In his spare time, he enjoys spending time with his family and brewing award-winning beer. You can find him on Twitter @michaelmoussa.


  1. Great write up. I am pretty new to AWS and am trying to use parameter store for distributing secrets to code running on EC2 instances. Question for you: what kind of policy did you attach to the role “MyApplicationRole”? Thanks!

    • Hi Hayden,

      It seems I accidentally omitted a step! Thank you for catching that. I added some new information under the “Getting Started” section, which now has instructions on how to set up the minimal policy for this. You could also use AmazonSSMReadOnlyAccess, but that’s a lot more permissive. I tend to limit IAM Roles to the bare minimum they need to do their job.

  2. Nice post! I was initially planning to use HashiCorp Vault, but it looks like we don’t need it now. My next task is to integrate this with an API gateway to get Vault-like HTTP/HTTPS endpoints.

  3. Good post! Out of curiosity, how would this setup work for a multi-region application, especially if you need to update, for example, an API key across all regions?

    • Hi DevCoder,

      I see two ways off the top of my head how this could work in a multi-region application. You could either (a) pick one of the application’s regions to be the region where the parameters are managed and have the applications in your other regions talk to it, or (b) have a copy of each parameter in the EC2 Parameter Store in each application region, encrypted with a KMS key specific to each application region.

      The first option has the advantage of ease of management and ensuring there’s a single source of truth for this information. One disadvantage is that it may add a bit of latency when your application in, say, Sydney, has to grab an API key from N. Virginia. This can be mostly mitigated by either pre-generating the application config for each application at deploy time, or caching it after the first retrieval. The main disadvantage is that a major outage in whatever region you selected to be the “home” region for your EC2 Parameter Store values might render the remainder of your applications nonfunctional if they need to retrieve the parameter and cannot do so.

      The second option provides you with more fault tolerance, but may be more difficult to manage and keep everything in sync. Some automation could help with that, however. Perhaps whatever scripts or tools you use to make the change to the API key in one region could automatically do so for every application region as well. It’s a little bit more work, but it reduces the chances of a regional outage affecting applications in regions that would otherwise be unaffected.

      The best approach will depend on the nature of the application and its uptime requirements.

      Personally, I’m confident enough that a deploy-time pre-generated config file or locally cached API key in my application would hold me over until an outage affecting KMS and the EC2 Parameter Store in my primary region could be resolved, so I would be inclined to manage those parameters in the primary region for most applications I’ve worked on in the past. If my use case was an application that represented a very high cost per minute of downtime in terms of lost revenue, then I’d go with option B and utilize as much automation as possible to facilitate the management for me.

      I hope that helps!

  4. Hi, can you please assist with my use case?

    I am creating an RDS instance through a JSON CloudFormation template, for which I was using a hardcoded password.

    I want to store the password in Parameter Store and encrypt it with KMS, then have my RDS CloudFormation template retrieve that value to use as the password for my RDS instance. Could you please assist? Many thanks in advance.

  5. Hello Michael,

    How would you recommend keeping “database secrets” secure in a bash script that uses the parameter store? I can retrieve the secrets using the “--with-decryption” flag, but how do you do it with the “--without-decryption” flag?
    For example if I grab the secrets and store in environment variables at script runtime using:

    export DB_USER=$(aws ssm get-parameters --names /db/postgres/db-user --region us-east-1 --with-decryption --query Parameters[0].Value --output text)

    export DB_PASS=$(aws ssm get-parameters --names /db/postgres/db-pass --region us-east-1 --with-decryption --query Parameters[0].Value --output text)

    Any assistance would be appreciated.

    Marv C

    • Hi Marv,

      The secrets cannot be used unless decrypted, so you’d need to use the --with-decryption flag for them to do any good. In general, if the server is able to decrypt the secrets, then anybody with access to the server would be able to as well, so keeping them in the bash file in their encrypted form wouldn’t help secure them any better.

      If they will only be used in the bash script where they’re being retrieved at runtime, then the code you shared in your comment should be fine. I would recommend also making sure that the script isn’t performing any actions that would cause the decrypted secrets to inadvertently be recorded in a log file that might be exposed. For example, I’ve seen script libraries that include the database connection string (including password) when logging a failed query. If there’s automation in place to ship those logs to CloudWatch, for example, then anybody with access to CloudWatch would be able to retrieve the decrypted credentials in that fashion.

      Finally, please note that the above applies only if you’re executing this script directly. If this export code you’re looking to add is being done in, say, a bash profile or some other file that gets sourced, then I recommend not taking that approach, because the plaintext value will remain in that environment variable indefinitely, rather than just until the end of the script’s execution.

