Protection of AWS CloudFormation Resources

AWS CloudFormation is a service that helps you model and set up your AWS resources. You create a template that describes all the AWS resources that you want, and AWS CloudFormation takes care of provisioning and configuring those resources for you. Protecting AWS CloudFormation resources is highly recommended and is an essential part of disaster management. In this blog we look at different methods to protect AWS CloudFormation resources.

Protection of AWS CloudFormation:
There are three ways to protect resources that are created by AWS CloudFormation.

1. Account level protection

2. Stack level protection

3. Resource level protection

Account level protection:

AWS CloudFormation takes a template that describes desired resources and deploys it as a stack of resources. When a stack is deleted, the resources are deleted too.

Therefore, we must have account level protection to control which users have permission to delete the stack. This can be assigned via Identity and Access Management (IAM).

Add a policy to an IAM user, which denies the delete stack action.

Step 1: Go to IAM

Step 2: Click on Users

Step 3: Select a user to add a policy to
Step 4: Click on Inline policies.
Step 5: Click on Custom Policy

Step 6: Write the policy name and, in the Policy Document section, paste a policy that denies the DeleteStack action
Step 7: Click on Validate Policy and confirm it shows “This policy is valid”

Step 8: Apply Policy
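For reference, an inline policy denying stack deletion would look something like this (treat it as a sketch; the original post does not show its exact policy document):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "cloudformation:DeleteStack",
      "Resource": "*"
    }
  ]
}
```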

Stack level protection:
You can prevent stack resources from being unintentionally deleted during a stack update by using stack policies. Stack policies apply only during stack updates and should be used only as a fail-safe mechanism to prevent accidental deletes to certain stack resources.
By default, all resources in a stack can be updated by anyone with update permissions. However, during an update, some resources might require an interruption or might be completely replaced, which could result in new physical IDs or completely new storage. To ensure that no one inadvertently deletes these resources, you can set a stack policy. The stack policy prevents anyone from accidentally updating resources that are protected. If you want to update protected resources, you must explicitly specify those resources during a stack update.
Stack policies are JSON documents that define which update actions can be performed on designated resources. You can define only one stack policy per stack; however, you can protect multiple resources within a single policy.

Here’s a sample stack policy that prevents deletion of the PROD_DATABASE resource:
{
  "Statement" : [
    {
      "Effect" : "Deny",
      "Action" : "Update:Delete",
      "Principal" : "*",
      "Resource" : "LogicalResourceId/PROD_DATABASE"
    },
    {
      "Effect" : "Allow",
      "Action" : "Update:*",
      "Principal" : "*",
      "Resource" : "*"
    }
  ]
}
How to set a stack policy when you create a stack:
Step 1: Go to CloudFormation
Step 2: Click on Create Stack

Step 3: Upload the template

Step 4: Write the Stack Name

Step 5: Click on Advanced, then on Enter Policy



Copy and paste the above sample stack policy into the Enter Policy section, or write your own policy.

Step 6: Create Stack

Resource level protection:
Resources created by CloudFormation can still be deleted or modified by any user with the appropriate permissions. Therefore, it is important that you protect important resources from being impacted by unauthorized users. AWS recommends granting least privilege so that users only have control over the resources they require, and no more.

It is recommended that you write CloudFormation templates with the DeletionPolicy attribute, which can preserve or (in some cases) back up a resource when its stack is deleted. You specify a DeletionPolicy attribute for each resource that you want to control.
DeletionPolicy attribute values:
1. Delete (default): If a resource has no DeletionPolicy attribute, AWS CloudFormation deletes the resource by default.
2. Retain: The Retain policy preserves resources so they are not deleted when their stack is deleted.
To keep a resource when its stack is deleted, specify Retain for that resource. For example,
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Resources" : {
    "myS3Bucket" : {
      "Type" : "AWS::S3::Bucket",
      "DeletionPolicy" : "Retain"
    }
  }
}
The above example demonstrates how to retain an Amazon S3 bucket even when a stack is accidentally deleted. You can add this deletion policy to any resource type. Note that when AWS CloudFormation completes the stack deletion, the stack will be in the DELETE_COMPLETE state; however, resources that are retained continue to exist and continue to incur applicable charges until you delete those resources.
3. Snapshot: For resources that support snapshots, such as AWS::EC2::Volume, AWS::RDS::DBInstance, and AWS::Redshift::Cluster, AWS CloudFormation can create a snapshot before deleting them. Note that when AWS CloudFormation completes the stack deletion, the stack will be in the DELETE_COMPLETE state; however, the snapshots created with this policy continue to exist and continue to incur applicable charges until you delete those snapshots.
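For instance, a minimal template using the Snapshot policy might look like this (the property values are illustrative only, not taken from the original post):

```json
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Resources" : {
    "myDatabase" : {
      "Type" : "AWS::RDS::DBInstance",
      "DeletionPolicy" : "Snapshot",
      "Properties" : {
        "AllocatedStorage" : "5",
        "DBInstanceClass" : "db.t2.micro",
        "Engine" : "MySQL",
        "MasterUsername" : "admin",
        "MasterUserPassword" : "changeme123"
      }
    }
  }
}
```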
Since our entire infrastructure is managed by AWS CloudFormation, protecting CloudFormation resources is crucial. Having discussed the various methods, one can implement all of the above to ensure that CloudFormation stacks are protected. At a minimum, we should have account level protection, so that we can feel safe about our infrastructure.

Deployment Automation

Continuous Deployment at FreeCharge Payments

We recently launched the FreeCharge Payments applications with autoscaling and continuous deployment. This post talks about how this was done and some of the choices that were made to accomplish this.

The FreeCharge web application goes through various lifecycle phases. We use Jenkins, Ant and Maven as our build tools and Ansible as our Continuous deployment tool.

Jenkins builds are triggered for each release version of the web application software (webapp). Jenkins then posts the successful builds to the Nexus repository. These builds are then propagated to the servers using Ansible.

The webapp is deployed on Amazon Auto Scaling groups built from Amazon AMIs. These AMIs are created by running Ansible roles that install our tech stack software and configuration. Since the AMIs come with all prerequisites baked in, the time for new servers to become functional is reduced.

Autoscaling brings in the problem of deploying to a dynamic inventory. Our Continuous Delivery apparatus solves this by discovering whether servers have scaled up or down, giving an accurate point-in-time inventory. This is done by a Python daemon script that uses the boto library.
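A minimal sketch of how such a daemon could shape its point-in-time inventory; the group and instance names below are made up, and with boto the (group, instances) pairs would come from the Auto Scaling API rather than being hard-coded:

```python
# Sketch of building an Ansible-style dynamic inventory from Auto
# Scaling Group membership. With boto (the library mentioned above),
# the pairs would come from something like:
#   conn = boto.ec2.autoscale.connect_to_region("ap-southeast-1")
#   groups = [(g.name, [i.instance_id for i in g.instances])
#             for g in conn.get_all_groups()]
import json

def build_inventory(groups):
    """Map each Auto Scaling Group name to the hosts currently in it."""
    return {name: {"hosts": list(instance_ids)} for name, instance_ids in groups}

if __name__ == "__main__":
    # Hypothetical group and instance IDs, for illustration only
    sample = [("webapp-asg", ["i-0a1b2c3d", "i-0e4f5a6b"])]
    print(json.dumps(build_inventory(sample)))
```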

In the above diagram ‘CD’ uses Jenkins and Ansible. Jenkins pulls code from git; generates a deployable; puts the artifacts in a versioned format in Nexus. Ansible runs a playbook on each of the servers categorizing them based on the Autoscaling Groups they have originated from.

The tasks performed by the Ansible playbook are:

  • Pull the artifact from Nexus.
  • Unregister the server from its ELB.
  • Stop the running services (tomcat, application service, any other monitoring service, etc).
  • Remove the old artifacts from the deployment directory of tomcat.
  • Copy the new artifacts to the deployment directory of tomcat.
  • Start services.
  • Register the server in its ELB.
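The steps above can be sketched as an Ansible playbook; the host group, ELB name, paths and service names below are illustrative assumptions, not FreeCharge's actual values:

```yaml
# Hedged sketch of the rolling-deploy tasks described above.
- hosts: webapp_asg
  serial: 1                      # roll one server at a time to avoid downtime
  tasks:
    - name: Pull the artifact from Nexus
      get_url:
        url: "http://nexus.example.com/releases/webapp-{{ version }}.war"
        dest: /tmp/webapp.war

    - name: Unregister the server from its ELB
      local_action:
        module: ec2_elb
        instance_id: "{{ ec2_id }}"
        ec2_elbs: webapp-elb
        state: absent

    - name: Stop the running services
      service: name=tomcat state=stopped

    - name: Remove the old artifact from tomcat's deployment directory
      file: path=/var/lib/tomcat/webapps/webapp.war state=absent

    - name: Copy the new artifact into tomcat's deployment directory
      copy: src=/tmp/webapp.war dest=/var/lib/tomcat/webapps/webapp.war remote_src=yes

    - name: Start services
      service: name=tomcat state=started

    - name: Register the server back in its ELB
      local_action:
        module: ec2_elb
        instance_id: "{{ ec2_id }}"
        ec2_elbs: webapp-elb
        state: present
```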

After any deployment (successful or unsuccessful), a detailed email alert is sent to the Build & Release team using the Amazon SNS/SES services.


Though there are many other options available for Continuous Deployment, we chose Jenkins+Ansible for its simplicity. Using this strategy, we have reduced our downtime between payments app server deployments. The release process is just a “one-click” process now! At the end of the day our Dev and DevOps teams are happier, one of the very critical aspects that FreeCharge cares about!

Careers at Freecharge

FreeCharge is among India’s largest consumer transaction platforms and now part of India’s largest M-Commerce ecosystem. We’re committed to building products that are not only used by millions of users but also loved by them, which involves us in the industry’s most interesting and challenging projects. With that said, let us introduce the awesome technologies that make this incredible product possible :-

Automation :-

  • Developing new features at great pace is not possible without use of Automation. Automation is part and parcel of all our teams and allows us to deploy, manage and secure systems just by a command.
  • We make use of configuration management, version management and CI/CD tools to automate cloud provisioning, application deployment, deploying security layers on applications, and many other IT needs. With an agent-less, SSH-based architecture, this saves us a lot of time during deployment and allows us to focus on developing cool features for our customers.

Infrastructure :-

  • We leverage the Amazon Cloud extensively which gives us Agility, Elasticity and Flexibility to handle the large scale spikes in traffic at frequent intervals and provide customers a consistent experience without any degradation of performance.
  • We have a solid 3 tier Architecture consisting of Web Servers, Application Servers and Database Servers which gives us Flexibility, High Availability and Strong Security at each tier.
  • We actively use Content Distribution Systems, Queues, Message Buses, redundant shared data stores, load balancers and other services that enable us to distribute load even at the highest peak of traffic and load the site in a flash.

Analytics :-

  • We track approximately 720 million click stream events per day, and have robust and scalable systems in place to handle them.
  • We generate near-real-time transactional funnel reports with a delay of about one hour.
  • We continue to strive to personalize the user experience; to that end, we individually profile more than 23 million users by capturing user behavior and processing it with in-built predictive systems.
  • An in-built, robust reconciliation system raises an alarm on any mismatch in data uploads, data dumps or data transfers.

Security :-

  • We are one of the first companies to have achieved PCI DSS version 3.0 compliance as a Level 1 Merchant. We follow industry-leading best practices for Defense-in-Depth across all of our infrastructure.

Life at FreeCharge :-

  • FreeCharge boasts of a culture where there is a right mix of responsibility and freedom.
  • Our team consists of rock star front-end engineers, solid Java developers, product hackers and DevOps gurus handpicked from the best product development companies. FreeCharge inspires a sense of high ownership towards the vision which leads to high quality outputs from every single employee.
  • With constantly researching, developing and implementing best-in-class features, we are always looking for people who can help us do the same.
  • With this being said, we have an Awesome Gaming Room, Free Food, a Library and much more.

Opportunities at FreeCharge :-

  • Feel free to explore the Current Openings at FreeCharge.


The case of slowly dropping recharge rate


This shows recharge rates on Nov 1 and Nov 2.

Since 1:30pm on Nov 2, we began to notice a slow decrease in the recharge rate. The normal pattern we observe almost daily is a steady drop in the afternoon hours before recharges begin exponentially picking up pace again in the early evening – but this looked different, and the decline continued through 5:30pm, which was odd. It was not a sudden drop, so any system change seemed unlikely to have caused this. A look at the HTTP connections graph told the same story: the actual traffic to the site had been slowly dropping since 1:30pm. Maybe it was because of Diwali. But still, we were not satisfied with the lack of a singular culprit.

I could not find anything obviously wrong with our systems and got in touch with Chandrashekhar (our devops guru). He also agreed that the shape of the chart looked suspicious.

After ho-humming about it for a while, CNB asked me to turn on the TV and watch the India-Australia cricket match; I acquiesced, and watched the last two overs of the superb Indian innings. Around 10 minutes after the Indian innings got over, CNB told me to look at the chart again. A noticeable spike – in recharge rate and traffic as well!

Take a look at the second chart.


The correlation was just too much to ignore. When India bats, we experience lower recharge rates. Mind = blown!

Call alerts with KooKoo

Making sure all systems are working fine round the clock is very important for us. We use the popular monitoring solution, Nagios, to do the job of alerting us when things are not quite ok. Configuring Nagios with email alerts is pretty simple, and that is what we initially set up.

But sometimes, email alerts are simply not good enough – say, when a server is experiencing a low-memory situation in the middle of the night. The solution is to have Nagios call up a telephone number for critical alerts. This is where KooKoo comes in.

KooKoo has a web-based API for call control. Although most of their services are aimed at incoming calls, they do have a simple outgoing call feature as well. We wrote a quick shell script – “” – which takes the phone number and the message to be delivered. All it really does is make an HTTP request:

wget --quiet --timeout=10 -t 1 -O /tmp/kookoo_call.$$.out "$PHONENUM&api_key=XXX&extra_data=$MESSAGE ... repeating message ... $MESSAGE"

KooKoo uses a decent Text-to-Speech engine which generates the message on the phone call. Still, repeating the message does not hurt – helps you to rub your eyes and become sane enough to understand what is being said 🙂

Next, use this script as a Nagios alert command:

/path/to/ -p $CONTACTNUMBER$
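On the Nagios side, such a script is referenced from a command definition roughly like the one below; the script name is hypothetical (the post elides the actual file name) and the message macros are illustrative:

```
# Sketch only: kookoo_alert.sh is a made-up name for the elided script
define command {
    command_name    notify-by-kookoo-call
    command_line    /path/to/kookoo_alert.sh -p $CONTACTPAGER$ -m "$SERVICEDESC$ is $SERVICESTATE$"
}
```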

Voila! We now get a phone call on critical system alerts. Of course, we still have to make sure our on-call mobile phone is charged – but that’s another story 😛

Exploiting Spring MVC interceptors

Interceptors are a pretty nifty feature in Spring MVC and we mix it with annotations to pull off some cool things in our app that keeps our code neat and tidy.

Let us take the case of protecting your actions with CSRF tokens. One approach is to make CSRF token matching the first thing every controller does, sending a forbidden response if the tokens do not match. Being the technically savvy reader that you are, I am sure you see the problem with this approach: your CSRF-related code is littered all over the place and duplicated in each and every controller action.

So how can we stick to the DRY principle and keep our code sane? How about moving that logic to a Spring MVC interceptor and configuring every request that hits the app to be intercepted by it? Now we have moved our whole CSRF processing to a single place, but there is still a problem: what if you do not want some controller action to be checked for a CSRF token?

Annotations to the rescue. Let us define a custom annotation called Csrf with an attribute called exclude. In the interceptor, we get the target controller method, check whether the Csrf annotation is defined for that method with exclude set, and if so, we skip the CSRF token check.

Interceptor code, HandlerMethod is the target method of the controller:

public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
    // Only act on Spring controller requests
    if (handler instanceof HandlerMethod) {
        if (shouldCheck(request, (HandlerMethod) handler)) {
            String sessionToken = CSRFTokenManager.getTokenFromSession();
            String requestToken = csrfTokenManager.getCSRFToken(request);
            if (sessionToken.equals(requestToken)) {
                return true;
            } else {
                // Token mismatch: reject the request outright
                response.sendError(HttpServletResponse.SC_FORBIDDEN, "Bad or missing CSRF value");
                return false;
            }
        }
    }
    return true;
}

private boolean shouldCheck(HttpServletRequest httpServletRequest, HandlerMethod handlerMethod) {
    Csrf csrf = handlerMethod.getMethodAnnotation(Csrf.class);

    if (csrf != null) {
        // An explicit annotation wins: check unless exclude() is set
        return !csrf.exclude();
    } else {
        // No annotation: check state-changing POSTs, skip safe GETs
        if ("POST".equals(httpServletRequest.getMethod())) {
            return true;
        }
        if ("GET".equals(httpServletRequest.getMethod())) {
            return false;
        }
    }
    return false;
}

Csrf annotation class:

// Retention must be RUNTIME for getMethodAnnotation() to see it
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface Csrf {
    boolean exclude() default false;
}

An example controller with annotation usage:

	@Csrf(exclude = true)
	@RequestMapping(value = "foo", method = RequestMethod.POST)
	public ModelAndView foo(Model model,
			HttpServletRequest request, HttpServletResponse response) {
		// exclude = true means the interceptor skips the CSRF check here
		return new ModelAndView("foo");
	}

We use a similar pattern for our login validation as well; we will expound on this in a separate post. Are you looking forward to hacking on cool things like this? Get in touch with us at, we are always looking for curious people who want to build beautiful products.