
Amazon EC2: Step-by-Step Guide to Set Up a Java Application

Here is a step-by-step guide to setting up an EC2 instance, installing the required software, and deploying a WAR file.

1. The first step, of course, is to get a new EC2 instance. I am launching a new Ubuntu machine, using a free-tier configuration for this demo with all default values. Make sure you have access to the key pair you are using, as you will need it to access the machine to install software, or create a new key pair.

2. Right-click the instance you just launched and click Connect. It will give you details on how to connect to the machine. Follow the instructions and you will be in the machine.

3. Run the following set of commands to get the WildFly server up and running on the EC2 machine:

sudo apt update

sudo apt install default-jdk

sudo groupadd -r wildfly

sudo useradd -r -g wildfly -d /opt/wildfly -s /sbin/nologin wildfly


wget https://download.jboss.org/wildfly/$Version_Number/wildfly-$Version_Number.tar.gz -P /tmp

sudo tar xf /tmp/wildfly-$Version_Number.tar.gz -C /opt/

sudo ln -s /opt/wildfly-$Version_Number /opt/wildfly

sudo chown -RH wildfly: /opt/wildfly

sudo mkdir -p /etc/wildfly

sudo cp /opt/wildfly/docs/contrib/scripts/systemd/wildfly.conf /etc/wildfly/

sudo nano /etc/wildfly/wildfly.conf

sudo cp /opt/wildfly/docs/contrib/scripts/systemd/launch.sh /opt/wildfly/bin/

sudo sh -c 'chmod +x /opt/wildfly/bin/*.sh'

sudo cp /opt/wildfly/docs/contrib/scripts/systemd/wildfly.service /etc/systemd/system/

sudo systemctl daemon-reload

sudo systemctl start wildfly

sudo systemctl enable wildfly

sudo ufw allow 8080/tcp

If you hit port 8080 of the machine, you will see the default WildFly welcome page.

If you are seeing this page, you know that WildFly is installed properly.

4. The next step is to finally add our WAR file. For this example, I will simply download the WAR file into the deployments folder:

sudo wget https://war.file.path/HelloWorld.war -P /opt/wildfly/standalone/deployments/

5. Lastly, we will hit our application URL and validate the deployment.


Azure Security Center

Security is one of the most important aspects of any application. When you deploy an application on the cloud, you have to make sure you handle security at multiple levels, including the computing infrastructure, storage, database, application level, and so on. Azure Security Center is a tool that can assist you in your quest for absolute security for your applications. The tool comes free with a Microsoft Azure account and can help you understand whether any of your resources or applications need attention.

The image above of the Security Center shows how we can easily get a high-level view of our security. It gives us actionable recommendations, such as whether we need to turn on encryption for some of our resources, or whether some API that should be controlled is exposed to the public.

The video below gives an additional view of Security Center usage.

Managed Identities for Azure Resources

In my last post I talked about how one can use Azure Active Directory to manage user access to various resources. But it is not only users who need access to resources; there are times when your application code needs to access cloud resources. Your application might need access to key vaults, databases, storage, etc. This can be managed in a manner similar to how we managed access for users, using Managed Identities. Basically, we give our application or resource an identity, and using that identity it can access any cloud resource, such as a key vault, just like a user.

Managed service identities and Azure VMs


The image above shows how a resource with a managed identity can get a token from Azure AD, and then use this token to access a cloud resource that grants permission to that identity.

Here is a video explaining the concept in detail.

Here are a few key terms you need to understand:

An “Identity” is a thing that can be authenticated.

A “Principal” is an identity acting with certain roles or claims.

A “Service Principal” is an identity that is used by a service or application. It can be assigned roles.

A “Managed Identity” is an identity created for a service, which is like creating an account on the Azure AD tenant. The Azure infrastructure will automatically take care of authenticating the service and managing the account.
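On an Azure VM that has a managed identity, code can fetch a token from the local Instance Metadata Service (IMDS) endpoint without any stored credentials. Here is a minimal Python sketch that builds such a request; the helper name is made up, but the endpoint, `api-version`, and the mandatory `Metadata: true` header follow Microsoft's documented IMDS token API (the actual call only succeeds from inside an Azure VM):

```python
from urllib.parse import urlencode

# Azure Instance Metadata Service (IMDS) endpoint; it is a link-local
# address, reachable only from inside an Azure VM.
IMDS_TOKEN_URL = "http://169.254.169.254/metadata/identity/oauth2/token"

def build_imds_token_request(resource, api_version="2018-02-01"):
    """Build the URL and headers for requesting a managed-identity token.

    `resource` is the service the token is for, e.g. "https://vault.azure.net"
    for Key Vault. The mandatory "Metadata: true" header guards against
    forwarded (SSRF-style) requests.
    """
    query = urlencode({"api-version": api_version, "resource": resource})
    return f"{IMDS_TOKEN_URL}?{query}", {"Metadata": "true"}

url, headers = build_imds_token_request("https://vault.azure.net")
print(url)
print(headers)
```

Issuing an HTTP GET to that URL with those headers from inside the VM returns a JSON body containing an `access_token`, which the application then presents to the target resource.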

Azure Active Directory

Azure Active Directory, or AAD as it is commonly known, is a powerful tool that helps manage users and their access. Let us start by taking a look at the official definition by Microsoft:

Azure Active Directory (Azure AD) is Microsoft’s cloud-based identity and access management service, which helps your employees sign in and access resources in:

  • External resources, such as Microsoft Office 365, the Azure portal, and thousands of other SaaS applications.
  • Internal resources, such as apps on your corporate network and intranet, along with any cloud apps developed by your own organization.



The Azure Active Directory (Azure AD) enterprise identity service provides single sign-on and multi-factor authentication to help protect your users from 99.9 percent of cybersecurity attacks.


One key term here is “Single Sign-On” or SSO. Let’s assume that you work for a company which requires you to access multiple applications: your email, an HR system, a leave management system, a project management system, an employee directory, and so on. Now think of a scenario where you are required to sign in to all these applications independently. For each application you have a separate set of credentials that you need to remember, which weakens security. Additionally, managing access is difficult for admins: when an employee joins or leaves the company, an admin ends up adding or removing credentials across multiple applications, which again is error-prone.

To handle such problems, Single Sign-On provides a mechanism to manage user identities in a single place and provide or revoke access to different applications. Azure Active Directory is such a system, helping manage user identities in one place and control access to various applications and resources on the cloud.

We can see that Azure provides a very simple way to create directories and manage users, groups, and roles. In addition, it allows you to manage user settings, such as whether a user needs multi-factor authentication, or whether a user is located in a specific country and may log in only from there.

Designing for Performance and Scalability

When one thinks about deploying an application to the cloud, the first advantage that comes to mind is scalability. A more precise word would be elasticity, which implies that the application can both scale out and scale in. When talking about scalability, we can scale an application in two ways:

Vertical Scaling: Also known as scaling up, this means adding more physical resources to the machine, for example, more RAM or CPU power for existing boxes.

Horizontal Scaling: Also known as scaling out, this means adding more boxes to handle more load.

Here are some common design techniques that help us manage the performance and scalability of a system:

Data Partitioning: One traditional problem with performance and scalability is around data. As your application grows and your data gets bigger, executing queries against it becomes time-consuming. Partitioning the data logically can help us scale the data while keeping performance high. One example is to manage data for different geographies in different data clusters.
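As a toy illustration of the geography example above, here is a minimal Python sketch that routes a request to a per-region data cluster; the cluster names are invented for the example:

```python
# Geography-based data partitioning: requests are routed to the cluster
# that owns the caller's region. Cluster names are made up for illustration.
GEO_PARTITIONS = {
    "us": "us-east-cluster",
    "eu": "eu-west-cluster",
    "apac": "apac-south-cluster",
}

def cluster_for(region: str) -> str:
    # Fall back to a default cluster for regions without a dedicated partition.
    return GEO_PARTITIONS.get(region, "us-east-cluster")

print(cluster_for("eu"))     # eu-west-cluster
print(cluster_for("latam"))  # falls back to the default cluster
```

The routing layer stays trivial, while each cluster only has to serve queries over its own slice of the data.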

Caching: This is an age-old technique for making sure our system performs fast. The cloud provides us with out-of-the-box caching mechanisms like Redis cache, which can be used off the shelf and help us improve performance.

AutoScaling: Cloud service providers help us autoscale horizontally based on rules. For example, we can set a rule that when the average CPU usage of the current boxes goes beyond, say, 70%, a new box is added. We can also have scale-in rules, for example: if the average CPU usage is below 50%, kill a box.
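The scale-out/scale-in rules just described can be sketched as a small decision function; the thresholds come from the example above, everything else is illustrative:

```python
def autoscale_decision(avg_cpu: float, box_count: int,
                       scale_out_at: float = 70.0,
                       scale_in_at: float = 50.0,
                       min_boxes: int = 1) -> int:
    """Return the new box count: add a box when average CPU is above the
    scale-out threshold, remove one when it is below the scale-in threshold,
    never dropping below a minimum fleet size."""
    if avg_cpu > scale_out_at:
        return box_count + 1
    if avg_cpu < scale_in_at and box_count > min_boxes:
        return box_count - 1
    return box_count

print(autoscale_decision(85.0, 3))  # 4: scale out
print(autoscale_decision(40.0, 3))  # 2: scale in
print(autoscale_decision(60.0, 3))  # 3: within band, no change
```

Keeping a gap between the two thresholds (70% vs. 50%) avoids "flapping", where the fleet repeatedly scales out and back in around a single threshold.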

Background Jobs: Move code that can run asynchronously and independently, like report generation or AI model execution, to batch or background jobs. This will help maintain the performance of core application features.

Messaging Infra: Similarly, use messaging or queue-based communication for asynchronous tasks. The services handling messages can then scale up or down based on need.
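A minimal sketch, using only Python's standard library, of how a web tier can hand slow work (such as report generation) to a queue and let a worker process it asynchronously; in a real system the queue would be a managed service (e.g. a cloud message queue) and the workers would scale with queue depth:

```python
import queue
import threading

jobs: queue.Queue = queue.Queue()
results = []

def worker():
    # Drain the queue until a None sentinel arrives.
    while True:
        job = jobs.get()
        if job is None:
            jobs.task_done()
            break
        # Stand-in for slow work like generating a report.
        results.append(f"report generated for {job}")
        jobs.task_done()

t = threading.Thread(target=worker)
t.start()

# The "fast path" just enqueues work and returns immediately.
for customer in ["acme", "globex"]:
    jobs.put(customer)

jobs.put(None)   # signal shutdown
jobs.join()      # wait until all queued work is processed
t.join()
print(results)
```

The key property is decoupling: the producer never waits on the slow work, and producers and consumers can be scaled independently.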

Scale Units: At times you will find that when scaling you need to scale more than just virtual machines; for example, an application uses X web servers, Y queues, Z storage accounts, etc. We can create a unit consisting of the infrastructure needed and scale resources as a unit.

Monitoring: Have performance monitoring in place to make sure your application and services are meeting their SLAs. Monitoring will also help us identify problem areas, for example, one service that might be slowing down the whole system.

Common Security Threats

In the last post, I talked about some of the design areas one needs to consider when designing an application for the cloud. Here I will talk about some of the very common threats that an architect should consider when designing the system.

Physical Layer Access: At the lowest level of security, one needs to consider the fact that physical machines can be accessed and tampered with. This is more likely with on-premise hardware infrastructure than on the cloud. But even when one chooses a cloud platform, it makes sense to understand and question the level of physical security implemented by the cloud service provider to avoid any unauthorized access.

Virtual Access: The next level of access is someone gaining virtual access to the machines manually, programmatically, or through malware. Basic techniques can help mitigate this threat: using a Virtual Network to isolate data and code machines, using identity management and role-based access to make sure only authorized personnel or code can access data, having security groups and firewalls in place, and making sure security patches and antivirus definitions are always up to date.

Manual Errors: A misconfiguration, such as a VM exposed through unwanted open ports, can be another problem. Implementing infrastructure as code, where automated scripts are responsible for creating and maintaining infrastructure, can help avoid manual errors.

Weak Encryption: Though most cloud service providers give us options to encrypt our data, filesystems, and disks, it is the responsibility of the architect to make sure strong encryption is implemented. Tools like Key Vault services can help store encryption keys and avoid manual key handling. Also, all your APIs and pages dealing with important data should use the HTTPS (secure) protocol.

Application Layer Attacks: Common attacks like Code Injections, SQL Injections and Cross-Site Scripting (XSS) can be targeted towards the application. It is the responsibility of the architect and development team to make sure best practices are followed while writing the code to tackle these attacks.
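As a small illustration of guarding against one of these attacks, here is a Python sketch using an in-memory SQLite database: the malicious input is harmless when bound as a parameter instead of being concatenated into the query text:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

# Attacker-controlled input. Concatenated directly into SQL text, it would
# try to terminate the statement and drop the table.
malicious = "alice'; DROP TABLE users; --"

# Safe: the value is passed separately from the SQL text, so the driver
# treats it purely as data, never as SQL.
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (malicious,)
).fetchall()
print(rows)  # [] -- no user has that literal name

# The table survives and still holds its row.
print(conn.execute("SELECT COUNT(*) FROM users").fetchone()[0])  # 1
```

The same principle (bind parameters, never string concatenation) applies to every database driver, and analogous output-encoding rules apply for XSS.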

Perimeter Layer Attacks: DDoS, or Distributed Denial of Service, is a common attack used by hackers to bring an application down. Most cloud service providers give you out-of-the-box solutions that can help manage these threats.

Designing for Security in Cloud

At times we hear news that data of a big software company was compromised. The cloud gives us a lot of capabilities, but along with those come certain vulnerabilities. As anyone can now access resources on the cloud, it is important that proper security measures are thought through while designing the system. Let’s take a look at some of the core security areas to consider when designing for the cloud.

Infrastructure Access: Who can access a service, a filesystem, or a database? What kind of access is required? How can the resources be accessed? One needs to answer these questions before getting started with the application development process. Role-based access is an important tool that helps architects ensure proper security. For example, if a user or an application just needs read access to a file system or database, the rules should not allow any write or update access.

Traceability: Most cloud service providers allow you to see any changes being done on infrastructure. You can monitor which resources were updated by whom and when.

Layered Approach: When implementing security, most cloud service providers encourage a layered approach. That is, implement security rules at different layers, like the load balancer, application server, application code, database, and so on, so that even if one layer is compromised, your core application and data are still secure.

Encrypt Data at Rest and in Transit: Cloud service providers provide mechanisms to secure your data at rest and in transit. Encryption is one big tool in your arsenal; for example, the simple step of using HTTPS instead of HTTP will ensure your data is encrypted and secure while in transit. Similarly, most cloud service providers have encryption available for disks and databases to help secure data at rest.

Data-type-specific security: You also need to understand that there will be certain needs specific to the type of data you are storing. For example, if you are storing healthcare-related data, you will need to understand HIPAA (Health Insurance Portability and Accountability Act) requirements; for finance-related data you might want to check PCI (Payment Card Industry) data standards. There might also be region-specific needs: in Europe we have GDPR, or the General Data Protection Regulation, for personal data.

Designing applications for Cloud

The cloud has changed the way we think about application design. We need to make sure we are using cloud capabilities to the fullest so that we can create applications that are more robust and can withstand changing load and unexpected failures. When one starts designing an application, make sure the following points are addressed.

Scalability: Perhaps the single keyword that has made the cloud popular. Whenever we talk about cloud-based systems, the image that comes to mind is of an elastic infrastructure that can grow or shrink based on user needs; in more technical terms, the application should be able to scale out and scale in. Microservices-based design is synonymous with a scalable system, where each microservice can scale independently of the rest of the system.

Security: We know that cloud-based systems work on shared responsibility, where part of the responsibility to secure the system lies with the development team. The cloud provides many features that can be used to secure the system, but it is the onus of the development team to understand the capabilities of the cloud and implement security as per application needs. Some common best practices are: use Virtual Networks to isolate infrastructure, use firewalls and security groups, use identity features like IAM to control infrastructure access, and use encryption and out-of-the-box tools like DDoS protection from cloud service providers.

Performance: The cloud gives development teams a lot of power in terms of choosing from different services and infrastructure ranges, but it is the responsibility of the developers to understand their performance needs and choose options accordingly. One needs to understand which operations are critical and what can be done in the background asynchronously. Run performance tests based on the expected load and make sure you have the required capacity assigned.

Availability: The cloud is famous for providing availability of more than 99% for most of its services. At times you have to pay extra for the reliability you need. You will also need to understand the concepts of availability zones and geographies to make sure you are using these capabilities to the fullest. If your application uses multiple services, you need to understand the availability promise of each of those services, as you can only commit to availability based on the weakest link in the chain.

Cost Optimization: Often underestimated, but one of the most important factors when using the cloud. With the ease of onboarding new infrastructure and services, developers at times fall into the trap of over-provisioning infrastructure, leaving capabilities unused, not freeing up unused resources, and so on. Thankfully, most cloud service providers give you tools to monitor the usage of infrastructure and even tools that can help you optimize your cost.

Automation: Cloud service providers give us tools for setting up infrastructure as code. The idea is to automate the process of setting up all infrastructure needs in order to avoid human error. This also helps in restoring failed services automatically.

Handling Failures: Moving to the cloud also requires one to understand what can go wrong and how to handle those situations. For example, a microservices-based architecture helps one develop a system that is scalable and maintainable, but it also brings in a risk: with multiple microservices, we have multiple things that can fail. Does our design take into account what will happen if one or more services are down?

Monitoring: Once an application is deployed to a production environment, you do not have any control over how your code and infrastructure will behave. If any problem occurs, logs are your only friends. Plus, if you have monitoring and alerting systems in place, you can detect and act upon potential issues before end customers start noticing them. Again, cloud service providers give us tools for monitoring and alerting on infrastructure- and code-level logs. We need to make sure we consider which options to use while we are designing the solution.

AKF cube for analyzing application scalability

The AKF Scale Cube is an interesting concept for analyzing how we can scale software. The cube has three dimensions, x, y, and z, representing three different ways of scaling a software solution.

x-axis, or mirroring: This axis represents the simplest form of scaling: creating more copies of the same solution for faster and wider access. For application scaling, we add more servers with a copy of the application and load-balance the traffic. This has obvious challenges in maintaining session-related information, if applicable to the solution. Scaling data can be more challenging, as just creating copies of the database requires you to keep them in sync all the time.

y-axis, or functional decomposition: This is more of a service-oriented architecture’s axis. We divide the application into functions or services. If a particular service is used heavily, we create copies of only that service and scale it independently.

z-axis, or division based on a logical split of data: This is scaling based on data. For example, if you need to support multiple geographies, you copy the complete code to different deployments; each deployment is a replica of the same application, divided based on data needs.
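The z-axis idea can be sketched as routing each record to one of several identical deployments based on a stable hash of its key; the shard names here are purely illustrative:

```python
import hashlib

# z-axis scaling: identical deployments, each owning a slice of the data.
SHARDS = ["shard-0", "shard-1", "shard-2"]

def shard_for(user_id: str) -> str:
    # Use a stable hash so the same user always lands on the same shard
    # (Python's built-in hash() is randomized per process, so avoid it here).
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

print(shard_for("user-42"))
print(shard_for("user-42") == shard_for("user-42"))  # True: routing is stable
```

The same function works whether the split key is a user id, a tenant, or a geography; only the key and the shard list change.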


HTTPS – the first line of defence

In today’s world, when our online interactions are increasing every day, the security of our online transactions becomes very important. If you are an end user of a website that handles important financial or informational transactions, like bank sites, eCommerce sites, email sites, etc., the very basic thing you should check is that the protocol used for the transaction is HTTPS. HTTPS stands for HyperText Transfer Protocol Secure, that is, Secure appended at the end of the HTTP protocol. For example, if you are using the Chrome browser, you can see a lock symbol at the start of the website name in the address bar, indicating that the site uses HTTPS. If you click on the lock symbol, you will see more details about the certificate provided by the website claiming that it is secure.

Let’s try to understand how HTTPS works.

Step 1: The browser sends a request for an HTTPS-secured page.

Step 2: The website sends back an SSL certificate confirming that it is secure. The SSL certificate is issued by a certificate authority like GoDaddy and can be verified by the browser. Along with the certificate, the website sends the browser its public key.

Step 3: Once the browser receives the certificate and public key from the website, it authenticates the certificate and, if satisfied, proceeds to set up a symmetric key using the public key provided by the website.

Step 4: The browser generates a symmetric key, encrypts it using the public key provided by the website’s server, and sends it back to the server. The web server receives the encrypted symmetric key and decrypts it using its private key.

Step 5: For any further communication, the server and the browser both use this symmetric key.

Now, to understand the concept completely, we need to understand how public/private key encryption differs from symmetric key encryption. In the case of symmetric key encryption, the key that encrypts the message can also be used to decrypt it. In the case of public key/private key encryption, a message encrypted with the public key can only be decrypted with the private key. Symmetric key encryption is faster but not as secure as public/private key encryption.
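To make the handshake concrete, here is a toy Python walkthrough of steps 2 through 5. It uses textbook RSA with tiny primes as the server’s public/private key pair and a simple XOR stream as the symmetric cipher; both are deliberately insecure stand-ins (real TLS uses vetted algorithms), only the flow of the key exchange is the point:

```python
import secrets

# Toy server key pair: n = 61 * 53 = 3233, e = 17, d = 2753
# (17 * 2753 = 46801 ≡ 1 mod 3120, where 3120 = (61-1)*(53-1)).
N, E, D = 3233, 17, 2753

def xor_cipher(data: bytes, key: int) -> bytes:
    # Symmetric: the very same key both encrypts and decrypts.
    return bytes(b ^ key for b in data)

# Step 2: the server shares its public key (N, E) with the browser.
# Step 4: the browser generates a symmetric key and encrypts it with
# the server's public key; only the private key can recover it.
sym_key = secrets.randbelow(256)
encrypted_key = pow(sym_key, E, N)

# The server decrypts the symmetric key with its private key.
server_key = pow(encrypted_key, D, N)
assert server_key == sym_key

# Step 5: both sides now use the shared symmetric key for the payload.
ciphertext = xor_cipher(b"account balance: 42", sym_key)
plaintext = xor_cipher(ciphertext, server_key)
print(plaintext)  # b'account balance: 42'
```

Note how the slow asymmetric operation is used exactly once, to move the symmetric key safely; all subsequent traffic uses the faster symmetric cipher, which mirrors the division of labor in real TLS.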

If you are interested in understanding how each step mentioned above assures security in communication, read on.

Let’s look at each step and try to understand how each step is secured.

Step 1: The browser is just sending a request for website contents at this point, the actual transfer of information is not started.

Step 2: The server is only sharing its public key in this step, so not worried about security.

Step 3: The browser validates the SSL certificate, which must be issued by an authorized Certificate Authority (CA). Browsers come packaged with details of all major CAs and can validate the certificate against their local cache or confirm with the certificate authority. The browser matches the URL being addressed against the certificate received; this way it makes sure that no fake website or server is trying to send the data.

Step 4: The browser has encrypted the symmetric key with the public key provided by the server. So even if the packet is compromised and received by a fake server, that server will not be able to decrypt the message, as only the original server has the private key.

Step 5: By this step, only the browser and the intended server have the symmetric key. Any communication happening can be decrypted by only these two parties, so even if there is a leak, the message content remains safe.