Tools for Monitoring Application Logs

Monitoring logs for an application is an important part of any deployment and support cycle. You want to keep a check on logs to understand what is happening with your application. But these logs are mostly gigabytes of raw data, and making sense of them is not easy. Thankfully, there are many off-the-shelf tools available to help us with this tedious task. I have already talked about ELK, which is a popular tool for log analytics. In this post, we will talk about some of the other popular tools and get an idea of how they can help us.

Splunk is a tool to collect and analyze logs. Splunk has three core components: a forwarder, which forwards data to the Splunk server; an indexer, which takes the data and indexes it for better search; and finally the search head, which actually looks into the data and searches for relevant information. An important aspect of Splunk is that it can easily scale horizontally with a Splunk cluster, so you can manage gigabytes of data coming in the form of logs.
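As a rough sketch of how the forwarder piece is wired up, a Splunk universal forwarder is usually pointed at log files and at the indexer through simple configuration files like the following (the paths, index name, and hostname are illustrative placeholders):

```
# inputs.conf - tell the forwarder which files to watch
[monitor:///var/log/myapp/app.log]
index = myapp_logs
sourcetype = myapp

# outputs.conf - tell the forwarder where the indexer lives
[tcpout:primary_indexers]
server = splunk-indexer.example.com:9997
```

With this in place, the indexer receives the raw events and the search head can query them.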

Graylog is another option for log monitoring. You can stream your logs to Graylog, which uses MongoDB and ElasticSearch behind the scenes to make sure you get fast and useful analysis.

Then there are specialized log analysis tools like SumoLogic, which work on your log data and can provide additional analytics based on it, helping you make sense of your logs as well as providing suggestions.

The list of tools providing log management, monitoring, and analysis is growing by the day as people recognize the need for and importance of log monitoring. Here are some additional resources for interested readers.

Prometheus for monitoring and alerting

I have talked about how to use the tools provided by Azure and AWS for monitoring the health of your applications and servers. But there will be times when you need something open source to keep you independent of the underlying cloud service provider. Prometheus is one such tool that helps in these cases.

Prometheus is an open-source monitoring and alerting tool. It gathers metrics in a time-series data format from various sources and monitors them. Written in Go, it can be combined with Grafana or other consumers to capture and visualize data.

Prometheus architecture

The image above shows the architecture of Prometheus. The following are the core components:

Prometheus Server: The server collects metrics from applications and stores them locally. Data is collected at regular intervals and stored for processing.

PushGateway: There are cases when an endpoint cannot be exposed by the application due to the nature of its work, such as short-lived batch jobs. The Push Gateway captures the data, transforms it into the Prometheus data format, and then pushes it to the Prometheus server.

Alert Manager: Based on the collected data, rules can be configured to send alerts in the form of SMS, email, etc.

Client Libraries: A set of client libraries is provided which can be added to application code for enabling monitoring endpoints.
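Tying these components together, the server is driven by a small YAML configuration that lists what to scrape. A minimal sketch (the job name and target address are assumptions for illustration):

```yaml
# prometheus.yml
global:
  scrape_interval: 15s        # how often to pull metrics

scrape_configs:
  - job_name: "my-app"        # label attached to every scraped series
    static_configs:
      - targets: ["localhost:8080"]   # endpoint exposing /metrics
```

With this configuration, Prometheus pulls http://localhost:8080/metrics every 15 seconds and stores the samples as time series.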


ELK stack – Getting started

In the last three posts, I talked about three popular off-the-shelf monitoring tools from cloud service providers: AWS CloudWatch, Azure Application Insights, and Azure Monitor. A discussion about monitoring cloud-native applications and microservices is incomplete without discussing the ELK stack. The ELK stack provides end-to-end functionality, from capturing logs, to indexing them in a useful manner, to finally visualizing them in a form that makes sense. The three core components that make up the ELK stack are ElasticSearch, Logstash, and Kibana.


As the image above shows, the three tools forming the ELK stack work together: Logstash is responsible for the collection and transformation of logs, ElasticSearch indexes the logs and makes them searchable, and finally Kibana helps visualize them in the form of reports which are easy to make sense of.

Let’s take a look at these three components.

ElasticSearch: is a popular search engine based on Apache Lucene. It indexes data, enables quick searches, and provides REST APIs for accessing the data. It is highly scalable and reliable, built as a NoSQL document store.

Logstash: provides connectors for various input sources and platforms, helping in the collection of log data from different sources. It can collect, parse, and manage a variety of structured and unstructured data.

Kibana: is a visualization tool that provides various user-friendly visual options for reporting, like graphs, bars, tables, etc. One can create and share dashboards for an easy understanding of data in the form of visual reports.
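As a sketch of how these pieces connect, a Logstash pipeline is described in a single configuration file with input, filter, and output sections (the file path and index name here are illustrative):

```
# logstash.conf
input {
  file {
    path => "/var/log/myapp/*.log"     # logs to collect
    start_position => "beginning"
  }
}
filter {
  grok {
    # parse web-server style lines into structured fields
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]        # where ElasticSearch is listening
    index => "myapp-logs"
  }
}
```

Once indexed, the myapp-logs index can be added to Kibana as an index pattern and visualized.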


Azure Monitor

Azure Monitor is a tool that acts as an umbrella for services that help us gather telemetry data and analyze it. Azure Monitor captures data in the form of logs and metrics. Logs contain time-stamped information about changes made to resources, and log data is mostly in text form. Metrics, in contrast, are numerical values that describe some aspect of a system at a point in time.


The image above shows how Azure Monitor gathers data in the form of logs and metrics from applications and other Azure resources. Once data is gathered, Monitor can be used to view and analyze it in the form of tables and graphs. In addition, one can set up an automated response in the form of alerts or by passing the information to Logic Apps or custom APIs.

You can capture the following data with Azure Monitor:

Application data: Data that relates to your custom application code.
Operating system data: Data from the Windows or Linux virtual machines that host your application.
Azure resource data: Data that relates to the operations of an Azure resource, such as a web app or a load balancer.
Azure subscription data: Data that relates to your subscription. It includes data about Azure health and availability.
Azure tenant data: Data about your Azure organization-level services, such as Azure Active Directory.
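Collected log data can be explored with queries written in the Kusto Query Language. As a small sketch (the table and time window are just examples), this counts recent subscription-level operations from the activity log:

```
// activity log entries from the last day, grouped by operation
AzureActivity
| where TimeGenerated > ago(1d)
| summarize operationCount = count() by OperationName
| order by operationCount desc
```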



Azure Application Insights

Once your application is deployed in a production environment, you want to make sure everything is working fine with it. You would like to analyze how many exceptions and errors are being thrown, how many requests are being handled, what the memory and CPU usage is, and so on. In Azure, you can do all this by using the Application Insights tool.

Application Insights instrumentation in your app sends telemetry to your Application Insights resource.

You can see in the image above that your application components publish data to the Application Insights service, from where you can create alerts and reports, or trigger other actions based on your needs.

Setting up Application Insights needs some instrumentation on your application side. Mostly it is as simple as importing the SDK and adding a config file. Here is a detailed explanation of how to implement it for a Java project.
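For the Java SDK, the config file is an ApplicationInsights.xml placed on the application classpath. A minimal sketch (the instrumentation key below is a placeholder; use the key from your own Application Insights resource):

```xml
<?xml version="1.0" encoding="utf-8"?>
<ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings">
   <!-- placeholder key: replace with your resource's instrumentation key -->
   <InstrumentationKey>00000000-0000-0000-0000-000000000000</InstrumentationKey>
</ApplicationInsights>
```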

You can gather the following information from Application Insights:

  • Request rates, response times, and failure rates
  • Dependency rates, response times, and failure rates 
  • Exceptions
  • Pageviews and load performance
  • AJAX calls
  • User and session counts
  • Performance counters 
  • Host diagnostics
  • Diagnostic trace logs
  • Custom events and metrics


Amazon CloudWatch

Once your application is deployed to production, monitoring is the only friend that can help you avoid embarrassing situations like a service not responding or an application running very slowly. You would like to make sure that monitoring and alerting systems are in place so that you learn about a problem and fix it before you start hearing complaints from your end users. You would also like to make sure automated systems are in place to handle such issues.

Amazon CloudWatch is a service provided by AWS that can help us add monitoring for AWS resources.


Let’s try to understand the above design. AWS services publish data to CloudWatch in the form of metrics. Metrics contain time-ordered data for various aspects, like CPU usage. CloudWatch processes the data and is capable of showing it in the form of graphs and bars. One can also set alarms on certain events, for example when CPU usage goes beyond 75%. Based on an alarm, action can be taken, like sending an email notification to admins or auto-scaling the application by adding an additional server to reduce CPU usage. One can also publish additional application data to CloudWatch for monitoring.

Let’s take a look at how we can create metrics and alerts for an EC2 instance. Basic CloudWatch monitoring is enabled by default for EC2. You can enable detailed monitoring, which will register events every minute, but it is a paid option.

For this example, I will move ahead with basic monitoring. As it is enabled by default, once you go to CloudWatch and select EC2 resources, you will see some default metrics already in place.

As a next step, we will add alarms for the instances. You can set up alarms at the individual instance level, at the scaling group level for autoscaling, by type of instance, and so on. For the sake of this example, I am choosing a metric of average CPU utilization across all my EC2 instances.

So the alert I am setting says that whenever the average CPU utilization across all my instances goes beyond 50%, an alarm should be raised. As a result of the alarm, I can make CloudWatch send a message to an SNS (Simple Notification Service) queue, from which some application or serverless function can read and send email or SMS notifications. One can also set autoscale options, like adding or removing servers, or simply restart an EC2 instance based on the alarm.
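The same alarm can also be scripted. As a rough sketch using the AWS CLI (the alarm name, threshold details, and SNS topic ARN are illustrative placeholders):

```shell
# alarm when average EC2 CPU stays above 50% for two 5-minute periods
aws cloudwatch put-metric-alarm \
  --alarm-name high-cpu-alarm \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --statistic Average \
  --period 300 \
  --evaluation-periods 2 \
  --threshold 50 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:my-alert-topic
```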

Scalability in Cloud

Before the cloud era, you were never sure how much hardware was sufficient for you. You did not want to over-provision and leave infrastructure unused; at the same time, you did not want your customers to face issues just because you did not have sufficient infrastructure. The cloud has changed this for us and made scalability possible by providing out-of-the-box services that help us create a scalable system.

The cloud helps, but it is the onus of the development team to design the application in a manner that is scalable, and to understand cloud capabilities and use them effectively in order to create a scalable and cost-effective system. For example, for some services it might make sense to use a serverless function that is auto-scalable rather than code deployed on a virtual machine. Similarly, a NoSQL-based database might be more scalable than a SQL database.

Let’s take a look at three core aspects we need to consider for scalability.

Compute: The most obvious compute option you have on the cloud is a virtual machine. One can scale virtual machines by setting up autoscaling with rules such as adding an instance if average CPU usage goes beyond a certain percentage. There are also compute options provided by cloud service providers, like serverless functions, batch executions, and off-the-shelf application environments, which are auto-scalable. One needs to carefully observe all the services available and take a final call on what is most suitable for the application.

Additionally, with the popularity of container-based implementations, most cloud service providers have offerings for Docker- and Kubernetes-based deployments. It is worth exploring whether those can help your design in the longer run.

Database: After the compute resources, one needs to understand the database needs of the application and how to scale them. A NoSQL database is generally more scalable than a SQL-based database, as its data can easily be distributed across storage. Even if one needs to go with a SQL database, there are techniques like sharding which can help make the database scalable. Again, one needs to understand the offerings from the cloud service provider being used and choose the best options available for the database, backup, and replication.
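To illustrate the sharding technique mentioned above, here is a small Java sketch (the shard names and key scheme are made up for the example): each record's shard is derived deterministically from its key, so the same key always routes to the same database node.

```java
import java.util.List;

public class ShardRouter {
    private final List<String> shards;

    public ShardRouter(List<String> shards) {
        this.shards = shards;
    }

    // Pick a shard deterministically from the key's hash.
    // Math.floorMod keeps the index non-negative even for negative hash codes.
    public String shardFor(String key) {
        int index = Math.floorMod(key.hashCode(), shards.size());
        return shards.get(index);
    }

    public static void main(String[] args) {
        ShardRouter router = new ShardRouter(
                List.of("shard-db-0", "shard-db-1", "shard-db-2"));
        // The same key always resolves to the same shard.
        System.out.println(router.shardFor("customer-42"));
    }
}
```

Note that with this naive modulo scheme, adding a shard changes the routing for existing keys; consistent hashing is the usual refinement when data cannot be reshuffled cheaply.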

Storage: Most cloud service providers have off-the-shelf storage services, like Amazon S3 and Azure Storage. Additionally, there are various options which provide different cost benefits based on the kind of usage; for example, one can choose cheaper storage options when data is not used frequently. Also, as these storage options have better backup, encryption, and restore capabilities, one needs to decide what should be stored on a VM disk versus what can be stored in external storage.

AWS Autoscaling application instances

In the last few posts, I have talked about how to host your application on an EC2 instance, how to create replicas of application instances, and how to load-balance them behind a classic load balancer. The load balancer did help us distribute load among various instances of the application, but that approach had a serious drawback: you had to have all the instances of the application up and running beforehand. We had not thought about the money we would be paying even when the application load is too low, or about a situation where our instances would choke when the load is too high.

Autoscaling comes to the rescue in such situations. As the name suggests, you can tell AWS to scale the application instances up or down based on the load coming in. Let’s take a look at how this can be achieved.

To start, go to the “Auto Scaling Groups” option and select the “Create Auto Scaling Group” button. It will ask you to create a Launch Configuration first. This is nothing but a configuration to launch your EC2 instances. Similar to the way we created a replica of an EC2 instance, we will select all the options; the only difference is that we are not actually launching an instance at this point, we are just creating a configuration which Auto Scaling will use to create new instances. Make sure you select the AMI, machine type, security group, and key pair carefully to avoid any surprises.

Once you create and select the launch configuration, you will be redirected to create an autoscaling group.

Also, at this point, you can specify whether your application instances are accessed through a load balancer.

Next, you can select scaling rules. For example, I can say that the minimum number of instances I want for autoscaling is 1 and the maximum it should go up to is 5. I can also set a rule based on an average CPU utilization of 70%: if utilization goes above that, create a new instance; if it goes below, kill an instance. Of course, the max and min limits will be honored.

You can leave all other options as default or provide settings if needed. Once created, your autoscaling group takes care of handling the load for your application, spawning new instances whenever needed and maintaining the cost-load balance for you.
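The same setup can also be scripted with the AWS CLI. A rough sketch (the names, AMI id, and availability zone are illustrative placeholders):

```shell
# create the launch configuration the group will use for new instances
aws autoscaling create-launch-configuration \
  --launch-configuration-name my-app-lc \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro

# create the group with the 1-5 instance limits from the example above
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name my-app-asg \
  --launch-configuration-name my-app-lc \
  --min-size 1 \
  --max-size 5 \
  --availability-zones us-east-1a
```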

AWS Classic Load Balancer

Load balancing is a very common need for any production application. In this post, I will try to demonstrate how to use a classic load balancer with AWS to manage load among various EC2 instances. I am using simple Hello World code here which returns the address of the EC2 instance, to distinguish between the instances responding.

Here is the code used to return the message:

import java.net.InetAddress;
import java.net.UnknownHostException;

public class HelloWorldService {
    public String helloWorld() {
        String address = "IP Address";
        try {
            InetAddress inetAddress = InetAddress.getLocalHost();
            address = inetAddress.getHostAddress();
        } catch (UnknownHostException e) {
            // keep the placeholder address if the host cannot be resolved
        }
        return "Hello World from " + address + "!!";
    }
}
After creating replicas of the same code, we have 3 instances that now have our code. Next, we need a load balancer that can balance the load among these 3 instances.

Once you click the “Create Load Balancer” button, you will be given 3 options to choose from: Application Load Balancer, Network Load Balancer, and Classic Load Balancer.

For the sake of this example, we will use the Classic Load Balancer. First, you will need to set the “listener port and protocol” and “target port and protocol”.

After that, you will be asked to provide more information, like security groups, health check path and port, instances to be balanced, and so on. Finally, you can review and create the load balancer. Once the load balancer is created, you will be given a DNS name which you can use to call the service. You can also check instance health and set alerts if needed.

In my case, to validate that my load balancer is working, all I need to do is hit the load balancer URL and check that the IP address returned by the call changes, showing that requests are going to different servers.
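A quick way to run this check from the command line (the DNS name below is a placeholder for the one AWS assigns to your load balancer):

```shell
# each response should print the IP of whichever instance served it
for i in 1 2 3 4 5; do
  curl -s http://my-classic-lb-1234567890.us-east-1.elb.amazonaws.com/
  echo
done
```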

Amazon EC2: Create Image and Replica instance

There are times when you want to bring up a copy of an instance running on EC2. A common use case: your in-use instance faces some ad-hoc problem and stops responding, and you want to bring up an exact replica as soon as possible to minimize the impact on end users. Fortunately, this can be done easily in a couple of steps.

The first step is to create an image, or AMI (Amazon Machine Image). As the name suggests, an AMI gives you an image of the current state of the machine. All you need to do to create an AMI is right-click on an EC2 instance and select the option to create an image.

Once the image is created, you will be able to see it under the AMIs section. Creating an instance from an image is equally easy. All you need to do is select the AMI and click launch.
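Both steps can also be done from the AWS CLI. A sketch (the instance id, AMI id, and names are illustrative placeholders):

```shell
# step 1: create an AMI from the running instance
aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name "my-app-replica-image" \
  --description "Image of the production app server"

# step 2: launch a replica from the resulting AMI id
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --key-name my-key-pair
```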

You will need to choose the instance type, security group, key pair, etc., just like when creating a new instance. When the instance is launched, you will see it is an exact replica of the original image. This can be handy in a situation where your server just went down and you need to bring the application back up. You can bring a new replica instance up and assign the Elastic IP, and your application is back in minutes.