Monitoring Microservices: Part 1

Microservice-based architecture comes with many advantages over monoliths, especially in scalability, extensibility, and maintainability: instead of one big application, we are dealing with smaller pieces that are easier to manage and update.

But every good thing comes with some challenges, and in the case of microservice-based architecture, monitoring the application is one such challenge.

Earlier you looked in one place for logs, server health, etc. to find issues or check status. But in a distributed system with tens or hundreds of microservices, it is difficult to monitor the status of each service individually. For example, say a service calls another service, which in turn calls yet another service to fulfill a user’s request. If a request is failing or responding very slowly, which of the services is the culprit? Whose logs should be analyzed?

To solve this, we have a set of practices that can help us build a robust and effective monitoring strategy.

Before getting into the strategy to monitor microservices, let’s take a look at a few core concepts one needs to be aware of: logs, metrics, and traceability.

Logs: Logs are the first place you will look if your application is not behaving in the expected manner. Your application emits logs to publish its current state. Logs are mostly categorized into levels such as Debug, Info, Warning, and Error.
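For illustration (this is my own sketch, not from the original post), here is how those levels look with Python’s standard logging module; the service name is made up:

```python
import logging

logging.basicConfig(level=logging.INFO)        # emit Info and above; Debug is filtered out
logger = logging.getLogger("order-service")    # hypothetical service name

logger.debug("cart contents: %s", ["item-1"])  # diagnostic detail for developers
logger.info("order placed successfully")       # normal state changes
logger.warning("payment retry attempted")      # something odd, but recoverable
logger.error("payment gateway unreachable")    # a failure that needs attention
```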

Metrics: Metrics are time-series data published by applications to provide a quick view of some aspect that changes over time depending on external conditions like request traffic. For example, a latency metric can show that 95% of all calls respond in under 300 ms.
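To make the 95th-percentile idea concrete, here is a tiny nearest-rank percentile computation in Python; the latency numbers are invented:

```python
import math

def percentile(values: list[float], pct: float) -> float:
    """Nearest-rank percentile: the smallest sample with at least pct%
    of all samples at or below it."""
    ordered = sorted(values)
    rank = max(1, math.ceil(len(ordered) * pct / 100))  # 1-based rank
    return ordered[rank - 1]

# Hypothetical per-request latencies, in milliseconds.
latencies_ms = [120, 180, 250, 90, 310, 200, 150, 280, 170, 260]
print(percentile(latencies_ms, 95))  # -> 310, the p95 for this sample
```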

Traceability: Traceability is very important when it comes to distributed systems with multiple microservices. Say Service A calls Service B, which calls C, and so on. If you see requests failing or responding slowly, you need to track which service is facing issues. Traceability helps track the journey of a request and monitor it at every step.
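As a sketch of the idea (my own illustration): each service reuses the correlation ID it receives and forwards it downstream, so one request can be followed across services. The header name and URL below are made-up conventions; real systems typically use a standard such as W3C Trace Context or a tool like OpenTelemetry:

```python
import uuid

import requests  # third-party HTTP client, used here only for illustration

def handle_request(incoming_headers: dict) -> None:
    # Reuse the caller's correlation ID, or start a new trace at the edge.
    correlation_id = incoming_headers.get("X-Correlation-ID", str(uuid.uuid4()))

    # Include the ID in every log line so one request can be traced end to end.
    print(f"[{correlation_id}] handling request in Service A")

    # Forward the same ID so Service B (and C, and so on) log against one trace.
    requests.get("http://service-b.internal/api",
                 headers={"X-Correlation-ID": correlation_id})
```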

Listening

Listening is a very important skill for any leader.

https://hbr.org/2022/09/in-a-crisis-great-leaders-prioritize-listening

The article talks about how the importance of this skill increases multifold in a crisis situation. The importance of listening lies in the fact that one person cannot have all the information or perspectives. So it is important to have discussions with team members and get different perspectives.

The article focuses on the fact that information available at the top level might be different from ground reality. This is another important reason to “listen” to diversified groups at different levels, so as not to make biased decisions.
“Then there’s the echo chamber. Whether we know it or not, most of us gravitate to people (and information) that confirm things we already think and believe. We’re drawn to individuals and ideas that concur with, and even end up shaping, our worldview.”

Someone living in a silo is bound to be away from ground reality, might convince himself that “this is not going to happen to me”, and make incorrect decisions. It might be too late in the game when they realise their mistake, and it is difficult to make amendments at that point.

We tend to downplay or dismiss threats along the lines of “it’ll never happen to me, and even if it does, it won’t be that bad.” And when the chips finally do fall, we can become anchored to one particular plan or solution, even as the crisis shifts or changes direction. We may continue down one path long after it makes sense to do so, because of sunk costs: “we’ve come this far; it’s too late to change course.”

https://hbr.org/2022/09/in-a-crisis-great-leaders-prioritize-listening

Design vs Code – The curse of Agile

Found some old notes of mine about Agile, almost a decade old (note that I was referring to Agile as new 🙂), but surprisingly the core question I was pondering a decade back still holds true. The question development teams still struggle with is: how much time is sufficient for the design phase when one is following Agile practices?

Agile is new, bold, and sexy. Everyone wants to be Agile. Every resume I have seen of late has “Agile Development” mentioned in one form or another.

But the question is, what is Agile? Some would say Scrum is Agile, having status meetings every day is Agile, development in sprints is Agile. Well, if you look in the dictionary, agile means “able to move quickly and easily”. Scrum, sprints, TDD, etc. are just some tools to get things done faster. I have shared my thoughts on agile development here.

With Agile development, I have too often seen teams trying to jump into development from day one. Design and architecture sound old-fashioned. Why spend time on something that will not show up on the screen as the end product? Why not instead spend time developing something one can demo to clients, and get appreciated for fast work?

When you are driving, speed is attractive, but it demands clarity, stability, balance, and, most importantly with a team, synchronization. By pushing design to the back burner, we are saying “let’s start driving, we will figure out the route map on the way”, and before we know it, every member of the team has come up with a different route map and is sitting miles away from the others.

I would not stop people from developing until the design is complete, but some ground rules should definitely be set beforehand.

Rate Limiting- Basics

Rate limiting is a potent tool for avoiding cascading failures in a distributed system. It helps applications avoid resource starvation and handle attacks like Denial of Service (DoS).

Before getting into rate-limit implementation in a distributed system, let’s take a scenario for basic rate limiting.

Let’s say the provider service has the capacity to process 100 requests per second. In this case, a simple rate limit can be implemented on the server side, where it accepts 100 requests in a second and drops any additional requests. The rate limit can be in the form of requests accepted per second, per minute, or per day. Additionally, the rate limit can be applied on a per-API basis: say the “/product” endpoint accepts 100 requests per minute, while the “/user” endpoint accepts 200. The rate limit can also be implemented per client or user. For example, a client_id is sent as part of the header, and 10 requests per minute are allowed for each unique client.

The above scenario talks about a very simple client-server implementation. But in the real world, the setup will be more complex with multiple clients and scaled-up services deployed behind a load balancer, usually being accessed through a proxy or an API gateway.

Before getting into the complexities of distributed systems and implementing rate limiting there, let’s take a look at some of the basic algorithms used to implement rate limiting.

Token Bucket: To start with, think of a bucket full of tokens. Every time a request comes in, it takes a token from the bucket. New tokens are added to the bucket at a given interval. When the bucket has no more tokens, requests are throttled. Implementation-wise it is a simple approach: a counter is set to the maximum allowed limit, each incoming request checks the counter value and decreases it by one, and once the counter reaches zero, requests are throttled. The counter is updated/reset based on the rate-limiting condition.
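Here is a minimal, illustrative token-bucket limiter in Python (the class and parameter names are mine, and a production version would need more care around concurrency and clock handling):

```python
import threading
import time

class TokenBucket:
    """Illustrative token-bucket limiter; not hardened for production use."""

    def __init__(self, capacity: int, refill_per_second: float):
        self.capacity = capacity
        self.refill_per_second = refill_per_second
        self.tokens = float(capacity)
        self.last_refill = time.monotonic()
        self.lock = threading.Lock()

    def allow(self) -> bool:
        with self.lock:
            now = time.monotonic()
            # Lazily add the tokens accrued since the last call, capped at capacity.
            elapsed = now - self.last_refill
            self.tokens = min(self.capacity,
                              self.tokens + elapsed * self.refill_per_second)
            self.last_refill = now
            if self.tokens >= 1:
                self.tokens -= 1  # consume one token for this request
                return True
            return False          # bucket empty: throttle

bucket = TokenBucket(capacity=100, refill_per_second=100)  # ~100 requests/second
print(bucket.allow())  # True while tokens remain; False would mean HTTP 429
```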

Leaky Bucket: A modification of the token bucket algorithm, where requests are processed at a fixed rate. Think of the implementation as a fixed-length queue, to which requests are added and from which they are processed at a constant rate.
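A similarly hedged sketch of the leaky bucket: a bounded queue accepts requests, and a worker drains it at a constant rate. Names and numbers are illustrative:

```python
import queue
import threading
import time

class LeakyBucket:
    """Illustrative leaky-bucket limiter: a bounded queue drained at a fixed rate."""

    def __init__(self, capacity: int, drain_per_second: float):
        self.requests = queue.Queue(maxsize=capacity)
        self.interval = 1.0 / drain_per_second
        threading.Thread(target=self._drain, daemon=True).start()

    def submit(self, request) -> bool:
        try:
            self.requests.put_nowait(request)  # accept while the bucket has room
            return True
        except queue.Full:
            return False                       # bucket full: throttle

    def _drain(self):
        while True:
            request = self.requests.get()      # blocks until something is queued
            print("processing", request)       # stand-in for the real work
            time.sleep(self.interval)          # enforce the constant output rate

limiter = LeakyBucket(capacity=100, drain_per_second=10)
limiter.submit({"path": "/product"})
```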

In a distributed system, it makes sense to implement the rate-limiting algorithm at the proxy or gateway layer, as it helps us fail fast and avoids unnecessary traffic on the backend. Secondly, implementing it in a centralized place takes the complexity away from backend services: they need not implement rate limiting themselves and can focus on their core business logic.

Implementing rate limiting in the above-mentioned distributed design has its own challenges. For example, with N machines running the proxy, how do we enforce 100 requests per second for a service? Do we give each machine a 100/N quota? But load balancers do not guarantee that kind of even distribution of incoming requests.
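One common answer is to keep the counters in a shared store so all proxy nodes see the same numbers. Below is a rough fixed-window sketch using Redis and the redis-py client; the key scheme, the limit, and a locally reachable Redis instance are all assumptions of mine:

```python
import time

import redis  # third-party client: pip install redis

r = redis.Redis(host="localhost", port=6379)  # assumed shared Redis instance

LIMIT = 100  # allowed requests per one-second window (made-up number)

def allow(client_id: str) -> bool:
    # One counter per client per one-second window, shared by every proxy node.
    key = f"ratelimit:{client_id}:{int(time.time())}"
    count = r.incr(key)      # atomic increment, consistent across all nodes
    if count == 1:
        r.expire(key, 2)     # let stale window keys clean themselves up
    return count <= LIMIT
```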

Capital Budgeting- Investment Decisions

At times corporations need to make investment decisions. These decisions are important as they help firms build up their assets and future cash flows, while at the same time requiring considerable investment.

An important aspect of these investments is the time value of money, i.e. cash received earlier has more value than cash received at a later time.

Stages in Capital Budgeting

  • Stage 1: Investment screening and selection
  • Stage 2: Capital budget proposal
  • Stage 3: Budgeting approval and authorization
  • Stage 4: Project tracking
  • Stage 5: Post-completion audit

Financial Appraisal tools for Capital Budgeting

Payback Method: The payback period is the number of years it takes to recover the project cost. The payback method helps in understanding a project’s risk and liquidity and is simple to apply. On the downside, it does not consider the time value of money (TVM) and ignores cash flows after the payback period.

An alternative to the payback method is discounted payback, where instead of the exact cash flow (CF), a discounted CF is considered to take care of TVM.
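As a made-up numeric illustration of the payback idea (not from the original post):

```python
from itertools import accumulate

def payback_period(initial_cost, yearly_cash_flows):
    """Return the first year whose cumulative cash flow recovers the cost."""
    for year, running_total in enumerate(accumulate(yearly_cash_flows), start=1):
        if running_total >= initial_cost:
            return year
    return None  # cost is never recovered within the horizon

print(payback_period(1000, [400, 400, 400, 400]))  # -> 3 (recovered in year 3)
```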

NPV: An important tool to evaluate projects is NPV, or Net Present Value. In simple terms, NPV is the difference between the present value of cash inflows and the present value of cash outflows over a period of time. If NPV>0, the project can be considered for acceptance.

– investopedia.com

Any project with NPV>0 is profitable. The higher the NPV, the more profitable the project. So in the case of mutually exclusive projects (only one can be chosen), the one with the higher NPV is preferred.
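To make the definition concrete, a small illustrative calculation (the project numbers are invented):

```python
def npv(rate, cash_flows):
    """NPV = sum of CF_t / (1 + r)^t, where cash_flows[0] is the (negative)
    initial investment at t = 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical project: invest 1000 today, receive 400 a year for 4 years.
flows = [-1000, 400, 400, 400, 400]
print(round(npv(0.10, flows), 2))  # -> 267.95; NPV > 0, so acceptable
```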

IRR or Internal Rate of Return: “The internal rate of return (IRR) is a metric used in financial analysis to estimate the profitability of potential investments. IRR is a discount rate that makes the net present value (NPV) of all cash flows equal to zero in a discounted cash flow analysis.” – investopedia.com. A higher IRR makes the project more desirable.

To calculate IRR, set NPV=0 in the NPV formula mentioned above and solve for the discount rate.

If IRR>WACC (Weighted Average Cost of Capital), the project is profitable.
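IRR generally has no closed form, so it is found numerically. Here is a rough bisection sketch that reuses the npv() helper above and assumes the NPV changes sign exactly once between the bounds:

```python
def irr(cash_flows, low=0.0, high=1.0):
    """Bisect for the rate where NPV = 0; assumes npv(low) > 0 > npv(high)."""
    for _ in range(100):
        mid = (low + high) / 2
        if npv(mid, cash_flows) > 0:
            low = mid   # NPV still positive: the root lies at a higher rate
        else:
            high = mid
    return (low + high) / 2

print(round(irr([-1000, 400, 400, 400, 400]), 4))  # -> ~0.2186, i.e. ~21.9%
```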

Sunk Cost: A sunk cost is a cost that has already been incurred and, as such, exists irrespective of whether the project is undertaken or not, for example, salaries of existing employees. This cost should not be considered part of the project cash flows.

Opportunity Cost: The value forgone by using a resource for the project instead of its next-best alternative. For example, if the company owns land that will be used to set up a factory for the current project, the value of that land is added to the project’s cost.

Profitability Index: When comparing multiple projects of different sizes, directly comparing NPV might not make sense, as one project might be worth 10,000 and another 1,000,000. The profitability index (PI) is calculated as the present value of future cash flows divided by the initial investment (equivalently, 1 + NPV/initial investment) and tells us the value generated per dollar invested. PI > 1 means the project is profitable.

SQL vs NoSQL

An old video but still relevant

Key Takeaways

  • Complex SQL queries/joins: SQL
  • Transaction management / ACID properties: SQL (commit and rollback come by default, whereas you have to handle them yourself in NoSQL)
  • Huge quantities of data / fast scalability: NoSQL
  • Write-heavy workloads, e.g. a logging system: NoSQL
  • Read-heavy workloads with queries and indexes: SQL
  • Fixed schema: SQL; flexible schema: NoSQL (ALTER TABLE statements are costly and have restrictions)
  • JPA/Hibernate/Django support SQL by default
  • Archiving and managing huge data: NoSQL

Azure Networking – 2

User-defined routes
You can use a user-defined route to override the default system routes so traffic can be routed through firewalls or NVAs.

For example, you might have a network with two subnets and want to add a virtual machine in the perimeter network to be used as a firewall. You can create a user-defined route so that traffic passes through the firewall and doesn’t go directly between the subnets.

When creating user-defined routes, you can specify these next hop types:

  • Virtual appliance: A virtual appliance is typically a firewall device used to analyze or filter traffic entering or leaving your network. You can specify the private IP address of a NIC attached to a virtual machine (with IP forwarding enabled on the NIC), or the private IP address of an internal load balancer.
  • Virtual network gateway: Use when you want traffic for a specific address prefix routed to a virtual network gateway. The virtual network gateway must be of the VPN type.
  • Virtual network: Use to override the default system route within a virtual network.
  • Internet: Use to route traffic for a specified address prefix to the internet.
  • None: Use to drop traffic sent to a specified address prefix.

If there are multiple routes with the same address prefix, Azure selects the route based on the type in the following order of priority:

  • User-defined routes
  • BGP routes
  • System routes

A network virtual appliance (NVA) is a virtual appliance that can provide various network functions, such as:

  • a firewall
  • a WAN optimizer
  • application-delivery controllers
  • routers
  • load balancers
  • IDS/IPS
  • proxies

Azure Networking

VNet Peering: Virtual network peering enables you to seamlessly connect two Azure virtual networks. Once peered, the virtual networks appear as one, for connectivity purposes. There are two types of VNet peering.

  • Regional VNet peering connects Azure virtual networks in the same region.
  • Global VNet peering connects Azure virtual networks in different regions.

A VPN gateway is a specific type of virtual network gateway that is used to send encrypted traffic between an Azure virtual network and an on-premises location over the public Internet. You also use a VPN gateway to send encrypted traffic between Azure virtual networks over the Microsoft network.

  • Site-to-site connections connect on-premises datacenters to Azure virtual networks.
  • VNet-to-VNet connections connect Azure virtual networks (custom).
  • Point-to-site (User VPN) connections connect individual devices to Azure virtual networks.

There are two types of load balancers: public and internal.

A public load balancer maps the public IP address and port number of incoming traffic to the private IP address and port number of the VM. Mapping is also provided for the response traffic from the VM. By applying load-balancing rules, you can distribute specific types of traffic across multiple VMs or services. For example, you can spread the load of incoming web request traffic across multiple web servers.

An internal load balancer directs traffic to resources that are inside a virtual network or that use a VPN to access Azure infrastructure.

Application gateway: There are two primary methods of routing traffic: path-based routing and multiple-site routing.

  • Path-based: /images, /videos
  • Multi-site: kamalmeet.com, bizt.com

Gateway transit
You can connect to your on-premises network from a peered virtual network if you enable gateway transit on a virtual network that has a VPN gateway. With gateway transit, you get on-premises connectivity without deploying virtual network gateways in all your virtual networks.

Overlapping address spaces
When you connect networks, whether within Azure or between Azure and your on-premises environment, the IP address spaces can’t overlap. This is also true for peered virtual networks.

A is the host record and is the most common type of DNS record. It maps the domain or hostname to the IP address.
CNAME is a Canonical Name record that’s used to create an alias from one domain name to another domain name.
MX is the mail exchange record. It maps mail requests to your mail server, whether hosted on-premises or in the cloud.
TXT is the text record. It’s used to associate text strings with a domain name. Azure and Microsoft 365 use TXT records to verify domain ownership.
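As an aside (not part of the original notes), you can inspect these record types with the third-party dnspython package; a sketch, assuming the package is installed and the domain exists:

```python
import dns.resolver  # third-party package: pip install dnspython

# Look up a few common record types; example.com is a placeholder domain.
for record_type in ("A", "MX", "TXT"):
    try:
        for answer in dns.resolver.resolve("example.com", record_type):
            print(record_type, answer.to_text())
    except dns.resolver.NoAnswer:
        print(record_type, "no records found")
```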

Azure Application Insights

Application Insights is aimed at the development team, to help you understand how your app is performing and how it’s being used. It monitors:

Request rates, response times, and failure rates – Find out which pages are most popular, at what times of day, and where your users are. See which pages perform best. If response times and failure rates rise when there are more requests, perhaps you have a resourcing problem.
Dependency rates, response times, and failure rates – Find out whether external services are slowing you down.
Exceptions – Analyze the aggregated statistics, or pick specific instances and drill into the stack trace and related requests. Both server and browser exceptions are reported.
Performance counters from your Windows or Linux server machines, such as CPU, memory, and network usage.
Diagnostic trace logs from your app – so that you can correlate trace events with requests.
Custom events and metrics that you write yourself in the client or server code, to track business events such as items sold or games won.
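As an illustration (my own, not from the original notes), custom events and metrics can be sent from Python with the older applicationinsights SDK; the instrumentation key and event names below are placeholders:

```python
from applicationinsights import TelemetryClient  # pip install applicationinsights

# Placeholder instrumentation key; use your own Application Insights resource's key.
tc = TelemetryClient("00000000-0000-0000-0000-000000000000")

tc.track_event("item_sold", {"sku": "ABC-123"})  # business event with properties
tc.track_metric("games_won", 1)                  # custom metric value
tc.flush()                                       # push telemetry before exiting
```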