DORA Metrics

With the popularity of the Agile development approach, frequent deployments to production have become commonplace. DevOps teams manage these deployments, sometimes shipping multiple times on the same day. With such frequent deployments, it is important to define and measure success criteria, and DevOps teams use DORA metrics to measure their performance.

Deployment Frequency: How often software is successfully released to production.

Lead Time for Changes: The time it takes for a committed change to reach production.

Mean Time to Recovery: The time between an interruption caused by a deployment or system failure and complete recovery.

Change Failure Rate: How often deployed changes lead to failures in production.
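As an illustration, given a deployment history, three of these metrics can be computed directly. The record fields and all the numbers below are made up for the sketch:

```java
import java.util.List;

public class DoraMetricsSketch {
    // Hypothetical record of one production deployment
    record Deployment(boolean causedFailure, double leadTimeHours) {}

    public static void main(String[] args) {
        // Made-up history for one month: 4 deployments, 1 caused a failure
        List<Deployment> deploys = List.of(
                new Deployment(false, 8.0),
                new Deployment(true, 30.0),
                new Deployment(false, 10.0),
                new Deployment(false, 12.0));

        // Deployment Frequency: deployments per measurement window
        int frequency = deploys.size();

        // Change Failure Rate: share of deployments that led to failures
        double changeFailureRate = 100.0
                * deploys.stream().filter(Deployment::causedFailure).count() / deploys.size();

        // Lead Time for Changes: average commit-to-production time
        double avgLeadTimeHours = deploys.stream()
                .mapToDouble(Deployment::leadTimeHours).average().orElse(0);

        System.out.println(frequency + " deploys, " + changeFailureRate
                + "% failure rate, " + avgLeadTimeHours + "h avg lead time");
    }
}
```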


Java Collection Framework and Generics

As the name suggests, a Java Collection is a group of objects represented as a single unit. The idea is to provide a set of operations on the logically grouped elements, like searching, sorting, etc.

[Image: Collection framework hierarchy]

As the image above shows, there are two core interfaces: Collection and Map. Collection is further extended by the List, Queue, and Set interfaces. For each of these interfaces, there is a set of concrete classes implementing the functionality.

Let’s take a very simple example

Collection values = new ArrayList();
values.add(10);    // an Integer
values.add("ten"); // a String also compiles on a raw collection

We can see that the Collection interface lets us reference a new ArrayList, which gives us flexibility over an array because the List can grow at runtime.

But one challenge in the code above is that we are able to add numbers as well as Strings to the same ArrayList. This is in contrast with the type safety provided by Java. To solve this, Java provides Generics, which give us control over the types of elements that can be added to a collection.

Generics Example

class MyClass<T> {
    T value;

    public T getValue() {
        return value;
    }

    public void setValue(T value) {
        this.value = value;
    }
}
Coming back to our earlier code for creating collections

Collection<Integer> values = new ArrayList<>();

In this example, the compiler now prevents us from adding a String to values.

What if we want to get or add an element at a particular index in the ArrayList? The List interface adds these features on top of Collection. As already depicted in the hierarchy, List inherits from Collection and adds features on top of it.

List<Integer> values = new ArrayList<>();
values.add(10);
int num = values.get(0); // 10

Iterating over a Collection

There are multiple ways to iterate over a collection

List<Integer> values = new ArrayList<>();
Iterator<Integer> i = values.iterator();
while (i.hasNext()) {
    System.out.println(i.next());
}

Another way is to use the enhanced for loop

for (Integer num : values) {
    System.out.println(num);
}

Or we can use Streams
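For example, a stream lets us iterate and transform in one pipeline (the numbers here are arbitrary):

```java
import java.util.List;
import java.util.stream.Collectors;

public class StreamIterationDemo {
    public static void main(String[] args) {
        List<Integer> values = List.of(1, 2, 3);

        // Plain iteration via a stream
        values.stream().forEach(System.out::println);

        // Streams can also transform while iterating
        List<Integer> doubled = values.stream()
                .map(n -> n * 2)
                .collect(Collectors.toList());
        System.out.println(doubled); // [2, 4, 6]
    }
}
```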

Comparable vs Comparator

Another important thing one would like to do with a collection is to arrange its elements in sorted order. For example, say we have a List of Student objects, and we want to arrange them by roll number.

The easiest way is to make the class implement Comparable

class Student implements Comparable<Student> {

    int rollno;
    String name;

    public Student(int rollno, String name) {
        this.rollno = rollno;
        this.name = name;
    }

    public int compareTo(Student o) {
        return this.rollno < o.rollno ? -1 : (this.rollno == o.rollno) ? 0 : 1;
    }
}

// In calling code, we can sort a list made up of students
Collections.sort(list);

There can be cases where it is handy to maintain comparisons based on multiple criteria. For example, along with roll number, we may want to sort the list of students by name in alphabetical order. In such cases it makes sense to implement Comparator, as we can create multiple comparators based on our needs.

class NameComparator implements Comparator<Student> {
    public int compare(Student o1, Student o2) {
        return o1.name.compareTo(o2.name);
    }
}

// In calling code, we pass a comparator instance
Collections.sort(list, new NameComparator());
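Putting the two approaches together, here is a runnable sketch; the student names and roll numbers are made up:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class SortDemo {
    static class Student implements Comparable<Student> {
        int rollno;
        String name;

        Student(int rollno, String name) {
            this.rollno = rollno;
            this.name = name;
        }

        // Natural ordering: by roll number
        public int compareTo(Student o) {
            return Integer.compare(this.rollno, o.rollno);
        }
    }

    public static void main(String[] args) {
        List<Student> list = new ArrayList<>(List.of(
                new Student(3, "Alice"),
                new Student(1, "Charlie"),
                new Student(2, "Bob")));

        Collections.sort(list);               // uses compareTo: by roll number
        System.out.println(list.get(0).name); // Charlie (rollno 1)

        list.sort(Comparator.comparing(s -> s.name)); // comparator: by name
        System.out.println(list.get(0).name);         // Alice
    }
}
```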

JWT Token

Authentication and authorization are the most important security features to be implemented for any API. One way to manage this information is through sessions. Once a user logs in, a session is created on the server side with the user's metadata. The problem with this approach is that it is stateful and difficult to scale. Another way is to send back a hash or a key once the user logs in successfully, and have every subsequent request pass this key back. This key is stored along with the user metadata in the database to keep the server stateless. The disadvantage here is an additional database query every time a request comes in.

JWT, or JSON Web Token, solves this problem. A JWT is a compact string made of Base64Url-encoded JSON sections, signed with a key. The signing can be symmetric or asymmetric. A JWT contains 3 sections: a header, a payload, and a signature.

Header: The header JSON normally contains two fields: a type (typ), which is always “JWT”, and an alg field giving the algorithm used to sign the token.

{
  "alg": "HS256",
  "typ": "JWT"
}

Payload: This section contains JSON for any payload or data you want to send. It can contain fields that identify the user and their roles for authentication and authorization. It can also have an issued-at time (iat) and a token expiry time (exp).

{
  "sub": "1234567890",
  "name": "John Doe",
  "iat": 1516239022
}

Signature: The signature is computed over the encoded header and payload using the algorithm mentioned in the header. With a symmetric algorithm such as HS256, the signature can only be generated or verified with the secret key.
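A minimal sketch of how an HS256 token is assembled, using the JDK's built-in HMAC support; the secret here is a made-up demo value, never a real key:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class JwtSketch {
    public static void main(String[] args) throws Exception {
        String headerJson  = "{\"alg\":\"HS256\",\"typ\":\"JWT\"}";
        String payloadJson = "{\"sub\":\"1234567890\",\"name\":\"John Doe\",\"iat\":1516239022}";
        String secret      = "demo-secret"; // hypothetical key for illustration only

        // JWT uses Base64Url encoding without padding
        Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
        String signingInput = enc.encodeToString(headerJson.getBytes(StandardCharsets.UTF_8))
                + "." + enc.encodeToString(payloadJson.getBytes(StandardCharsets.UTF_8));

        // HS256 = HMAC-SHA256 over "<encoded header>.<encoded payload>"
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        String signature = enc.encodeToString(
                mac.doFinal(signingInput.getBytes(StandardCharsets.UTF_8)));

        String jwt = signingInput + "." + signature;
        System.out.println(jwt);
    }
}
```

A verifier holding the same secret recomputes the signature and compares; any change to the header or payload makes the comparison fail.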

[Image: Sample JWT token]


4Ps of Marketing


Product Levels

[Image: Five product levels]

Search Goods: You search and compare characteristics before you buy, for example, in a mobile phone: screen size, camera pixels, features, etc.

Experience Goods: Those you can categorize as good or bad only after experiencing them, for example, a first-time stay in a hotel.

Credence Goods: Those you cannot categorize even after consumption, for example, an online course (you have nothing to compare it to unless you have taken another course on the subject).

Product Life Cycle or PLC


BCG Matrix



The second P of marketing is Promotion, which is the basic means for generating awareness about the product and creating a desire to buy.

6M strategy model

  1. Mission: What is the objective?
  2. Market: Who are my customers?
  3. Message: What will I tell my customers?
  4. Media: How do I reach them?
  5. Money: Budget
  6. Measure: Was the campaign effective?

Two choices of channels

  1. Personal Communication: Usually more effective and lower cost, via telemarketing, emailing, or one-on-one meetings.
  2. Impersonal communication: Mass media via Newspaper or T.V.


The third P of marketing is Place. This gives the convenience of product availability at an arm’s length and helps build trust with customers.


The fourth P of marketing is Price. A fundamental mistake made by companies is to think of price as cost + profit. Ideally, one needs to do market research and competitive analysis to come up with a target price. Once one has a target price, it helps to derive a target cost and figure out how much should be spent on R&D, marketing, manufacturing, etc.

Two common pricing strategies are skimming (enter high and then move to low) and penetration (enter low and move to high).


Going rate pricing is another strategy where current pricing is studied in the market and an average is taken for pricing the product.

Marketing Basics

In the dictionary, Marketing is defined as “the action or business of promoting and selling products or services, including market research and advertising.”

Before getting into depth, let’s take a step back and refer to the famous question asked in the paper “Marketing Myopia” by Levitt: “What business are you really in?”. The question forces one to think beyond what products you are selling to what “needs” of customers you are fulfilling. Marketing is all about understanding customer needs.

“Need” is a state of dissatisfaction. Need is the problem and “want” is the solution. I “need” to communicate, I “want” a phone. A “want” combined with a willingness to pay becomes demand – I am ready to pay for an “iPhone”.

Another important aspect is to understand the “value” that the product adds for the customer. Say I have the option of buying one of two phones; I would like to buy the one that gives more value. Perceived value can be thought of as perceived benefits minus perceived costs.

P(V) = P(B) – P(C)

While choosing a product over other

P(B1)-P(C1) > P(B2) – P(C2)

or B1-B2 > C1- C2

or the extra benefit I am getting is more than the extra cost I am paying.

Segmentation – Targeting – Positioning

Segmentation is about grouping your customers by identifying commonalities in their needs. Segmentation should not be merely geographical (urban/rural, north/south) or demographic (age, sex, education, economic class), but based on personality, values, lifestyle, and behavior.

Segmentation is a three-step process

  1. Clustering
  2. Profiling
  3. Assigning a segment descriptor

Targeting is to select the segments that you will focus on.

There are three common targeting approaches

  1. Mass Market: undifferentiated
  2. Focus on all, but with a segmented approach: differentiated
  3. Focused Strategy: one or two segments

Factors to be considered while targeting

  1. Sales Potential: How much sales can be made for the segment?
  2. Profitability
  3. Consumer Maturity
  4. Competition
  5. Own Abilities

Competition: A less fragmented market can provide healthy competition. HHI or Herfindahl-Hirschman Index can provide a good measure to check how fragmented the market is.

HHI = s1^2 + s2^2 + s3^2 + … + sn^2

where sn = the market share percentage of firm n
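With made-up market shares, the index is just the sum of squared shares:

```java
public class HhiDemo {
    public static void main(String[] args) {
        // Hypothetical market shares (percent) of four firms
        double[] shares = {40, 30, 20, 10};

        double hhi = 0;
        for (double s : shares) {
            hhi += s * s; // square each firm's share and sum
        }
        // 1600 + 900 + 400 + 100 = 3000, indicating a concentrated market
        System.out.println("HHI = " + hhi);
    }
}
```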


Positioning is “what to tell these customers?” so that they choose you.

While positioning one can take two approaches

  1. Point of Parity or PoP: How are we similar to other products?
    1. Category Point of Parity
    2. Competitive Point of Parity
  2. Point of Difference PoD: How are we different from other products?

SWOT Analysis

Strength – Weakness – Opportunity – Threat analysis is a long-standing tool used by product teams to understand their characteristics. An important aspect to take care of is that all your strengths should map to opportunities, and similarly, weaknesses should map to threats.

5C Analysis

Branching Strategies

When one starts a project, one question that needs to be answered immediately is how to manage the code. The branching strategy needs to be finalized; in the agile world, we want to make sure code gets merged and deployed as soon as possible. Continuous Integration and Continuous Delivery require our code to always be in a state that is ready to deploy to production.

There are two branching strategies most commonly used in industry. I have written about these in my book on microservices. Here is an overview.

Feature-Based Branching

The idea is to create a separate branch for each feature. The feature branch gets merged back into the production branch once the feature is implemented and tested completely. An important advantage of this approach is that at any point in time there is no unused code in the production branch.

Single Development Branch 

In this approach, we maintain a single branch into which we keep merging even if a feature is incomplete. We need to make sure the half-built feature code is behind some kind of flag mechanism so that it does not get executed until the feature is complete.
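A minimal sketch of such a flag; the flag name and the environment-variable source are assumptions for illustration:

```java
public class FeatureFlagDemo {
    // Half-built feature stays behind the flag until it is complete
    static String checkout(boolean newCheckoutEnabled) {
        return newCheckoutEnabled ? "new checkout flow" : "old checkout flow";
    }

    public static void main(String[] args) {
        // In a real system the value would come from config or a flag service
        boolean newCheckoutEnabled = Boolean.parseBoolean(
                System.getenv().getOrDefault("NEW_CHECKOUT_ENABLED", "false"));
        System.out.println(checkout(newCheckoutEnabled));
    }
}
```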

Cloud-Native Design with 12 Factor App

There are many guidelines available for building a cloud-native application. One industry-accepted set of guidelines is the twelve factors of the Twelve-Factor App methodology.

In a nutshell, here are the 12 factors that help you build your application in a cloud-native manner.

Codebase: One codebase tracked in revision control, many deploys
Dependencies: Explicitly declare and isolate dependencies
Config: Store config in the environment
Backing services: Treat backing services as attached resources
Build, release, run: Strictly separate build and run stages
Processes: Execute the app as one or more stateless processes
Port binding: Export services via port binding
Concurrency: Scale-out via the process model
Disposability: Maximize robustness with fast startup and graceful shutdown
Dev/prod parity: Keep development, staging, and production as similar as possible
Logs: Treat logs as event streams
Admin processes: Run admin/management tasks as one-off processes
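For instance, the Config factor means reading settings from the environment at startup rather than from files baked into the build; the variable names below are illustrative:

```java
public class ConfigFromEnv {
    public static void main(String[] args) {
        // Factor III (Config): take configuration from the environment,
        // so the same build can run in dev, staging, and production
        String dbUrl = System.getenv()
                .getOrDefault("DATABASE_URL", "jdbc:postgresql://localhost/dev");
        int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));

        System.out.println("db=" + dbUrl + " port=" + port);
    }
}
```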



A normal flow between client and server over an HTTP connection is made up of requests and responses.

[Image: HTTP communication]

The client sends a request to the server, and the server sends back a response. Now, there can be use cases where the server has to push data to the client, for example, a chat application or a stock market tracker. In such scenarios, where the server needs to send data to the client at will, the WebSocket communication protocol solves the problem.

WebSocket provides full-duplex communication channels over a single TCP connection. Both the HTTP and WebSocket protocols are located at layer 7 of the OSI model and depend on TCP at layer 4.

A WebSocket connection string looks like ws://, or wss:// for secured connections.

To achieve the communication, the WebSocket handshake uses the HTTP Upgrade header to change from the HTTP protocol to the WebSocket protocol.

WebSocket handshake

  • The client sends an “upgrade” request as a GET over HTTP/1.1 with the Upgrade header
  • The server responds with 101 Switching Protocols
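On the wire, the handshake looks roughly like this; the key and accept values are taken from the example in RFC 6455:

```
GET /chat HTTP/1.1
Host: example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

After the 101 response, the same TCP connection carries WebSocket frames instead of HTTP messages.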

Once the connection is established, the communication is full duplex, and both client and server can send messages over the established connection.

Challenges with Websockets

  • Proxying is difficult at layer 7. It would mean 2 levels of WebSocket connectivity: client to proxy, and then proxy to backend.
  • Layer 7 load balancing is difficult due to the stateful nature of the communication.
  • Scaling is tough due to the stateful nature. Moving a connection from one backend instance to another means resetting the connection.

Disclaimer: This post was originally posted by me in the cloud community.

HTTP 1 vs 2 vs 3

For years, the Internet has been powered by the HTTP protocol, helping millions of websites and applications deliver content. Let’s take a look at the journey of the HTTP protocol: its past, present, and future.


The current version of the HTTP 1 protocol is actually HTTP 1.1. But let’s start with HTTP 1, which was a simple request-response protocol.

[Image: HTTP 1 flow]

HTTP 1.1

As we can see in the HTTP 1 implementation, one major problem was that a new connection needed to be established for each request. To solve this problem, HTTP 1.1 came up with the keep-alive concept, which allows multiple requests to be sent over a single connection. To gain further speed, HTTP 1.1 clients typically kept 6 TCP connections behind the scenes instead of 1.
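In HTTP 1.1 the connection is persistent by default, and a client can make that explicit with the Connection header:

```
GET /index.html HTTP/1.1
Host: example.com
Connection: keep-alive
```

With Connection: close, the server instead tears the connection down after sending the response.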


Though HTTP 1.1 was much faster than HTTP 1, it had some problems; most importantly, it was not using the TCP connection completely, as each connection was sending one request at a time. This problem was solved in HTTP 2, where multiple concurrent requests can be sent over a single connection.


To achieve these parallel requests over a single HTTP connection, HTTP 2 uses the concept of streams. Each request sent from the client has a unique stream id attached behind the scenes, which helps the client and server match requests to responses. One can think of each stream as an independent channel for communication.


One problem with HTTP 2 is that the streams are defined at the HTTP level. TCP is not aware of the concept and is just sending packets at a lower layer. So if 4 independent requests are sent using 4 different streams, and even a single packet for any one request is lost, TCP holds back all later packets until that one is retransmitted, and all 4 requests wait. This is known as head-of-line blocking.

HTTP 3 plans to solve this problem by implementing HTTP over QUIC instead of TCP. QUIC has the concept of streams built in, so in the above-mentioned scenario, when one packet is lost for one request out of 4, only one stream is impacted and the responses for the other 3 requests are served successfully.
