Author Archives: admin

Design for Accessibility – 1

As a software developer, designer, or architect, when you think about software, there are many non-functional aspects you need to take care of. Accessibility is one such important non-functional aspect, and it can easily get neglected if you are not paying attention.

Before getting into more details, let’s try to understand what accessibility is –

Accessibility enables people with disabilities to perceive, understand, navigate, interact with, and contribute to the web.

Four core Accessibility principles – Perceivable, Operable, Understandable, and Robust.

Information about accessibility testing and compliance

WCAG, or the Web Content Accessibility Guidelines (current version 2.1), gives us a detailed idea of the areas one needs to consider when working on the accessibility of a website.


WCAG 2.1 defines three levels of conformance: A, AA, and AAA.

Figure: WCAG overall structure, with principles in the far-left column, guidelines in the adjacent column, and success criteria across the following three columns: A, AA, and AAA.

WAI-ARIA: Web Accessibility Initiative – Accessible Rich Internet Applications (WAI-ARIA) is a specification developed by the W3C in 2008.

WAI-ARIA, the Accessible Rich Internet Applications Suite, defines a way to make Web content and Web applications more accessible to people with disabilities. It especially helps with dynamic content and advanced user interface controls developed with HTML, JavaScript, and related technologies.

Further Readings

Cloud Native Application Design – Benefits of Microservices

In the last post, I introduced the concept of microservices and why they are a default fit for Cloud Native design. But an important question should be asked: why has microservice-based design gained so much popularity in recent years? What advantages do microservices give us over traditional application design?

Let us go over some advantages of microservices-based design.

Ease of Development: The very first thing visible in a microservices-based architecture is that we have divided the application into smaller services. This also means we can organize our developers into smaller teams, each focusing on one piece of the deliverable, with less code to understand and less code to manage, and hence the teams can be more agile.

Scalability: A monolithic application is difficult to scale, as you are looking at replicating the whole application. In the case of microservices, you can focus on the pieces that actually need to scale. For example, in an eCommerce system, a search microservice that is getting a lot of requests can be scaled independently of the rest of the application.

Polyglot programming: As we have divided the application into smaller pieces, we have the flexibility to choose a different tech stack for each piece. For example, if we know a certain machine-learning microservice is easier to develop in Python while the rest of the application is in Java, it is easy to make that choice.

Single Responsibility Services: As we have divided the application into smaller services, each service is now focusing on one responsibility only.

Testability: It is easier to test smaller pieces of applications than to test them as a whole. For example, in an e-commerce application, if a change is made in the search feature, testing can be focused on this area only.

Agile Development: As teams are dealing with smaller deployables now, microservices are a natural fit for Continuous Integration (CI) and Continuous Deployment (CD), helping achieve a shorter time to market.

Stabler Applications: One advantage a microservices architecture gives us is the ability to categorize our microservices on the basis of criticality. For example, in our e-commerce system, the order placement and shopping cart features will be of higher criticality than, say, a recommendation service. This way we can focus the availability, scalability, and maintenance effort on the areas that directly impact user experience.

Cloud Native Application Design – Microservices

I cannot tell you whether the cloud popularized microservice-based design or vice versa, but the two go hand in hand. To understand this, let us take a quick trip through the history of the most popular application designs of the past few years.

Monolithic design: If you are in the software Industry for more than a decade, you know that there was a time when the whole application was written as one single deployable unit. This does not sound bad if the application itself is small, but as the size of the application grows, it becomes difficult to manage, test, and enhance.

Some of the important challenges with monolithic design were: difficulty in updating, with even a small change needing complete testing and redeployment; being stuck with one tech stack; and difficult scaling.

Service-Oriented Architecture: The challenges of monolithic design paved the way for an architecture where code was organized into services, hence the name Service-Oriented Architecture, or SOA. For example, in an e-commerce platform, we would have services for product management, inventory management, shipping management, and so on. This design helped in better organizing the code, but the final application was still compiled into one deployable and deployed to a single application server. Because of this limitation, SOA inherited most of the challenges of monolithic design.

Microservices: Though SOA inherited challenges from the monolithic era, one important change it brought was to the mindset of developers and architects. We started looking at the application not as a single piece but as a set of features coming together to serve a common purpose. With the cloud, infrastructure-related limitations were reduced to a great extent, giving us the freedom to further divide the application not only at design time but also at run time.

A major change that comes with microservices is that you break your application into a smaller set of services (hence the term microservice), each of which solves one piece of the problem independently. Each microservice is designed, developed, tested, deployed, and maintained independently. This solves most of the problems we faced in monolithic design, because now you can scale, manage, develop (and hence use independent tech stacks), and test these microservices independently.

Cloud Native Application Design – Type of Services

At a high level, cloud-provided services can be categorized into the following – Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). Before you design an application for the cloud, it is important to get a basic understanding of these types, to make a call on what application best fits into which kind of service.

Before the cloud, when we owned the infrastructure on which applications were deployed, we were responsible for buying and managing hardware (usually a server or PC), then deploying operating systems, firewalls, application servers, databases, and other software for the applications to run. Everything, from the power supply to software patches, had to be handled manually. This changed with the introduction of the cloud.

Infrastructure as a Service or IaaS is the most basic set of services provided by cloud providers, where you get only the bare infrastructure. For example, you take a virtual machine and install an OS on it (mostly you will get a VM with a pre-installed OS, but you are responsible for managing patches and upgrades). On top of that, you are responsible for installing and managing any tools and software your applications need.

Platform as a Service or PaaS provides a platform on top of which you can deploy your applications. In this case, the only things you are responsible for are the code or application and the data. Amazon Elastic Beanstalk and Azure Web Apps are examples of such services; they help you deploy your applications directly, without worrying about the underlying hardware or software.

Software as a Service or SaaS, as the name suggests, refers to services where you get the whole functionality off the shelf; all you need to do is log in and use the software. Any email service, like Gmail, is a good example: you just log in and use it.

Azure: Designing Data Flows

Data Flow stages can be defined at a high level as

Ingest -> Transform -> Store -> Analyze.

Data can be processed as a batch process, which is not real-time processing of data; for example, one collects sales data for the whole day and runs analytics at the end of the day. Stream processing is near real time, where data is processed as it is received.

ELT vs ETL: Extract, Load, and Transform vs Extract, Transform, and Load. As the terms suggest, in ELT data is loaded into storage first and then transformed, whereas in ETL data is transformed first and then loaded. With large amounts of data, ETL can be difficult, as transforming that much data in real time is hard, so ELT might be preferred.
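The ordering difference can be sketched with a toy Python example; the raw records and the transform() step below are hypothetical stand-ins for real ingestion and cleanup logic.

```python
# Toy sketch contrasting ETL and ELT ordering. "Storage" here is just
# a Python list, and transform() is a hypothetical cleanup step.
raw_events = ["  Sale:10 ", " sale:25", "SALE:5  "]

def transform(record: str) -> str:
    # Normalize whitespace and casing.
    return record.strip().lower()

# ETL: transform first, then load -- storage only ever sees clean data.
etl_store = [transform(r) for r in raw_events]

# ELT: load the raw data first, transform later (e.g. at query time) --
# better suited to very large volumes where up-front transformation
# is impractical.
elt_store = list(raw_events)                   # loaded as-is
elt_view = [transform(r) for r in elt_store]   # transformed on demand

print(etl_store == elt_view)  # True: same result, different ordering
```

Either way the analytical view is the same; the choice is about where (and when) the transformation cost is paid.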

Data Management in the Cloud: Azure provides multiple solutions for data flow. When choosing a solution, one needs to take care of the following aspects: security, storage type (IaaS vs PaaS; Blob, File, Database, etc.), performance, cost, redundancy, availability, etc.

Let’s take a look at some important solutions by Azure

Azure Data Lake Storage: Azure Data Lake is a scalable data storage and analytics service. 

Azure Data Factory: Azure Data Factory is Azure’s cloud ETL service for scale-out serverless data integration and data transformation.

Azure Database Services: Azure provides various options for RDBMS and No-SQL database storage.

Azure HDInsight: a managed open-source analytics service that runs Hadoop, Spark, Kafka, and more.

Azure DataBricks: Azure Databricks is a fast, easy and collaborative Apache Spark-based big data analytics service designed for data science and data engineering.

Azure Synapse Analytics: Azure Synapse Analytics is a limitless analytics service that brings together data integration, enterprise data warehousing and big data analytics. It gives you the freedom to query data on your terms, using either serverless or dedicated options—at scale.

Here is how a typical data flow looks in Azure.

Figure: Power BI and Azure Synapse architecture.

Web 3.0 – Potential and Challenges

This post is inspired by the following article:

The article discusses Web3.0: where it is right now, some of its possible use cases, and the challenges foreseen. The web has seen an interesting journey over the last couple of decades, from Web 1 to 2 to 3.

 While the first incarnation of the web in the 1980s consisted of open protocols on which anyone could build—and from which user data was barely captured—it soon morphed into the second iteration: a more centralized model in which user data, such as identity, transaction history, and credit scores, are captured, aggregated, and often resold. Applications are developed, delivered, and monetized in a proprietary way; all decisions related to their functionality and governance are concentrated in a few hands, and revenues are distributed to management and shareholders.

Web3, the next iteration, potentially upends that power structure with a shift back to users. Open standards and protocols could make their return. The intent is that control is no longer centralized in large platforms and aggregators, but rather is widely distributed through “permissionless” decentralized blockchains and smart contracts.

Web 3.0 is the future, whether you like it or not. At this point, it is difficult to predict how things will pan out, but the change will be significant.

The disruptive premise of Web3 is built on three fundamentals: the blockchain that stores all data on asset ownership and the history of conducted transactions; “smart” contracts that represent application logic and can execute specific tasks independently; and digital assets that can represent anything of value and engage with smart contracts to become “programmable.”

Web3 applications and use cases are built on top of three technology fundamentals: blockchain, smart contracts, and digital assets.

At this point, we are seeing the finance industry as one potential frontrunner in terms of use cases available for Web3.0 and Blockchain. The following image shows how lending is one area where Web3.0 can have an impact.

Web3 could represent a paradigm shift in business models for digital applications.

The opportunities presented are not without challenges. There needs to be clarity on the responsibility owned by various parties.

The chief challenge is regulatory scrutiny and outlooks. Regulators in many countries are looking to issue new guidance for Web3 that balances the risks and the innovative potential, but the picture remains unsettled. For now, there is a lack of clarity—and jurisdictional consistency—about classifying these assets, services, and governance models. For example, smart contracts are not yet legally enforceable. 

Cloud Native Application Design – Cloud Computing

I introduced Cloud Native Application Design and cloud computing in the last post. As a next step, we will discuss cloud computing a little more. Understanding cloud computing is the first step toward building cloud-native applications.

So what is cloud computing? Cloud computing is a term we use to refer collectively to the services provided by cloud service providers. These services include storage, compute, networking, security, etc. Let me borrow the official definition from Wikipedia.

Cloud computing is the on-demand availability of computer system resources, especially data storage (cloud storage) and computing power, without direct active management by the user. Large clouds often have functions distributed over multiple locations, each location being a data center. Cloud computing relies on sharing of resources to achieve coherence and typically uses a “pay-as-you-go” model, which can help in reducing capital expenses but may also lead to unexpected operating expenses for users.

The definition covers a few important aspects.

On-Demand: You can activate or deactivate the services as per your need. You are renting the services and not buying any hardware. For example, you can activate/rent a Virtual Machine on the cloud, use it, and then kill it (you take compute capacity from the cloud pool and return it back when you are done).

Multiple Types of Services: The definition talks about compute and storage (these are basic, and one can argue most other services are built on them), but if you go to any popular cloud provider’s portal, you will see a much larger set of services: databases, security, AI/ML, IoT, etc.

Data Centers: Cloud service providers have multiple data centers spread across various geographical regions of the world; each region has multiple data centers (usually organized into zones, with a zone having one or more data centers). This kind of setup helps in replicating resources for scalability, availability, and performance.

Shared Resources: As already mentioned, when on the cloud, you do not own resources and you share infrastructure with other users (though cloud providers also offer dedicated infrastructure at a higher price). This helps cloud providers manage resources at scale, keeping prices low.

Pay-as-you-go: Probably one of the most important factors for the popularity of the cloud. You pay for your usage only. Going back to our previous example, you created a VM for X amount of time, so you pay only for that time.
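As a back-of-the-envelope illustration, the sketch below compares renting a VM only while it is used against keeping it running all month; the hourly rate is a made-up figure for illustration, not any provider’s actual pricing.

```python
# Hypothetical pay-as-you-go arithmetic -- the rate is illustrative,
# not real cloud pricing.
hourly_rate = 0.10        # assumed $/hour for a small VM
hours_used = 8 * 22       # e.g. running only during working hours
hours_in_month = 24 * 30

pay_as_you_go = hourly_rate * hours_used    # pay only for actual usage
always_on = hourly_rate * hours_in_month    # equivalent of idling 24x7

print(round(pay_as_you_go, 2), round(always_on, 2))
```

Under these assumed numbers, paying only for usage costs a fraction of keeping the VM running around the clock, which is exactly the appeal of the pay-as-you-go model.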

Unexpected Operating Expenses: Though the cloud is popular because it can help reduce overall infrastructure costs, it can also go the other way if you are not careful about managing your resources. For example, unused resources or unused capacity add to your bills, while under-provisioned resources impact your services and user experience. Striking the correct balance becomes important and needs expertise, which itself adds to operating costs.

Blockchain – Basics

If we look at the word blockchain, it simplifies to a chain of blocks. A block in a blockchain is a node that contains data, a hash, and the previous block’s hash. This previous-hash information is what connects the nodes to each other, forming the chain.

A blockchain is a type of distributed ledger technology (DLT) that consists of a growing list of records, called blocks, that are securely linked together using cryptography. Each block contains a cryptographic hash of the previous block, a timestamp, and transaction data.

Blockchain is a shared, immutable ledger that facilitates the process of recording transactions and tracking assets in a business network.

There are three key terms here –

Ledger: In simple terms, a ledger is a book of records. In this case, a record is a block. And blocks chained to each other (blockchain) form the ledger.

Distributed: This is where things get interesting. The ledger does not depend on one machine; it is available on multiple machines, hence distributed in nature.

Immutable: Records or blocks are immutable. This is because updating a record changes its hash, which breaks the link to every following block (and hence corrupts the whole chain).
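The chain described above can be sketched in a few lines of Python. This is a minimal illustration of the hash linking only, assuming SHA-256 and blocks that hold just data plus hashes; a real blockchain also has timestamps, consensus, proof of work, and so on.

```python
# Minimal sketch of a hash-linked chain of blocks: each block stores
# its data, its own hash, and the previous block's hash.
import hashlib

def block_hash(data: str, prev_hash: str) -> str:
    return hashlib.sha256((data + prev_hash).encode()).hexdigest()

def build_chain(records):
    chain = []
    prev_hash = "0" * 64  # the genesis block has no predecessor
    for data in records:
        h = block_hash(data, prev_hash)
        chain.append({"data": data, "prev_hash": prev_hash, "hash": h})
        prev_hash = h
    return chain

def is_valid(chain) -> bool:
    # Recompute every hash; a tampered block breaks the link to all
    # blocks that follow it.
    prev_hash = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev_hash:
            return False
        if block_hash(block["data"], prev_hash) != block["hash"]:
            return False
        prev_hash = block["hash"]
    return True

chain = build_chain(["tx1", "tx2", "tx3"])
print(is_valid(chain))            # True
chain[1]["data"] = "tx2-tampered"
print(is_valid(chain))            # False: the chain is now corrupted
```

Notice that changing a single record invalidates the whole chain from that point onward, which is exactly the immutability property described above.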

Cloud Native Application Design – Basics

Cloud native application, or cloud native design, or cloud native architecture, or... well, the list is endless, and unless you are living under a rock, you hear these terms almost daily. The critical question is: do we understand what the term “cloud-native” means? Most often, “cloud-native” is confused with anything that is deployed on the cloud, and nothing could be more wrong.

What is “Cloud-Native”?

Cloud-native architecture and technologies are an approach to designing, constructing, and operating workloads that are built in the cloud and take full advantage of the cloud computing model.

Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.

Let me just say, “Cloud Native” means “Built for Cloud”. You start by keeping in mind what Cloud can do for you while designing your application.

Easier said than done. Most of the time, you will design a solution and then try to fit it into the cloud. Another challenge might be your understanding of the cloud. Cloud (or better we should call it cloud computing) itself can be a complex concept. So let us take a moment to demystify it, and let’s start with how industry leaders define it.

What is Cloud-Computing?

Simply put, cloud computing is the delivery of computing services—including servers, storage, databases, networking, software, analytics, and intelligence—over the Internet (“the cloud”) to offer faster innovation, flexible resources, and economies of scale. You typically pay only for cloud services you use, helping lower your operating costs, run your infrastructure more efficiently and scale as your business needs change.

Cloud computing is the on-demand delivery of IT resources over the Internet with pay-as-you-go pricing. Instead of buying, owning, and maintaining physical data centers and servers, you can access technology services, such as computing power, storage, and databases, on an as-needed basis from a cloud provider like Amazon Web Services (AWS).

In short, cloud computing is a set of services, in the form of compute capabilities, storage, networking, security, etc., that one can use off the shelf. As an extension, we can say cloud-native design means designing our system to take full advantage of these services.

Monitoring Microservices: Part 4

Quick Recap: In past posts, we have talked about the basics of monitoring and the Golden Signals. We also talked about tools like logs and metrics, which help us create dashboards and alerts for monitoring our applications.

In this final post on monitoring microservices, I will cover the basic areas one should consider monitoring when working with a microservices-based application. One needs to create dashboards and alerts around these areas based on application requirements and SLAs.

Service Health: The overall health of the service, which will be a combination of the health of its microservices. Ideally, you will put together a single dashboard that provides health details of all critical areas/microservices, so you can confirm that end-to-end functionality is working fine. For example, the health of a service can be a combination of aspects like CPU load for critical microservices, the number of failing requests (compared to the success rate), response time (based on the SLA), etc.

API Health/ Feature Health: API-level health, i.e. how various APIs are performing. You can create separate dashboards for monitoring individual APIs/microservices.

Infrastructure Health: Another important aspect for monitoring is infrastructure, i.e. CPU, Memory, Network, IO, etc.

Cost: Cloud computing gives us many advantages, but at the same time also brings in additional complexities, managing cost is one of them. Fortunately, most cloud providers give easy ways to manage and monitor costs.

Errors and Exceptions: The number of errors and exceptions thrown by your application and code is an important parameter for managing overall service health. For example, after a fresh deployment, if you see an increase in errors or exceptions, you know there is an issue.

Performance: Another important aspect is the performance of critical features. You would like to monitor how your services and APIs are performing and what response time is provided to clients.

Traffic: An increase or decrease in traffic impacts various aspects of the application, including scalability, infrastructure, performance, error rate, security (unexpected traffic might be due to a bot), etc. So it is important to track traffic details.

Success Rate: The success (or error) rate helps us understand the overall health of the system. A simple ratio would be 2XX requests to errored (4XX + 5XX) requests.
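As a minimal sketch, the ratio above can be computed directly from status-code counts; the numbers below are made up for illustration.

```python
# Compute a success rate from (made-up) HTTP status-code counts.
from collections import Counter

status_counts = Counter({"2xx": 970, "4xx": 20, "5xx": 10})

errored = status_counts["4xx"] + status_counts["5xx"]
total = status_counts["2xx"] + errored
success_rate = status_counts["2xx"] / total

print(f"{success_rate:.1%}")  # 97.0%
```

In practice, a metrics system (Prometheus, Azure Monitor, etc.) would aggregate these counts over a time window and alert when the rate drops below a threshold.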

Dependencies – Upstream/ Downstream: In a microservice-based architecture, a service cannot live in isolation. So it is important to track the health and performance of its dependencies.

Request Tracing: In a microservices-based architecture, where a service calls another service, which in turn calls another, and so on, it is difficult to trace errors and performance issues. Proper request traceability helps us monitor and debug such issues.
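The idea can be sketched as follows: generate a trace ID at the edge and pass it along with every downstream call, so all log lines for one request share the same ID. The service names and the `X-Trace-Id` header here are illustrative; real systems typically follow a standard such as W3C Trace Context and use dedicated tracing tools.

```python
# Sketch of propagating a correlation (trace) ID across service calls
# so a single request can be followed end to end. Service names and
# the header name are illustrative, not from any specific framework.
import uuid

def handle(service: str, headers: dict, log: list) -> dict:
    # Reuse the incoming trace ID, or start a new trace at the edge.
    trace_id = headers.get("X-Trace-Id") or str(uuid.uuid4())
    log.append(f"{service} trace={trace_id}")
    # Return the headers to pass to the next downstream call.
    return {"X-Trace-Id": trace_id}

log = []
headers = handle("api-gateway", {}, log)        # trace starts here
headers = handle("order-service", headers, log)
headers = handle("payment-service", headers, log)

# All three log lines share one trace ID, so the whole request path
# can be reconstructed from the logs.
print(len({line.split("trace=")[1] for line in log}))  # 1
```

Searching the aggregated logs for that single trace ID then reconstructs the full call path, which is what makes debugging cross-service errors and latency practical.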