Tag Archives: Design

Steps for Designing a system from scratch

You have joined a new project and have been provided with some requirements. How would you go about designing the new system?

We find ourselves in this situation a lot. So here I am trying to create a step-by-step guide for going from a requirements doc to a finalized system architecture.

Stage 0: Before getting started with the design process, we need to make sure of the following.

– Do we have a clear understanding of the requirements?
– Are we creating something from scratch or enhancing an existing system? In the latter case, we will have design and technology constraints from the previous system.
– Have we identified non-functional requirements: security, performance, availability, etc.?
– Have we identified all stakeholders and their roles?
– Have we identified the key players who will help in creating the architecture (architects, business analysts, product owners)?
– Have we decided on the time/money to be spent on design activities?
– Have we identified reference material? Do we have artifacts for similar design problems, from either in-house or external sources?
– How are we going to maintain the design artifacts: wiki, Git, SVN, Confluence, etc.? Will we need to maintain versioning?
– Have we identified any guiding principles for the design, e.g. we will use open source software and tools, or we will deploy on Linux systems?
– Are there any constraints? Say the client wants to use specific third-party tools or technologies, specific compliance is required by law (e.g. multilingual support), or a service availability guarantee is needed.
– Have we identified all third-party systems with which our system will interact and how the interaction will be done?
– Are we creating the system in one go or will it be a phased delivery? Have we identified the value added by the various components being built and prioritized the delivery?
– In case of phased delivery, have we identified the scope of each phase?
– Have we identified the risks involved and how to mitigate them?
– If we are modifying or enhancing an existing system, do we understand which areas can be reused, which can be enhanced, and which must be built from scratch?
– It is better to create a formal document identifying which design artifacts are required.
– Define KPIs (Key Performance Indicators) and SLAs (Service Level Agreements).
– Have we defined acceptance criteria for the design?

Stage 1: Now we need to understand the business and what changes are needed.

– Have we understood the organization structure?
– Have we identified the business goals and objectives of the organization and what changes are required?
– Identify all business requirements; for example, "a customer should be able to return a product" is a business requirement.
– Identify and map current business processes (how the current business works, whether it fulfills all the business requirements, and whether we need to change or enhance the way things are done right now; for example, the current purchase process is manual and we want to provide online options).
– Identify changes or modifications required in business processes.

– Design artifacts to be delivered in this stage
Catalogs
— Organizations
— Actors
— Goals
— Roles
— Business Services
— Locations
— Process / Products
Matrices
— Business interaction
— Actor/ Role
Diagrams
— Business Services
— Functional Decomposition
— Product/ Process lifecycle
— Goal/ Service diagram
— Business Use cases
— Process Flow
— Event diagram

Stage 2: Focus on Data used

– What data is being used in the application? How does it originate and how is it used?
– How is the data shared securely across the enterprise?
– Create a common vocabulary and data definitions.
– Identify security measures to be taken.

– Design artifacts to be delivered in this stage
Catalogs
– Data Entities

Matrices
– Data Entity/ Business function
– Data Entity/ Application matrix

Diagrams
– Conceptual Data Diagram
– Logical Data Diagram
– Physical Data diagram
– Data lifecycle diagram
– Data Security diagram
– Data migration diagram

Stage 3: What applications are available? What changes are required and what new ones need to be created?

Application core parameters:
– Platform independence
– Ease of use
– Identify existing applications and new ones to be created at a logical level, and then map them to the physical level.

– Design artifacts to be delivered in this stage
Catalogs
– Application portfolio
– Interface catalog

Matrices
– Application/ Organization
– Role/ Application
– Application/ Function
– Application/ Interactions

Diagrams
— Application communication
— Application and user location
— Application use case
— Application details – components/ modules and services
— Application details – Layered architecture if used

Stage 4: Understand the technology working behind the scenes

Control technical diversity: this minimizes the cost of maintaining expertise.

Catalogs
— Technology portfolio

Matrices
— Application/ Technology

Diagrams
— Deployment diagram
— Environments and locations
— Communication engineering diagram (firewalls)

Stage 5: Let's consider non-functional requirements
— Security
— Performance
— Availability
— Disaster recovery
— Data backups
— Others (Project specific)

Stage 6: Post-design phase:

– Did we identify reusable artifacts and services which can be used by other projects?
– Have we conducted periodic validation that the design and the product being built are in sync?
– Has the design changed due to any change requests? Have those changes been reflected in the design artifacts?
– Have we met all the acceptance criteria that were set initially?

Data Modeling at different levels

When you are designing the database for an application, there are three core levels at which you can design it.

1. Conceptual Level: At this level you are only aware of high-level entities and their relationships. For example, you know that you have an "Employee" entity who "works for" a "Department" and "has" an "Address". You are not worried about the details.

2. Logical Level: You try to add as much detail as possible, without worrying about how it will actually be converted to a physical database structure. So you will provide the attributes for "Employee", i.e. Id, FirstName, LastName, AddressId, Salary, and define primary and foreign key relations.

3. Physical Level: This is the actual representation of your database design with exact table names, column names, data types, etc.
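To make the jump from the logical to the physical level concrete, here is a minimal sketch of how the logical "Employee" entity could be mapped onto a physical table, assuming JPA annotations are available; the table name, column names and lengths are illustrative assumptions, not the only way to do it.

// Logical entity "Employee" mapped to a physical table, assuming JPA (javax.persistence).
// Table and column names below are illustrative assumptions.
import javax.persistence.*;

@Entity
@Table(name = "EMPLOYEE")
public class Employee {

    @Id
    @Column(name = "EMP_ID")
    private Long id;                    // primary key

    @Column(name = "FIRST_NAME", length = 100)
    private String firstName;

    @Column(name = "LAST_NAME", length = 100)
    private String lastName;

    @Column(name = "SALARY")
    private Double salary;

    @ManyToOne
    @JoinColumn(name = "ADDRESS_ID")    // foreign key to the ADDRESS table
    private Address address;            // Address is another entity (not shown)
}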


More info- http://www.1keydata.com/datawarehousing/data-modeling-levels.html

Open-Closed principle Revisited

Reference: http://kamalmeet.com/system-design-and-documentation/understanding-openclosed-principle/

The open-closed principle states that your classes should be open for extension but closed for modification. One way to look at it is that when you provide a library or a jar file to a system, consumers can of course use the classes or extend them, but they cannot get into the code and update it.

At a principle level, this means you should code in a manner such that you never need to update a class once it is coded. One major reason behind this principle is that once you have a class which is reviewed and unit tested, you would not like someone to modify it and possibly corrupt the code.

How do I make sure that my class follows the open-closed principle?

Let's look at the design of this MyPizza class:

public class MyPizza {
    public void createPizza(Pizza pizza) {
        if (pizza.type.equals("Cheese")) {
            // create a cheese pizza
        } else if (pizza.type.equals("Veg")) {
            // create a veg pizza
        }
    }
}

The following pizza type classes are used with it:

class Pizza {
    String type;
}

class CheesePizza extends Pizza {
    CheesePizza() {
        this.type = "Cheese";
    }
}

class VegPizza extends Pizza {
    VegPizza() {
        this.type = "Veg";
    }
}

The above design clearly violates the open-closed principle. What if I need to add a double cheese pizza here? I will have to go to the MyPizza class and update it, which breaks the "closed for modification" rule.

How can we fix this design?

public class MyPizza {
    public void createPizza(Pizza pizza) {
        pizza.create();
    }
}

// Pizza is now abstract and declares create(), so each subtype creates itself
abstract class Pizza {
    String type;

    public abstract void create();
}

class CheesePizza extends Pizza {
    CheesePizza() {
        this.type = "Cheese";
    }

    public void create() {
        // do the creation here
    }
}

With this simple modification we make sure that we will not need to change the code in the MyPizza class even when we add new types of pizza, as the actual responsibility for creation lies with the new class being created (e.g. DoubleCheese), as shown in the sketch below.
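For illustration, here is a minimal sketch of that new type, assuming we call it DoubleCheesePizza; MyPizza stays untouched:

// Adding a new pizza type only means extending Pizza; MyPizza is not modified.
class DoubleCheesePizza extends Pizza {
    DoubleCheesePizza() {
        this.type = "DoubleCheese";
    }

    public void create() {
        // creation logic specific to a double cheese pizza goes here
    }
}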

HTTP verbs implementation for REST webservices

A REST web service would normally support four HTTP verbs: GET, POST, PUT and DELETE.

Simply put, the verbs have the following functions.

GET: requests some information from the server.
POST: sends/posts some data to the application.
PUT: requests that the sent data be stored at the given URI location.
DELETE: deletes the given entity.

The most common of these are GET and POST, as between them they can handle most requests.

Another interesting debate is when to use PUT and when to use POST, as both send some data to the server. Ideally, when we know exactly where the data should go, we use PUT, and if we want the application to take that call, we use POST. For example, if I want to add/edit user id 112, I will use PUT. But if I just say "add a user" and let the application assign the id, I will use POST. A sketch of how the four verbs might map onto a resource is shown below.
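Here is a minimal sketch of a user resource exposing the four verbs, assuming JAX-RS (javax.ws.rs) is used; the paths and placeholder bodies are illustrative assumptions, not a complete implementation:

import javax.ws.rs.*;
import javax.ws.rs.core.Response;

@Path("/users")
public class UserResource {

    @GET
    @Path("/{id}")
    public Response getUser(@PathParam("id") int id) {
        // GET: fetch information about user 'id' from the server
        return Response.ok().build();
    }

    @POST
    public Response addUser(String userJson) {
        // POST: the application decides the new user's id (e.g. 113)
        return Response.status(Response.Status.CREATED).build();
    }

    @PUT
    @Path("/{id}")
    public Response addOrUpdateUser(@PathParam("id") int id, String userJson) {
        // PUT: the client knows the exact location, e.g. /users/112
        return Response.ok().build();
    }

    @DELETE
    @Path("/{id}")
    public Response deleteUser(@PathParam("id") int id) {
        // DELETE: remove the given entity
        return Response.noContent().build();
    }
}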

More on PUT vs POST-
http://stackoverflow.com/questions/630453/put-vs-post-in-rest?rq=1

Message Oriented Middleware

In my last post I talked about what middleware is; today I will focus on its message-based implementation. Message-oriented middleware, or MOM, mostly uses message queues to send and receive data between two systems.

In simple terms, a message is anything that is sent from one system to another. MOM mostly uses XML formats, sometimes SOAP-based requests or plain text. A typical MOM setup sends a message to a message queue, or MQ, from where the receiver picks it up.

Advantages of Message Oriented Middleware

  1. Persistence: In a normal client-server architecture, we need both systems to be available for successful communication, whereas if we are using MQs, one system can still send messages even if the other is down.
  2. Support for synchronous and asynchronous communication: By default the communication is asynchronous, but we can implement a synchronous flow where the sender of a request message waits for the response from the other party.
  3. Messages can be consumed at will: If a system is busy when messages arrive (and they do not need an immediate response), it can consume them when the load is lower. For example, a lot of systems are designed to consume messages during non-business hours.
  4. Reliability: As messages are persistent, the risk of losing information is low even if one or more systems are not available. Additional security mechanisms can be implemented in the MQ layer.
  5. Decoupling of systems: Both client and server work independently, and often have no knowledge of the other end. System A creates a message and adds it to the message queue without caring who will pick it up, as long as it gets the response message (if required). So one system can be written in Java and the other in .NET.
  6. Scalability: As the machines involved in the interaction are independent of each other, it is easier to add resources at either end without impacting the whole system.
  7. Group communication: A sender can send a message to multiple queues, or the same queue can have multiple listeners. In addition, the publisher-subscriber approach can help broadcast a message.

Types of Messaging:

Point to Point: This is a simple messaging architecture where a sender directly sends a message to a receiver through a message queue.

Publisher-Subscriber (Pub-Sub): This type of communication is required when a sender wants to send messages to multiple receivers. Topics are defined, to which subscribers can subscribe and then receive the messages published on them. For example, say a core banking system triggers messages on various events like a new account being opened, a withdrawal being made, or an interest rate change. For one event, multiple other systems might want that information in order to take action; so, say, for all withdrawal events, systems like fraud detection, mobile messaging, daily reporting and account maintenance subscribe. Whenever the publisher publishes a message to the "Withdrawal" topic, all of these systems receive the message and take appropriate action. A sketch of such a publisher is shown below.
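As a rough sketch of the withdrawal example, assuming a JMS 1.1 provider such as ActiveMQ; the broker URL, topic name and message payload are illustrative assumptions:

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class WithdrawalPublisher {
    public static void main(String[] args) throws JMSException {
        // connect to the message broker (URL is an assumption)
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            // every subscriber of the "Withdrawal" topic (fraud detection,
            // mobile messaging, reporting, account maintenance) gets a copy
            Topic topic = session.createTopic("Withdrawal");
            MessageProducer publisher = session.createProducer(topic);
            TextMessage message = session.createTextMessage(
                    "<withdrawal><account>112</account><amount>500</amount></withdrawal>");
            publisher.send(message);
        } finally {
            connection.close();
        }
    }
}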

Understanding Open/Closed Principle

The open-closed principle is an important rule in software design which we need to follow when designing a software system. It states that "software code should be open for extension but closed for modification".

What does that mean? Let's take a very simple example from day-to-day life. We all use TV remotes. Say we have created very simple software for remote operations like on/off, volume increase/decrease, channel change, etc. Now what should my design considerations be?

1. Anyone should be able to extend the code to add new functionality; for example, for advanced TVs one should be able to add functionality for subtitle language selection, operator-specific options, etc.
2. At the same time, I need to make sure that no one is able to alter existing core functionality like volume increase/decrease, so that the functions which are already working perfectly do not break.

Point 1 is what we call open for extension, and point 2 states the closed for modification part. The open/closed principle greatly helps the maintainability of code, by making sure we do not break what is working fine, while keeping the flexibility to add new features at the same time. A small sketch of the idea follows.
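Here is a minimal sketch of how the remote example could look in code; the class and method names are assumptions made for illustration:

// Core operations are closed for modification (final methods),
// while new functionality is added by extension.
public abstract class RemoteControl {

    public final void volumeUp() { /* increase volume */ }

    public final void volumeDown() { /* decrease volume */ }

    public final void changeChannel(int channel) { /* switch to the given channel */ }
}

// Open for extension: an advanced TV adds features without touching the core
class AdvancedRemoteControl extends RemoteControl {

    public void selectSubtitleLanguage(String language) { /* new feature */ }
}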

Implementing Scalability

These days there is a lot of buzz around the word 'scalability' in the IT world. The concept is actually not new; I have been designing scalable systems for the last 9 years, but the idea has definitely changed in that time.

How has the meaning of scalability changed in the last few years?

If, 5-6 years back, I was creating a system to support, say, 10K users and someone had told me to make it scalable, I would have thought of building the system in such a way that it could support double, maybe 4 times, or at most 10 times the users over the next 3-4 years. But with applications like Facebook, Amazon, eBay and Twitter, the idea of a scalable system is different. Now the user base can increase exponentially in a matter of weeks. And that is what every organization wants.

What is the impact of this change?

The impact of the change is that now you do not want a system which will need a good amount of rework if your user base increases. Earlier, as the number of users grew slowly, you had time to think, design, redesign and upgrade the system; but now, as the user base can grow much faster, you want a system which can scale up within minutes, and it should be able to do so automatically.

How do we achieve scalability in today's world?

As the demand has changed, so has the technology. Cloud computing has made it much easier for us to create scalable systems. The key choices to be made while creating a scalable solution are:

1. Design: The application/code should be designed so that it does not assume an upper limit on load.

2. Database: If you are expecting your data to grow beyond a few million records, you might want to go for NoSQL over an RDBMS.

3. Hardware: You should be able to add new hardware and replicate the application on new servers within minutes. A cloud platform can help you here.

4. Load balancing: If our application is distributed/replicated over multiple servers, we will need to take care of load balancing so that no single server chokes.