When you go for a cloud-based solution with a provider like Amazon AWS, two things are important. One, you need clarity on what you are trying to achieve, and second, you need an understanding of the services the provider offers.
Both aspects are equally important. AWS provides a plethora of services which can amaze and confuse you at the same time. You might be tempted to use services which are not required for your project, unnecessarily adding to the cost. At the same time, services used without proper understanding can backfire in terms of both output and cost. For example, in one of my projects, an incorrect implementation of autoscaling ended up running unused servers, adding to the cost instead of saving it.
Additionally, one needs to be aware of all the capabilities of the service provider: for example, which database and backup services are available, and whether caching and monitoring services are provided. Otherwise you will end up putting unnecessary effort into reinventing the wheel.
Here is a good starting point for AWS usage –
When you are setting up an environment on the AWS cloud, you need to go through many steps, like creating IAM roles, security groups, databases, EC2 instances, load balancers and so on. Often one resource is dependent on another, so you have to create the components one by one, which can be time consuming. With CloudFormation scripts, the deployment steps can easily be automated. Most importantly, the script is reusable any number of times, so if I want to replicate a staging setup on production, or recreate the setup in another region, it is easily possible.
One can create a template in JSON or YAML format. The template is submitted to CloudFormation, which executes it and creates the stack, i.e. the actual environment with all the mentioned components.
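The submission step can be done from the AWS CLI. Here is a minimal sketch; the stack name my-app-stack and the template file mystack.yml are placeholders, and the commands assume the CLI is configured with credentials for your account:

```shell
# Create the stack from a local template file (stack and file names are placeholders)
aws cloudformation create-stack \
    --stack-name my-app-stack \
    --template-body file://mystack.yml \
    --capabilities CAPABILITY_IAM   # required when the template creates or references IAM resources

# Wait until all resources are created, then inspect the resulting stack
aws cloudformation wait stack-create-complete --stack-name my-app-stack
aws cloudformation describe-stacks --stack-name my-app-stack
```

Deleting the stack with `aws cloudformation delete-stack` tears down all the created resources in one go, which is what makes the template safely reusable across environments and regions.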
Another important thing is that you can not only create the infrastructure but also apply the required configuration. For example, I needed to set up an application on EC2, which I was easily able to do with the UserData section.
Here is an example
Resources:
  AppNode:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: XXXXX # type here
      ImageId: ami-XXXX # any ami here
      KeyName: XXXX # name of the key if already existing, or create a new one
      IamInstanceProfile: !Ref InstanceProfile
      NetworkInterfaces:
        - AssociatePublicIpAddress: true
          DeviceIndex: 0
          Description: ENI for bastion host
          GroupSet:
            - !Ref AppNodeSG
      UserData:
        Fn::Base64: |
          #!/bin/bash
          apt-get -y install awscli
          aws s3 cp s3://XXXX/XXXX.XXX ~/some location
          # One can install servers, download wars and deploy at runtime
  # Create another instance the same way if needed
  # Security group to give access to ssh and port 80
  AppNodeSG:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: SecurityGroup for new AppNode
      SecurityGroupIngress:
        - { IpProtocol: tcp, FromPort: 22, ToPort: 22, CidrIp: 0.0.0.0/0 } # ssh
        - { IpProtocol: tcp, FromPort: 80, ToPort: 80, CidrIp: 0.0.0.0/0 } # port 80
  InstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Roles: [S3FullAccess] # S3FullAccess Role created manually, so that my EC2 instance can access S3
A cluster, in simple terms, is a group of similar things, in this case computers or servers. A more refined explanation would be that a cluster is a group of computers working together in such a way that for the end user it appears to be one single machine. This is close to what I discussed about the implementation of virtualization, so yes, clustering is a form of virtualization.
But when we are strictly talking about software architecture, we are actually talking about using a cluster for load balancing or handling failover. For a simple web application, this would mean creating two or more similar server machines which are maintained in a cluster. There is a single point of entry which dictates which server from the cluster should fulfill an incoming request: this is load balancing. The server at the entry point can use an algorithm like round robin, or check the actual load on each server, to assign a request to one of the servers in the cluster. At the same time, if one of the machines in the cluster goes down for some reason, the other servers can share the load, and the end user will never know about the problem that occurred at the backend.
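The round-robin idea above is easy to sketch: the entry point keeps a request counter and picks the next server by taking the counter modulo the cluster size. Here is a minimal shell sketch; the server names app-1, app-2, app-3 and the request labels are hypothetical placeholders:

```shell
#!/bin/sh
# Round-robin selection over a cluster of (hypothetical) app servers.
SERVERS="app-1 app-2 app-3"
COUNT=$(echo "$SERVERS" | wc -w)

i=0
for request in req1 req2 req3 req4 req5; do
  # Next server index: (request counter mod cluster size), 1-based for cut
  n=$(( i % COUNT + 1 ))
  server=$(echo "$SERVERS" | cut -d' ' -f "$n")
  echo "$request -> $server"   # req4 wraps around back to app-1
  i=$(( i + 1 ))
done
```

A real load balancer (an AWS Elastic Load Balancer, HAProxy, nginx) adds health checks on top of this, so that a server which stops responding is simply skipped in the rotation — which is exactly the failover behaviour described above.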
Virtual, as the word hints, is something that is not real but gives the feeling of being real. In the computer world, my first interaction with a virtual machine was at the very beginning, in my school days, when we were made to work on dumb terminals which had only a monitor and a keyboard. Each terminal was attached to a central powerful machine which provided the processing power and memory.
That is a very crude example of virtualization. With hardware costs going down over the last few years, the need for dumb terminals has evaporated. But with cloud computing coming into the picture, virtualization in the IT industry has reached new scales. With cloud computing, it is both convenient and important to manage virtual machines based on the requirements or the load on the application.
Virtualization goes beyond virtual machines (hardware virtualization, software virtualization, storage virtualization and network virtualization are the key forms), but for the sake of this post's simplicity, I will stick to virtual machines. A virtual machine is simply a machine which does not physically exist but can be used like a real machine. The advantage of this type of arrangement is maximum usage of the hardware, and hence cost efficiency.
The hypervisor is the key software component which allows multiple operating systems (and hence machines) to run on one physical system. Read here on hypervisors.