
GraphQL- Schema

In a previous post, I talked about GraphQL basics and how it can simplify fetching data compared to REST. Here, we will take the next step and understand the concept of a schema in GraphQL.

When starting to code a GraphQL backend, the first thing one needs to define is a schema. GraphQL provides us with the Schema Definition Language, or SDL, which is used to define the schema.

Let's define a simple entity.

type Person {
  id: ID!
  name: String!
  age: Int!
}

Here we are defining a simple Person model with three fields: id, which is a unique identifier, name, which is a string, and age, which is an integer. The “!” mark indicates that the field is required.

Just defining the Person model does not expose any functionality. To expose functionality, GraphQL provides us with three root types, namely Query, Mutation, and Subscription.

type Query {
  person(id: ID!): Person
}

The query type above specifies that a client can send an ID and get a Person object in return.
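For illustration, a client could then fetch a person with a query like this (the id value "1" is just a placeholder):

query {
  person(id: "1") {
    name
    age
  }
}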

Next, we have mutations, which help implement the remaining CRUD operations: create, update, and delete.

type Mutation {
  createPerson(name: String!, age: Int!): Person!
  updatePerson(id: ID!, name: String!, age: Int!): Person!
  deletePerson(id: ID!): Person!
}
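For example, a client might create a person with a mutation like the following (the argument values are placeholders) and select fields from the returned object:

mutation {
  createPerson(name: "John Doe", age: 30) {
    id
    name
  }
}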

Finally, one can define subscriptions, which make sure that whenever an event like the creation, update, or deletion of an object happens, the server sends a message to the subscribing clients.

type Subscription {
  newPerson: Person!
  updatedPerson: Person!
  deletedPerson: Person!
}
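A client could, for instance, subscribe to creation events with a query like this; whenever a new person is created, the server pushes the selected fields to the client:

subscription {
  newPerson {
    id
    name
    age
  }
}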

Disclaimer: This post was originally posted by me in the cloud community – https://cloudyforsure.com/graphql/graphql-schema/

GraphQL- Getting started

GraphQL is a query language for your APIs. It helps us write simple, intuitive, precise queries to fetch data, and it simplifies the way we communicate with APIs.

Let’s take an example API, say we have a products API. In the REST world, you would define the API like

to get all product details

GET: http://path.to.site/api/products

or

to get a single product's details

GET: http://path.to.site/api/products/{id}

Now, let’s say Product has the following schema

Product

  • id: ID
  • name: String
  • description: String
  • type: String
  • category: String
  • color: String
  • height: Int
  • width: Int
  • weight: Float

Now, by default, the REST API will return all the fields associated with the entity. But say a client is only interested in the name and description of a product. GraphQL lets us write a simple query here.

query {
    product(id: 5) {
        name
        description
    }
}

This will return a response like

{
    "data": { 
        "product": {
           "name": "Red Shirt"
           "description": "Best cotton shirt available"
        }
    }
}

This looks simple, so let’s make it a bit more complicated. Say a product is associated with another entity called Reviews. This entity manages product reviews by customers.

Reviews

  • id: ID
  • title: String
  • description: String
  • username: String
  • isVerified: Boolean
  • likesCount: Int
  • commentsCount: Int
  • comments: CommentAssociation

Now you can imagine that in the case of a REST API, the client needs to call an additional API, say

http://path.to.site/api/products/{id}/reviews/

With GraphQL, we can achieve this with a single query

query {
    product(id: 5) {
        name
        description
        reviews {
            title
            description
        }
    }
}

This will return a response like

{
    "data": { 
        "product": {
           "name": "Red Shirt",
           "description": "Best cotton shirt available",
           "reviews": [
               {
                    "title": "Good Shirt",
                    "description": "Liked the shirt's color" 
               },
               {
                    "title": "Awesome Shirt",
                    "description": "Got it on sale"
               },
               {
                    "title": "Waste of Money",
                    "description": "Clothing is not good"
               }
           ]  
        }
    }
}

We can see that GraphQL solves multiple problems that exist with REST APIs. Two major problems are evident in the examples above.

Overfetching: We can explicitly mention all the data we need from an API. So even if a Product model has 50 fields, we can specify the four fields we need as part of the response.

Multiple calls: Instead of making separate calls for products and product reviews, we can make a single query call. So instead of calling five different APIs and then clubbing the data on the client side, we can use GraphQL to merge the data into a single query.

Why Graph?

As we can see in the above examples, GraphQL helps us look at data in the form of a connected graph. Instead of treating each entity as independent, we recognize the fact that these entities are related; for example, a product and its reviews.

Disclaimer: This post was originally posted by me in the cloud community – https://cloudyforsure.com/graphql/graphql-getting-started/

Istio- Getting started

I recently wrote about service mesh and how it helps ease managing and deploying services. Istio is an open-source service mesh that layers transparently onto existing distributed applications. Istio helps create a network of deployed services with load balancing, service-to-service authentication, and monitoring, with no additional coding required. Istio deploys a sidecar for each service, which provides features like canary deployments, fault injection, and circuit breakers off the shelf.

Let’s take a look at Istio at a high level.

The overall architecture of an Istio-based application.
Image source: https://istio.io/latest/docs/concepts/what-is-istio/

Data Plane: as we can see in the design above, the data plane uses sidecars (Envoy proxies) to manage traffic for the microservices.

Control Plane: the control plane provides centralized control over the network infrastructure, implementing policies and traffic rules.

Control plane functionality (https://istio.io/latest/docs/concepts/what-is-istio/):

  • Automatic load balancing for HTTP, gRPC, WebSocket, and TCP traffic.
  • Fine-grained control of traffic behavior with rich routing rules, retries, failovers, and fault injection.
  • A pluggable policy layer and configuration API supporting access controls, rate limits and quotas.
  • Automatic metrics, logs, and traces for all traffic within a cluster, including cluster ingress and egress.
  • Secure service-to-service communication in a cluster with strong identity-based authentication and authorization.

Currently, Istio is supported on

  • Service deployment on Kubernetes
  • Services registered with Consul
  • Services running on individual virtual machines

Let's take a look at the core features provided by Istio.

Traffic management

Istio helps us manage traffic by implementing circuit breakers, timeouts, and retries, and helps us with A/B testing, canary rollouts, and staged rollouts with percentage-based traffic splits.
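As a sketch of what a percentage-based traffic split looks like, the following VirtualService sends 90% of traffic to subset v1 and 10% to the canary subset v2 (the service and subset names are illustrative, and the subsets themselves would be defined in a DestinationRule):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        # 90% of requests go to the stable subset
        - destination:
            host: reviews
            subset: v1
          weight: 90
        # 10% go to the canary subset
        - destination:
            host: reviews
            subset: v2
          weight: 10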

Security

Another core area where Istio helps is security. It helps with authentication, authorization, and encryption off the shelf.

While Istio is platform-independent, using it with Kubernetes (or infrastructure) network policies, the benefits are even greater, including the ability to secure pod-to-pod or service-to-service communication at the network and application layers.

https://istio.io/latest/docs/concepts/what-is-istio/#security

Observability

Another important aspect that Istio helps with is observability. It helps manage tracing, monitoring, and logging. Additionally, it integrates with a set of dashboards like Kiali, Grafana, and Jaeger to help visualize traffic patterns and manage the services.

Additional Resources –

https://istio.io/latest/docs/

https://dzone.com/articles/metadata-management-in-big-data-systems-a-complete-1

Disclaimer: This post was originally posted by me in the cloud community – https://cloudyforsure.com/cloud-computing/istio-getting-started/

Kubernetes

Sometime back, I talked about the importance of containers in implementing microservices-based applications. Implementing such a design comes with its own challenges: once the number of services and containers grows, it becomes difficult to manage the many container deployments, their scaling, disaster recovery, availability, and so on.

Container Orchestration with Kubernetes

Because of the challenges mentioned above with container-based deployment of microservices, there is a strong need for container orchestration. A few container orchestration tools are available, but it would not be wrong to say that Kubernetes is the most popular at this point in time.

Kubernetes, or K8s as it is popularly known, provides the following features off the shelf

  • High availability – no downtime
  • Scalability – high performance
  • Disaster Recovery – backup and restore

Kubernetes Architecture

Image Source: https://kubernetes.io/docs/concepts/overview/components/

Let us take a look at how K8s achieves the features mentioned above. At the base level, K8s nodes are divided into a master node and worker nodes. The master node, as the name suggests, is the controlling node that manages and controls the worker nodes, making sure services are available and scaled properly.

Master Node

The master node contains the following components: the API server, scheduler, controller manager, and etcd.

API server: The API server exposes APIs for various operations to external users. It validates and processes any incoming request. The API can be accessed through REST calls or a command-line interface; the most popular command-line tool is kubectl.
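For example, these everyday kubectl commands are all served by the API server:

kubectl get nodes                    # list the nodes in the cluster
kubectl get pods --all-namespaces    # list pods across all namespaces
kubectl describe pod <pod-name>      # show the detailed state of one pod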

Scheduler: When a new pod is being created (we will look into pods shortly, when discussing worker nodes), the scheduler places the pod on an available node based on configuration settings like hardware and software needs.

Controller/ Control Management: As per official K8s documentation, there are four types of controllers that manage the following responsibilities.

  • Node controller: Responsible for noticing and responding when nodes go down.
  • Replication controller: Responsible for maintaining the correct number of pods for every replication controller object in the system.
  • Endpoints controller: Populates the Endpoints object (that is, joins Services & Pods).
  • Service Account & Token controllers: Create default accounts and API access tokens for new namespaces.

etcd: etcd is a key-value store that holds the cluster state at any point in time.

Worker Node

kubelet: It runs on each node, makes sure all the containers running in pods are in a healthy state, and takes corrective actions if required.

kube-proxy: It maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.

Pods and Containers: One can think of a Pod as a wrapper layer on top of containers. Kubernetes does not work with containers directly; this keeps it flexible, so that it has no dependency on any particular container runtime.
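As a minimal sketch, a Pod wrapping a single container can be declared like this (the name and image are placeholders) and created with kubectl apply -f hello-pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
spec:
  containers:
    - name: hello
      # the container this Pod wraps; any image would do
      image: nginx:1.25
      ports:
        - containerPort: 80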

Disclaimer: This post was originally posted by me in the cloud community – https://cloudyforsure.com/containers/kubernetes/

External Authorization with Envoy Proxy

In this post, we will go over three things: first, we will set up Envoy proxy on the local machine; second, we will set up a layer 4 and a layer 7 proxy; and finally, we will implement an external authorization filter.

Setup Envoy Proxy

Envoy proxy can be installed on most popular operating systems and also has a Docker installation. The site provides very good documentation for getting started – https://www.envoyproxy.io/docs/envoy/latest/start/start. For this example, I am setting up Envoy proxy on macOS.

First of all, make sure you have Homebrew installed. Then execute the following commands to install Envoy proxy

brew tap tetratelabs/getenvoy
brew install envoy

Finally, check the installation: envoy --version

Setup Layer 7 and Layer 4 Proxy

Before jumping into the difference between layer 4 and layer 7 proxies, it makes sense to revisit the OSI, or Open Systems Interconnection, model. The OSI model helps us visualize network communication as a seven-layered representation, where each layer adds functionality on top of the previous layer.

For OSI Model Refer https://en.wikipedia.org/wiki/OSI_model

Focusing on layers 4 and 7, we see a very important difference. At layer 7, we have the complete request coming in from the client, i.e., we understand what the actual request is: is it a GET request or a POST request? Is the end user asking for /employee or /department? At layer 4, all we see is data packets in raw format, so there is no way for us to know which resource is being requested.

A layer 7 implementation makes sure that the proxy receives all the incoming packets for a request, recreates the request, and makes a call to the backend server. This lets us implement features like TLS offloading, data validation, rate limiting on specific APIs, etc. For a layer 4 proxy, on the other hand, all we do is pass the received packets on to the backend server. The advantages are speed (no data processing) and a degree of security (again, we are not looking into the data). Based on application needs, one can choose either layer 7 or layer 4 proxying.

To demonstrate the Envoy proxy implementation, we will need to set up some sample applications. Here I have created a couple of “Hello World” applications locally, on ports 1111 and 2222.

$curl localhost:1111
Hello World from 1111
$curl localhost:2222
Hello World from 2222

To start with, we are setting up a layer 7 proxy. We will create an Envoy proxy config YAML file with details of the filters and clusters we are using. Here is a reference YAML – https://www.envoyproxy.io/docs/envoy/latest/configuration/overview/examples#static

static_resources:
  listeners:
    - name: listener_0
      address:
        socket_address: { address: 127.0.0.1, port_value: 8080 }
      filter_chains:
        - filters:
          - name: envoy.filters.network.http_connection_manager
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
              stat_prefix: ingress_http
              codec_type: AUTO
              route_config:
                name: local_route
                virtual_hosts:
                - name: local_service
                  domains: ["*"]
                  routes:
                  - match: { prefix: "/" }
                    route: { cluster: some_service }
              http_filters:
              - name: envoy.filters.http.router
                typed_config:
                  "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
    - name: some_service
      connect_timeout: 1s
      type: STATIC
      lb_policy: ROUND_ROBIN
      load_assignment:
        cluster_name: some_service
        endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    address: 127.0.0.1
                    port_value: 1111
            - endpoint:
                address:
                  socket_address:
                    address: 127.0.0.1
                    port_value: 2222

A couple of things to observe in the above YAML file. First, note the filter we are using: envoy.filters.network.http_connection_manager, the basic filter used to route HTTP traffic. The second important thing to observe is the source and destination ports: we are listening for traffic on port 8080 and redirecting it to ports 1111 and 2222. This is a simple example where both the Envoy proxy and the application servers run on the same local machine; in the real world that would not be the case, and we would see more meaningful use of the cluster addresses. For now, let’s run and test the Envoy proxy.

Start envoy proxy with the config YAML created.

envoy --config-path ~/Desktop/workspace/envoyexample.yaml

Once Envoy has started successfully, you will see that we are able to hit port 8080 and get the output from ports 1111 and 2222 in a round-robin fashion.

$curl localhost:8080
Hello World from 1111
$curl localhost:8080
Hello World from 2222

To convert the existing Envoy proxy config YAML to support layer 4 routing, we just need to change the name and type of filter to envoy.filters.network.tcp_proxy

static_resources:
  listeners:
    - name: listener_0
      address:
        socket_address: { address: 127.0.0.1, port_value: 8080 }
      filter_chains:
        - filters:
          - name: envoy.filters.network.tcp_proxy
            typed_config:
              "@type": type.googleapis.com/envoy.config.filter.network.tcp_proxy.v2.TcpProxy
              stat_prefix: ingress_http
              cluster: some_service
  clusters:
    - name: some_service
      connect_timeout: 1s
      type: STATIC
      lb_policy: ROUND_ROBIN
      load_assignment:
        cluster_name: some_service
        endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    address: 127.0.0.1
                    port_value: 1111
            - endpoint:
                address:
                  socket_address:
                    address: 127.0.0.1
                    port_value: 2222

You can see the YAML looks simpler, as we cannot support path-based routing in a layer 4 reverse proxy. The results remain the same: hitting localhost:8080 will still hit the backend servers running at 1111 or 2222, but as explained above, the mechanism used behind the scenes has changed.

Setup External Auth using LDAP

Usually you will have an externalized service providing the authentication feature, and you will add an auth filter to the Envoy proxy config. For this example, I will start by setting up a simple LDAP auth service.

A utility file to validate the credentials.

import java.util.Properties;

import javax.naming.Context;
import javax.naming.directory.InitialDirContext;

public class LDAPAuth {
	public static boolean authenticateJndi(String username, String password) throws Exception {
		try {
			Properties props = new Properties();
			props.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
			props.put(Context.PROVIDER_URL, "ldap://ldap.forumsys.com:389/");
			var principal = "cn=" + username + ",dc=example,dc=com";
			props.put(Context.SECURITY_PRINCIPAL, principal);
			props.put(Context.SECURITY_CREDENTIALS, password);
			// Creating the context performs an LDAP bind; it throws on bad credentials
			new InitialDirContext(props);
		} catch (Exception e) {
			return false;
		}
		return true;
	}
}

And a controller

import java.util.Base64;
import java.util.Map;

import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestHeader;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class AuthController {

	@GetMapping("/")
	public ResponseEntity<String> sayHello(@RequestHeader Map<String, String> headers) {
		headers.forEach((key, value) -> {
			System.out.println(key + " - " + value);
		});
		// Decode the Basic auth header: "Basic base64(username:password)"
		String authorization = headers.get("authorization");
		String base64Credentials = authorization.substring("Basic".length()).trim();
		String credentials = new String(Base64.getDecoder().decode(base64Credentials));

		final String[] values = credentials.split(":", 2);
		String username = values[0];
		String password = values[1];
		try {
			var success = LDAPAuth.authenticateJndi(username, password);
			if (!success) {
				return new ResponseEntity<String>(HttpStatus.FORBIDDEN);
			}
		} catch (Exception e) {
			e.printStackTrace();
		}
		return new ResponseEntity<String>(HttpStatus.OK);
	}
}

We will expose the authentication service on port 3333 for this example. Once we have that, all we need to do is introduce an authentication filter in our Envoy config for the layer 7 reverse proxy.

static_resources:
  listeners:
    - name: listener_0
      address:
        socket_address: { address: 127.0.0.1, port_value: 8080 }
      filter_chains:
      - filters:
        - name: envoy.filters.network.http_connection_manager
          typed_config:
            "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
            codec_type: AUTO
            stat_prefix: ingress_http
            route_config:
              name: local_route
              virtual_hosts:
              - name: backend
                domains:
                - "*"
                routes:
                  - match:
                      prefix: /
                    route:
                      cluster: some_service
            http_filters:
            - name: envoy.filters.http.ext_authz
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthz
                http_service:
                  server_uri:
                    uri: http://127.0.0.1:3333
                    cluster: ext-authz
                    timeout: 5s
            - name: envoy.filters.http.router
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
    - name: some_service
      connect_timeout: 1s
      type: STATIC
      lb_policy: ROUND_ROBIN
      load_assignment:
        cluster_name: some_service
        endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    address: 127.0.0.1
                    port_value: 1111
    - name: ext-authz
      connect_timeout: 1s
      type: STATIC
      lb_policy: ROUND_ROBIN
      load_assignment:
        cluster_name: ext-authz
        endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    address: 127.0.0.1
                    port_value: 3333

Once we start the Envoy proxy, we have an auth filter in place.

envoy --config-path ~/Desktop/workspace/envoyexample.yaml

To get a successful response, we now need to make sure we are sending the correct authorization header.
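For example, with curl (the credentials below are placeholders; the request succeeds only if the LDAP bind for that user works):

$curl -u myuser:mypassword localhost:8080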

Basic Auth with Envoy Proxy

HTTP/2

HTTP/2 has become a first-class citizen in the past couple of years, as development teams have started realizing its true potential. Some of the core features of HTTP/2 are

  • Data compression of HTTP headers
  • Server Push
  • Multiplexing multiple requests over a single TCP connection

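A quick way to see HTTP/2 in action is to ask curl to negotiate it; the first line of the response shows the protocol in use (any HTTP/2-capable site works here):

$curl -I --http2 https://www.google.com
HTTP/2 200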

While we talk about HTTP/2, it makes sense to touch upon HTTP/3. HTTP/3 uses the QUIC protocol (built on UDP) instead of TCP for faster communication.

REST vs GraphQL vs gRPC

Some days back I wrote about SOAP vs REST. It would not be a stretch to say that with the popularity of REST, SOAP now has very limited business usage. REST is a natural extension of HTTP-based communication, where we use HTTP verbs like GET, POST, PUT, PATCH, and DELETE to manage a resource's state. REST, as the name Representational State Transfer suggests, gives an intuitive set of APIs

GET /employees – One can conclude that this will return a list of employees
GET /employees/1 – returns employee with ID 1

Though REST is good, at times it feels like a restriction; say, for a simple use case, I want to tell the API that for a given employee it should return just the name. To handle such cases, GraphQL has gained a lot of popularity. GraphQL is a query language, a way to get exactly the data we need from APIs.

{
    employee(id: "111") {
        name
        department
    }
}

More on GraphQL queries: https://graphql.org/learn/queries/

GraphQL is a specification and not an implementation in itself, so you will find multiple client and server implementations available to choose from. For example, Facebook provides the Relay library; another popular implementation is Apollo. With typical REST APIs, you have a separate endpoint for each call. In the case of GraphQL, you have a single endpoint to which you send your query.

gRPC is an open-source Remote Procedure Call system initially developed by Google. It uses HTTP/2 for transport and protocol buffers as its underlying message interchange format. Remote procedure calls give users the feel of calling remote code as if it were available locally. The streaming approach behind the scenes makes sure the calling method receives a response as soon as it is ready on the remote machine.
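As a sketch, the employee lookup from the GraphQL example above could be described as a gRPC contract in protocol buffers; the service and message names here are illustrative:

syntax = "proto3";

// Hypothetical service mirroring the employee query above.
service EmployeeService {
  // Unary RPC: one request in, one response out.
  rpc GetEmployee (EmployeeRequest) returns (EmployeeReply) {}
}

message EmployeeRequest {
  string id = 1;
}

message EmployeeReply {
  string name = 1;
  string department = 2;
}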

A comparative view of the three approaches, i.e., REST vs GraphQL vs gRPC:

https://www.youtube.com/watch?v=l_P6m3JTyp0


Mutual TLS

Sometime back, I talked about HTTPS and discussed how SSL, or Secure Sockets Layer, can help us establish a secured connection between client and server. TLS, or Transport Layer Security, can be thought of as an updated version of the SSL protocol.

The image below shows the standard handshake process in TLS-based communication. When a client tries to communicate with the server, the server sends back its certificate along with its public key. After authenticating the server's certificate, the client sends a symmetric key encrypted with the server's public key. The server decrypts the key, and from then on, communication is encrypted with the symmetric key.

image source https://blog.cloudflare.com/protecting-the-origin-with-tls-authenticated-origin-pulls/

In the case mentioned above, we can see how the client, mostly the browser, validates the server certificate to trust it. An extension of this communication style is mutual TLS, or mTLS. As the name suggests, this refers to a scenario where both server and client need to trust each other; here, the client is usually another service or an IoT device. In this case, both the client and the server share their certificates and build mutual trust by authenticating each other's certificates.

image source https://developers.cloudflare.com/access/service-auth/mtls
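With curl, for example, the client verifies the server with a CA certificate and also presents its own certificate and key (the file names are placeholders):

$curl --cacert ca.crt --cert client.crt --key client.key https://server.example.com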

API Proxy vs API Gateway

Of late, I have been working on and exploring some API proxy tools. I have seen the terms API proxy and API gateway used interchangeably most of the time, and both do try to solve similar problems, with slight differences.

When we talk about an API proxy, we are talking about reverse proxy functionality, where we expose our existing APIs to the outside world through a proxy server. User applications do not talk to your APIs directly; they interact through the proxy. The idea is simple: you need not let everyone know the internal details of how and where your servers are deployed. An API proxy can also work as a load balancer for an API deployed on multiple servers, and it can provide additional features like monitoring, quota implementation, and rate limiting for your APIs.

An API gateway does all that and provides add-on features like orchestration; for example, when the client hits the “/products” API, the gateway might talk to multiple services and aggregate the data before sending back a response. Additionally, it can provide mediation, like converting request or response formats. It can also help expose a SOAP service as a set of REST endpoints, and vice versa. It can provide sophisticated features like handling DDoS attacks.

There is no hard boundary line between API Proxy and API Gateway, and we will find the terms being used interchangeably. Features too, at a certain level, are overlapping. So at the end of the day just look for the tool that fulfils your requirements.

Apigee – Getting started

With the popularity of microservices-based design, we end up with tens or hundreds of APIs being created for our applications. Apigee is a platform for developing and managing APIs. It provides a proxy layer for our backend APIs. We can rely on this proxy layer to provide a lot of features like security, rate limiting, quota implementation, analytics, monitoring, and so on.

The core features provided by Apigee are

Design: One can create APIs, and configure or code policies as steps in API flows. Policies can help implement features like rate limiting, quota implementation, payload transformation, and so on.

Security: One can enforce security best practices and governance policies. Implement features like authentication, encryption, SSL offloading, etc at the proxy layer.

Publish: We can provide reference documentation and manage the audience for the API from the portal.

Analyze: Drill down into usage data, investigate spikes, and trace calls in real time.

Monitor: Ensure API availability and provide a seamless experience to API consumers.

Hands on example

Getting started with Apigee is easy: go to https://apigee.com and create a free 60-day evaluation account for testing. Once logged in, choose the API Proxies option.

Create a new proxy, and we will select reverse proxy to start with.

To start with, we will use the sample API provided by Apigee – http://mocktarget.apigee.net (it just prints the string “Hello Guest”) – and provide a name, base path, and some description. You can see we have several other options, like SOAP service, which is very useful if you have an existing SOAP service and want to expose it as REST services. There is also a No Target option, which is helpful when we are creating mock APIs to test our client applications. But as mentioned, for now we are creating a reverse proxy for this example.

For the first iteration, we will not add any policy and leave all other options as default (no policy selected).

Once created, we will deploy the API to the test environment. In the free evaluation version, we are provided only two environments: test and prod.

After successful deployment, Apigee provides a proxy URL, something like https://kamalmeet-eval-test.apigee.net/first, which we can hit to get the output (Hello Guest). We hit the proxy URL, which in turn hits the actual URL, in this case http://mocktarget.apigee.net/.
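For example:

$curl https://kamalmeet-eval-test.apigee.net/first
Hello Guest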

To get an understanding of what is happening behind the scenes, we will look into the “Develop” tab.

We can see two sets of endpoint settings, a proxy endpoint, and a target endpoint. The proxy endpoint is the one exposed to the calling service. The target endpoint is the one that is communicating with the provider service.

Currently, we have not done much with the endpoints, so we see very basic XML settings. On the proxy side, all we see is the base path /first and the fact that we are exposing a secured URL. Similarly, on the target side, all we have is the target URL.

To make the API more meaningful, let’s modify the target endpoint to https://mocktarget.apigee.net/json, which returns a sample JSON like

{
    "firstName": "John",
    "lastName": "Doe",
    "city": "San Jose",
    "state": "CA"
}

We can update the same Proxy that we created earlier or create a new one. To update the existing one, just go to the develop tab and update the target endpoint URL.

When you click save, it will deploy automatically (else you can undeploy and deploy). Once done, you will see that the existing proxy URL we created (https://kamalmeet-eval-test.apigee.net/first) now returns the JSON response. So far we have not added any policy to our APIs. That is easy: in the same Develop tab, just above the XML definitions, we see a flow diagram.

We can add a step to request flow or response flow. Let’s add a policy to request flow by clicking on the +Step button. To start, let’s add a quota policy.

By default, the quota policy is configured for 2000 requests per month. For the sake of this test, let's modify it to 3 calls per minute.
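The policy XML generated in the Develop tab then looks roughly like this simplified sketch (optional attributes omitted):

<Quota name="Quota-1">
  <Allow count="3"/>
  <Interval>1</Interval>
  <TimeUnit>minute</TimeUnit>
</Quota>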

To test the quota implementation, we can go to the trace tab and hit the URL multiple times. We can see response code 429 (too many requests), caused by the quota limit.

We have implemented the quota policy, which can be very useful in stopping a bot attack on our API. Similarly, Apigee provides us with a set of policies categorized under traffic management, security, and mediation. For traffic management, we have quota, spike arrest, caching, etc. For security, we have policies like Basic Auth, JSON threat protection, XML threat protection, OAuth, and so on.

For mediation, we have policies like JSON to XML, XML to JSON, XSL transformation, etc.

In the example we created, we added a policy to the incoming request. Similarly, we can add a policy to the response; for example, if we apply the JSON to XML policy to the response, we will start receiving an XML response instead of JSON.