
API Gateways - a practical introduction

Christian Schwörer & Constantin Weißer February 12, 2019

API gateways - also known as edge services - are a fundamental part of a cloud-native microservice architecture. They form the central access point for all requests to the backend services. In this article we give a practical introduction to the subject. First, the various tasks of an API gateway are presented. Then concrete implementation options are outlined - ranging from basic reverse-proxy functionality through cloud platform solutions to programmatic gateway frameworks.

The purpose of API gateways

Fig. 1 shows a simple microservice architecture in which two clients - a mobile app and an Angular single-page application (SPA) - communicate with three backend services (Users, Images and Comments). This structure has some disadvantages:

  • Many communication links: The large number of links means that every client has to know every backend service. This becomes problematic at the latest when the backends change, for example because a service is split into two smaller services.
  • Same-origin policy: In order for the clients to be allowed to communicate with the various backend services, a cross-origin resource sharing (CORS) exception must be defined for each backend. This applies in particular when using JavaScript clients such as the Angular SPA mentioned above.
  • Protection of internal endpoints: As can be seen in Fig. 1, there is a connection between the Users and the Images service. Since these are public endpoints, they can also be reached by the clients, even if this is not desired.
  • Cross-cutting concerns: In addition, there are a number of cross-cutting concerns that must be implemented in every backend service. These include, for example, initial authentication, SSL termination, or the checking and setting of specific (security) headers such as CSRF tokens or HSTS. Protective mechanisms such as rate limiting and throttling also belong here.

These disadvantages can be compensated for by introducing an API gateway, which sits, so to speak, at the edge of the backend system (hence the name "edge service") and serves as the central access point.

As the basic structure in Fig. 2 shows, the communication from the clients to the backend services always takes place via the API gateway. The clients therefore only need to know its address - the gateway then forwards the requests to the specific service. Its main task is thus reverse proxying, i.e. encapsulating the backend services from the clients. The cross-cutting concerns mentioned above can be implemented in this central place, so that they do not have to be provided separately by each service. The CORS rules can also be defined uniformly here in order to satisfy the same-origin policy.

Basic functionality: reverse proxying

While API gateway products often provide a multitude of features, one simple piece of functionality is of central importance: reverse proxying. Why? Because it allows several different endpoints to be combined under a single one. Without reverse proxying, the three example services Users, Comments and Images have to be reached at three different addresses. Worse still: if the services are scaled, the endpoints multiply further. This quickly leads to a confusing collection of endpoints on the client side. If there are clients that cannot be controlled (e.g. a rolled-out mobile app that users do not have to update), the changes that can be made to the endpoints are very limited.

As a rule, a load balancer is used to combine equivalent endpoints, for example three instances of the Users service. For different kinds of endpoints, e.g. several distinct services, a reverse proxy is used. These terms describe both the concepts and concrete implementations thereof. Often, however, several concepts are combined in one implementation.

We consider NGINX [1] as an example. The web server can be operated as a reliable reverse proxy with a simple configuration and is therefore used in this way in many platforms. Google's API management offering "Cloud Endpoints", for example, is based on it. The relevant part of an NGINX configuration looks like this:

Listing 1: NGINX configuration
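
A minimal sketch of such a configuration might look as follows (the upstream hostnames and ports are illustrative):

server {
    listen 80;

    # Forward /users requests to the Users service
    location /users {
        proxy_pass http://users-service:8080;
    }

    # Forward /images requests to the Images service
    location /images {
        proxy_pass http://images-service:8080;
    }

    # Forward /comments requests to the Comments service
    location /comments {
        proxy_pass http://comments-service:8080;
    }
}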

Each location section describes a path offered by the web server. Instead of serving local files on that path, the proxy_pass directive can be used to specify other servers to which requests are forwarded. In the case of several instances, you can either use a load balancer and refer to its address, or extend the configuration with upstream server groups with little effort and thus distribute the load with NGINX itself.

In addition to the configuration, the fundamental question arises of how to operate the software reliably. Installing software by hand on machines, virtual or physical, is really a thing of the past. As an initial setup during development it can of course still be done this way, but for productive use it is important to make the setup reproducible. The familiar configuration management tools can of course be used for this.

Today, application or container runtimes are mostly used in the cloud to run applications. A prominent example is Cloud Foundry. In fact, it is possible on the platform to configure a reverse proxy with very little effort. This solution is presented in more detail in the following section.

Platform solution: Cloud Foundry

Cloud Foundry is a popular platform for running applications in the cloud [2]. The supported runtimes include, for example, Java, Go and Python. The platform provides very convenient tooling to quickly deploy and scale applications; the environment for a single running application is provided by Cloud Foundry. In the case of a Java application, the executable artifact is a jar file. The command line tool cf (CF CLI) gives the developer the ability to push an application directly to the Cloud Foundry instance for execution, i.e. to upload the executable jar file and start the application.
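
For example, an executable jar can be uploaded and started with a single command (application name and jar path are illustrative):

cf push user-service -p build/libs/user-service.jar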

By default, each application is assigned a so-called service route. Behind this is a DNS subdomain under which exactly this application can be reached. If an application is scaled to more than one instance, the load is distributed between the instances transparently for the caller - Cloud Foundry automatically places a load balancer in front of each application. Nevertheless, with several different applications (e.g. several microservices) you still have several endpoints. When the microservice landscape changes, the disadvantages described above appear: the client has to be adapted constantly in order to know all endpoints. In many cases this is not even possible. So what options do we have to get the functionality of an API gateway on Cloud Foundry? We consider two different approaches:

Use of an API gateway as an application

Let's take any implementation of an API gateway, for example the Spring Cloud Gateway presented later, Netflix Zuul or, for simple cases, NGINX. If the software can be deployed on Cloud Foundry, it is suitable for use.

Like any other application, the gateway application receives a service route. Other applications within the platform can likewise be reached via their routes. So instead of using the individual routes of the Users, Comments and Images services in the client, we only call the route of the API gateway application. Fig. 3 illustrates the difference.

This solution requires the maintenance of another application, even if this is often mainly configured and only rarely programmed. Nevertheless, you have to keep an eye on the additional effort for the team. At the same time, however, we use the advantages of the platform: Cloud Foundry supports the developer with the entire lifecycle of the gateway application. APM and monitoring solutions that are already in use only need to be extended to this additional application. In addition, this approach offers the greatest possible flexibility. Which specific implementation is used for the API gateway does not matter and can also be changed over time according to requirements. It should be noted, however, that the endpoints of the individual services are still public and accessible without an API gateway.

Cloud Foundry platform features

But Cloud Foundry also offers a minimal solution with no operational overhead: The service routes described above can also be configured with simple commands so that they use path information for routing. In addition to the automatically created service route, path-based routes can be configured in this way. Listing 2 shows the corresponding CF-CLI commands:

Listing 2: Route mapping with Cloud Foundry

cf map-route user-service pcf-url.com --path users --hostname mygw
cf map-route image-service pcf-url.com --path images --hostname mygw

After executing these commands, there is a new subdomain mygw.pcf-url.com on which, in turn, the two paths /users and /images are available. /users is routed to the application user-service and /images accordingly to image-service. This solution offers little configuration leeway, but it is very simple and practically maintenance-free for developers.

Programmatic solutions

However, if special requirements have to be implemented in the API gateway that the reverse proxy or platform solutions presented so far cannot cover, a number of gateway frameworks are available that enable a programmatic implementation. Most of these frameworks follow the structure shown in Fig. 4.

They allow you to define filters that are applied to the routed requests:

  • Pre-filters: Evaluate the request before it is forwarded to the backend service. If the request meets certain conditions - for example if a certain header is present - the request is modified. Requests can also be rejected directly here, for example if they are invalid.
  • Routing: This is where the actual forwarding to the backend service takes place.
  • Post-filters: Are applied to the response before it is returned to the client. This means that the response can still be manipulated.

Spring Cloud Gateway

Spring Boot [3] and Spring Cloud [4] are two JVM frameworks that build on one another for creating and operating cloud-native microservice application landscapes.

Spring Cloud has long offered a Netflix Zuul-based API gateway [5]. However, the underlying Zuul version has some disadvantages: neither HTTP/2 nor WebSockets are supported. The more serious problem, however, is that the framework can only handle requests in a blocking manner, i.e. for each incoming request a thread is blocked until the response from the backend service has been received and processed.

However, with the release of Spring Boot 2.0, which is based on Spring Framework 5 and the Project Reactor, the framework offers the ability to create non-blocking, reactive applications. Consequently, the decision was made to publish Spring Cloud Gateway [6], a gateway framework based entirely on the reactive Spring ecosystem. Spring Cloud Gateway has therefore been the recommended solution since the Greenwich.RC1 [7] version of Spring Cloud.

Scenario

The following describes how you can use Spring Cloud Gateway to create your own edge service. The scenario that is to be implemented can be seen in Fig. 5.

The API gateway has the task of performing a "token exchange". To do this, a cookie with the name customer id is read out. If the cookie is present, a bearer token is created and set in the Authorization header. The modified request is then routed to one of three backend services. Consequently, a pre-filter is required that reads out the cookie and sets the header, as well as a routing component that forwards the request to the correct backend.

Note: In order to reduce the code examples to the essentials, only the value of the cookie is read out and copied into the Authorization header. The actual generation of the bearer token is not discussed.

The code examples are written in Kotlin - but this is not a requirement. Since Spring Boot / Cloud is a JVM framework, Java is of course also possible. An executable Spring Boot application including backend services with all code examples can be found on GitHub [8].

Project setup

The easiest way to create the project structure for an edge service based on Spring Cloud Gateway is via the Spring Initializr. At https://start.spring.io all you have to do is select the "Gateway" dependency (see Fig. 6). A project archive can then be exported and imported into the development environment.
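
If Gradle is chosen in the Initializr, the generated build script pulls in the gateway starter roughly like this (a sketch; the matching Spring Cloud BOM must also be configured):

dependencies {
    implementation("org.springframework.cloud:spring-cloud-starter-gateway")
}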

Routing

Spring Cloud Gateway offers two options for defining routes: on the one hand - as usual in Spring Boot - by means of configuration in the application.yml, on the other hand via a fluent API DSL. Since the first option differs little from other Spring Boot configurations, Listing 3 shows the use of the fluent API DSL.

Listing 3: Routing with the Fluent API DSL
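
A minimal sketch of such a configuration in Kotlin (the class name, route id and backend address are illustrative and follow the scenario; further routes for the Images and Comments services would be added analogously):

import org.springframework.boot.autoconfigure.SpringBootApplication
import org.springframework.boot.runApplication
import org.springframework.cloud.gateway.route.RouteLocator
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder
import org.springframework.context.annotation.Bean

@SpringBootApplication
class ApiGatewayApplication(private val authorizationFilter: AuthorizationFilter) {

    // Programmatic route definition for the edge service
    @Bean
    fun customRouteLocator(builder: RouteLocatorBuilder): RouteLocator =
        builder.routes()
            // Route "users": forward all requests matching /users/** to the Users backend
            .route("users") { r ->
                r.path("/users/**")
                    .filters { f -> f.filter(authorizationFilter) }
                    .uri("http://localhost:8081")
            }
            .build()
}

fun main(args: Array<String>) {
    runApplication<ApiGatewayApplication>(*args)
}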

At the top, the application class is marked as a SpringBootApplication. The actual route configuration follows below: a bean customRouteLocator provides an implementation of the RouteLocator interface. Routes can now be added via the method RouteLocatorBuilder.routes().

In the route block, the route users is configured. Via the path predicate, it is specified that all requests whose path matches the pattern /users are forwarded to the backend service at http://localhost:8081/users. Via filters, the filters that are to be applied to these requests are defined - in the example, the AuthorizationFilter.
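
For comparison, the same route could be declared in the application.yml roughly as follows (a sketch; note that a custom filter such as the AuthorizationFilter would additionally have to be provided as a GatewayFilterFactory in order to be referenced by name here, which is one reason the example uses the fluent API):

spring:
  cloud:
    gateway:
      routes:
        - id: users
          uri: http://localhost:8081
          predicates:
            - Path=/users/**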

In the simple example, requests are forwarded to backend services running on localhost. Of course, that does not make sense in a cloud environment. As a rule, a cloud-native architecture uses a service registry such as Netflix Eureka or Consul, with which services register and deregister. In that case, routing would go via a DiscoveryClient that determines the current address of a backend service instance from this service registry; Spring Cloud Gateway supports this with the lb:// URI scheme, which resolves the target instance from the registry at request time.

Pre-filter to modify the request

The structure of the AuthorizationFilter is shown in Listing 4.

Listing 4: Filters for manipulating the requests
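
A minimal sketch of such a filter in Kotlin (the cookie name customer-id is an assumption based on the scenario; as noted above, the cookie value is simply copied into the header instead of generating a real bearer token):

import org.springframework.cloud.gateway.filter.GatewayFilter
import org.springframework.cloud.gateway.filter.GatewayFilterChain
import org.springframework.http.HttpHeaders
import org.springframework.http.HttpStatus
import org.springframework.stereotype.Component
import org.springframework.web.server.ServerWebExchange
import reactor.core.publisher.Mono

@Component
class AuthorizationFilter : GatewayFilter {

    override fun filter(exchange: ServerWebExchange, chain: GatewayFilterChain): Mono<Void> {
        // Read the customer-id cookie from the incoming request
        val cookie = exchange.request.cookies.getFirst("customer-id")

        // Abort with HTTP 400 if the cookie is missing or empty; the backend is not called
        if (cookie == null || cookie.value.isBlank()) {
            exchange.response.setStatusCode(HttpStatus.BAD_REQUEST)
            return exchange.response.setComplete()
        }

        // Copy the cookie value into the Authorization header of the request
        val mutatedRequest = exchange.request.mutate()
            .header(HttpHeaders.AUTHORIZATION, "Bearer ${cookie.value}")
            .build()

        // Hand the modified request on to the next filter in the chain
        return chain.filter(exchange.mutate().request(mutatedRequest).build())
    }
}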

The component implements the GatewayFilter interface and overrides the filter() method. The method signature already shows the asynchronous, reactive implementation: the return value is of type Mono - the reactive counterpart of CompletableFuture. First, the cookie is read from the request. If it does not exist or does not contain a value, processing can be aborted immediately and the HTTP status code 400 is returned to the client. This means that the backend service is not called at all. If the cookie is present, the Authorization header is set on the request. The modified request is then passed on to the GatewayFilterChain so that the next potential filter can be executed.

With these two simple classes, the requirements described in the scenario are implemented. The Spring Boot application can now be deployed in the cloud - for example as a Cloud Foundry app - and acts there as an edge service.

Alternative gateway frameworks

In addition to the Spring Cloud Gateway presented here, there is a whole range of other frameworks on whose basis a custom edge service can be implemented. For example, Netflix released version 2 of Zuul as open source in May 2018 [9]. Its technical basis is the asynchronous, non-blocking Netty framework [10]. An integration into Spring Cloud is not planned by the Spring community. Details on creating an API gateway with Zuul 2, including code examples, can be found in this blog post [11]. In addition, there are a number of other frameworks such as KrakenD [12] or Tyk [13], to name just two written in Go.

Conclusion

There are essentially two decisions to be made: should an API gateway be used and, if so, which solution should be used?

API gateway: yes or no?

The two key arguments for a gateway are simplicity for the clients and the encapsulation of the microservice architecture. If there is more than one endpoint, we believe it is essential to bundle them. Maintaining "endpoint lists" in clients should be avoided, as these quickly become confusing.

Since an architecture is seldom static but evolves, the endpoints also change again and again over time. Services may be split or merged. If no gateway is used, the clients are directly exposed to these changes. If software is rolled out (e.g. a mobile app), the development team can no longer guarantee that users are running the latest version. In this case, a gateway makes it possible to keep the API compatible even if the microservice landscape keeps changing.

But you always have to keep an eye on the challenges an API gateway brings with it. First and foremost, this is the single point of failure that the gateway represents in the architecture. This results in a number of requirements regarding the availability, resilience and capabilities of the gateway. It must be at least as available as the most demanding service behind it requires. It must also support all communication technologies that the called services require (for example HTTP/2 or WebSockets).

Choosing the right API gateway solution

When it comes to choosing a specific product, we advise starting as simply as possible. Some products promise innumerable features, but that also increases complexity. Exchanging the product later is usually possible even in production and is completely transparent for the clients. It is therefore worth waiting for the exact requirements to emerge and only switching to more complex solutions later if necessary.

You should also avoid moving business logic or "features" from the services into the gateway. This often leads to confusing, difficult-to-maintain systems with many dependencies. In a microservice architecture in particular, it raises difficult questions about responsibility (see: Avoid overambitious API gateways [14]).

In summary, it can be said that there is a suitable solution for every application and infrastructure. According to the motto "You aren't gonna need it" (YAGNI), it is advisable to start with a simple implementation and only expand or replace it when necessary.
