
Event Carried State Transfer – Microservice Design Patterns

Overview:

In this tutorial, I would like to show you one of the Microservice Design Patterns – Event Carried State Transfer – which helps us achieve data consistency among Microservices.

Event Carried State Transfer:

Modern application technology has changed a lot. A traditional monolithic architecture contains all the modules of an application, backed by a single database with all the tables for all the modules. When we move from the monolith to a Microservice architecture, we also split our big fat DB into multiple data sources, and each service manages its own data.

Having different databases and data models brings advantages to our distributed systems architecture. However, with multiple data sources, the obvious challenge is how to maintain data consistency among all the Microservices when one of them modifies the data. The idea behind the Event Carried State Transfer pattern is this: when a Microservice inserts/modifies/deletes data, it raises an event that carries the data along with it, and the interested Microservices consume the event and update their own copy of the data accordingly.
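To make the idea concrete, here is a minimal in-memory sketch of the pattern (all class and method names here are made up for illustration; the rest of this article builds the real thing with Kafka). The key point is that the event carries the complete new state, so subscribers never have to call back to the publisher for details.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// In-memory sketch of Event Carried State Transfer.
public class EventCarriedStateTransferDemo {

    // Hypothetical event payload carrying the user's full state
    record UserUpdatedEvent(long userId, String firstname, String email) {}

    // The list of subscribers stands in for a Kafka topic
    static final List<Consumer<UserUpdatedEvent>> subscribers = new ArrayList<>();

    // The "order service" keeps its own local copy of user data
    static final Map<Long, UserUpdatedEvent> orderServiceUserCopy = new HashMap<>();

    static void publish(UserUpdatedEvent event) {
        subscribers.forEach(s -> s.accept(event)); // broker delivery, simplified
    }

    public static void main(String[] args) {
        // order-service subscribes and refreshes its copy on every event
        subscribers.add(e -> orderServiceUserCopy.put(e.userId(), e));

        // user-service changes a user and raises an event carrying the new state
        publish(new UserUpdatedEvent(1L, "vins", "admin@vinsguru.com"));

        // order-service can now serve user data without calling user-service
        System.out.println(orderServiceUserCopy.get(1L).email());
    }
}
```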

Sample Application:

In this example, let’s consider a simple monolith application with modules like user-module, product-module and order-module.

Our DB for the above application has the tables below.

CREATE TABLE users(
   id serial PRIMARY KEY,
   firstname VARCHAR (50),
   lastname VARCHAR (50),
   email varchar(50)
);

CREATE TABLE product(
   id serial PRIMARY KEY,
   description VARCHAR (500),
   price numeric (10,2) NOT NULL,
   qty_available integer NOT NULL
);

CREATE TABLE purchase_order(
    id serial PRIMARY KEY,
    user_id integer references users (id),
    product_id integer references product (id),
    price numeric (10,2) NOT NULL
);

When I need to find all the users’ orders, I can write a simple join query like this, fetch the details and show them on the UI.

select 
    u.firstname,
    u.lastname,
    p.description,
    po.price
from
    users u,
    product p,
    purchase_order po
where 
    u.id = po.user_id
and p.id = po.product_id
order by u.id;

That was easy! Now let’s assume that we move to a Microservice architecture. We have a user-service, product-service and order-service, and each service has its own database. The order-service data looks like this:

[
   {
      "userId":1,
      "productId":1,
      "price":300.00
   },
   {
      "userId":2,
      "productId":1,
      "price":250.00
   },
   {
      "userId":2,
      "productId":2,
      "price":650.00
   },
   {
      "userId":3,
      "productId":3,
      "price":320.00
   }
]

Now in the above case, when we look for all the users’ orders, we cannot simply write a join query across the different data sources as we did earlier. We first have to send a request to the order-service. Once we get the response, based on the userId and productId values it contains, we then have to send requests to the user-service and the product-service to get the user and product details, process the data and show it on the UI. That is a lot of work: many HTTP calls and network latency to deal with, all of which badly affect the application’s performance.
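To make that cost concrete, here is a rough sketch of what this composition looks like from the caller’s side (illustrative only; the maps below stand in for HTTP calls to the other services, and none of these names come from the article’s code). With N orders we pay 2N extra round trips on top of the order-service call.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Sketch of the "join in the application" approach across microservices.
public class OrderAggregationSketch {

    record Order(long userId, long productId, double price) {}

    // These maps simulate remote lookups against user-service / product-service
    static final Map<Long, String> userServiceLookup = Map.of(1L, "vins guru", 2L, "john doe");
    static final Map<Long, String> productServiceLookup = Map.of(1L, "ipad", 2L, "macbook");

    // One call to order-service, then 2 extra "network calls" per order
    static List<String> aggregate(List<Order> orders) {
        List<String> rows = new ArrayList<>();
        for (Order o : orders) {
            String user = userServiceLookup.get(o.userId());          // simulated HTTP call
            String product = productServiceLookup.get(o.productId()); // simulated HTTP call
            rows.add(user + " bought " + product + " for " + o.price());
        }
        return rows;
    }
}
```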

It also creates tight coupling among the microservices, which is bad! What happens when the user-service is not available? It would make the order-service fail as well, which we do not want!

One possible solution, which might sound like very bad advice at first, is keeping the user and product information in the purchase_order collection itself in MongoDB, as shown here.

{
   "user":{
      "id":1,
      "firstname":"vins",
      "lastname":"guru",
      "email": "admin@vinsguru.com"
   },
   "product":{
      "id":1,
      "description":"ipad"
   },
   "price":300.00
}

In this approach, the order-service itself has all the information needed to show the data on the UI. It does not depend on other services like the user-service or the product-service to provide it. It is loosely coupled.

Disadvantages:

Why it might sound very bad: the data is stored redundantly. What if the user changes his name or email? What if the product description is updated? In the traditional approach that was not a problem, but here the order-service would not have the updated information; it would have stale data whenever the user or product info changes.

Advantages:

Redundant data and the additional disk space are really not a problem nowadays, as data storage is very cheap! And we can update the user details in the order-service whenever they are updated in the user-service; it happens asynchronously. Eventual consistency is the trade-off for the performance and resilient design we get!

Let’s see how we can maintain updated data across all the microservices using Kafka, to avoid the above-mentioned problem!

Kafka Infrastructure Setup:

We need to have a Kafka cluster up and running along with ZooKeeper. Take a look at these articles first if you have not already!

As part of this article, we are going to update the order-service’s copy of the user details asynchronously whenever the user details are updated in the user-service. For that, we are going to create a topic called user-service-event in our Kafka cluster.
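Assuming the Kafka CLI tools are on the PATH and the 3-broker cluster from the setup articles is running on ports 9091–9093, the topic can be created roughly like this (a sketch; adjust partitions and replication factor to your cluster):

```shell
# Create the user-service-event topic on the 3-broker cluster
# (the broker ports match this article's configuration)
kafka-topics.sh --create \
  --bootstrap-server localhost:9091,localhost:9092,localhost:9093 \
  --replication-factor 3 \
  --partitions 3 \
  --topic user-service-event
```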

User Service:

The user-service uses PostgreSQL via Spring Data JPA and needs the below dependencies.

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
  <groupId>org.postgresql</groupId>
  <artifactId>postgresql</artifactId>
  <scope>runtime</scope>
</dependency>
<dependency>
  <groupId>org.springframework.kafka</groupId>
  <artifactId>spring-kafka</artifactId>
</dependency>

@Entity
public class Users {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    private String firstname;
    private String lastname;
    private String email;
    
   // getters and setters

}
@Repository
public interface UsersRepository extends JpaRepository<Users, Long> {
}
public class UserDto {
    private Long id;
    private String firstname;
    private String lastname;
    private String email;

    // getters & setters

}
public interface UserService {
    Long createUser(UserDto userDto);
    void updateUser(UserDto userDto);
}

@Service
public class UserServiceImpl implements UserService {

    private static final ObjectMapper OBJECT_MAPPER = new ObjectMapper();

    @Autowired
    private UsersRepository usersRepository;

    @Autowired
    private KafkaTemplate<Long, String> kafkaTemplate;

    @Override
    public Long createUser(UserDto userDto) {
        Users user = new Users();
        user.setFirstname(userDto.getFirstname());
        user.setLastname(userDto.getLastname());
        user.setEmail(userDto.getEmail());
        return this.usersRepository.save(user).getId();
    }

    @Override
    @Transactional
    public void updateUser(UserDto userDto) {
        this.usersRepository.findById(userDto.getId())
                .ifPresent(user -> {
                    user.setFirstname(userDto.getFirstname());
                    user.setLastname(userDto.getLastname());
                    user.setEmail(userDto.getEmail());
                    this.raiseEvent(userDto);
                });
    }

    private void raiseEvent(UserDto dto){
        try{
            String value = OBJECT_MAPPER.writeValueAsString(dto);
            this.kafkaTemplate.sendDefault(dto.getId(), value);
        }catch (Exception e){
            e.printStackTrace();
        }
    }
}
@RestController
@RequestMapping("/user-service")
public class UserController {

    @Autowired
    private UserService userService;

    @PostMapping("/create")
    public Long createUser(@RequestBody UserDto userDto){
        return this.userService.createUser(userDto);
    }

    @PutMapping("/update")
    public void updateUser(@RequestBody UserDto userDto){
        this.userService.updateUser(userDto);
    }

}
spring:
  datasource:
    url: jdbc:postgresql://localhost:5432/userdb
    username: vinsguru
    password: admin
  kafka:
    bootstrap-servers:
      - localhost:9091
      - localhost:9092
      - localhost:9093
    template:
      default-topic: user-service-event
    producer:
      key-serializer: org.apache.kafka.common.serialization.LongSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer

Order Service:

The order-service uses MongoDB via Spring Data MongoDB and needs the below dependencies.

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.kafka</groupId>
  <artifactId>spring-kafka</artifactId>
</dependency>

@Document
public class PurchaseOrder {

    @Id
    private String id;
    private User user;
    private Product product;
    private double price;
   
    // Getters & Setters

}

public class Product {

    private long id;
    private String description;

    // Getters & Setters

}

public class User {

        private Long id;
        private String firstname;
        private String lastname;
        private String email;

        // Getters & Setters

}
@Repository
public interface PurchaseOrderRepository extends MongoRepository<PurchaseOrder, String> {

    @Query("{ 'user.id': ?0 }")
    List<PurchaseOrder> findByUserId(long userId);
    
}
public interface PurchaseOrderService {
    List<PurchaseOrder> getPurchaseOrders();
    void createPurchaseOrder(PurchaseOrder purchaseOrder);
}

@Service
public class PurchaseOrderServiceImpl implements PurchaseOrderService {

    @Autowired
    private PurchaseOrderRepository purchaseOrderRepository;

    @Override
    public List<PurchaseOrder> getPurchaseOrders() {
        return this.purchaseOrderRepository.findAll();
    }

    @Override
    public void createPurchaseOrder(PurchaseOrder purchaseOrder) {
        this.purchaseOrderRepository.save(purchaseOrder);
    }
    
}
public interface UserServiceEventHandler {
    void updateUser(User user);
}

@Service
public class UserServiceEventHandlerImpl implements UserServiceEventHandler {

    private static final ObjectMapper OBJECT_MAPPER = new ObjectMapper();

    @Autowired
    private PurchaseOrderRepository purchaseOrderRepository;

    @KafkaListener(topics = "user-service-event")
    public void consume(String userStr) {
        try{
            User user = OBJECT_MAPPER.readValue(userStr, User.class);
            this.updateUser(user);
        }catch(Exception e){
            e.printStackTrace();
        }
    }

    @Override
    @Transactional
    public void updateUser(User user) {
        List<PurchaseOrder> userOrders = this.purchaseOrderRepository.findByUserId(user.getId());
        userOrders.forEach(p -> p.setUser(user));
        this.purchaseOrderRepository.saveAll(userOrders);
    }
}
@RestController
@RequestMapping("/order-service")
public class OrderController {

    @Autowired
    private PurchaseOrderService purchaseOrderService;

    @GetMapping("/all")
    public List<PurchaseOrder> getAllOrders(){
        return this.purchaseOrderService.getPurchaseOrders();
    }
    
    @PostMapping("/create")
    public void createOrder(@RequestBody PurchaseOrder purchaseOrder){
        this.purchaseOrderService.createPurchaseOrder(purchaseOrder);
    }

}
spring:
  data:
    mongodb:
      host: localhost
      port: 27017
      database: order-service
  kafka:
    bootstrap-servers:
      - localhost:9091
      - localhost:9092
      - localhost:9093
    consumer:
      group-id: user-service-group
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.LongDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer

Demo:

A GET request to the order-service’s /order-service/all endpoint returns the below response.

[
    {
        "id": "5dcfb1056637311008e17f80",
        "user": {
            "id": 1,
            "firstname": "vins",
            "lastname": "guru",
            "email": "admin@vinsguru.com"
        },
        "product": {
            "id": 1,
            "description": "ipad"
        },
        "price": 300
    }
]

Now let’s update the user details with a PUT request to the user-service’s /user-service/update endpoint using the below payload.

{
    "id": 1,
    "firstname":"vins",
    "lastname": "guru",
    "email": "admin-updated@vinsguru.com"
}

The user-service publishes the event to the user-service-event topic, the order-service consumes it and updates its copy, and querying the order-service again returns the updated details.

[
    {
        "id": "5dcfb1056637311008e17f80",
        "user": {
            "id": 1,
            "firstname": "vins",
            "lastname": "guru",
            "email": "admin-updated@vinsguru.com"
        },
        "product": {
            "id": 1,
            "description": "ipad"
        },
        "price": 300
    }
]

Source Code:

The source is available here.

Summary:

We were able to maintain data consistency across all the microservices using Kafka. This approach avoids many unnecessary network calls among the microservices, improves their performance and makes them loosely coupled. For example, the order-service does not have to be up and running when user details are updated via the user-service: the user-service raises an event, and the order-service can consume it whenever it comes back up, so the information is not lost. The old approach makes the microservices tightly coupled in such a way that all the dependent microservices have to be up and running together; otherwise the system becomes unavailable.

 
