Message processing

Queues are provided by queue microservices, which are native microservices (so they have a standard configuration.yml file) with additional configuration in the queues.yml file, where queue definitions are provided:

Figure 1. JLupin Reactive Queues Architecture.
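The exact queues.yml schema may differ between platform versions; the fragment below is only an illustrative sketch that gathers the parameters discussed in this chapter under a single queue definition. The parameter names come from this chapter, but the surrounding structure (the queues: / SIMPLE: nesting) is an assumption:

```yaml
# Hypothetical queues.yml layout; parameter names are from this chapter,
# the nesting structure is an assumption, not the official schema.
queues:
  SIMPLE:
    innerQueuesAmount: 4
    threadAmount: 8
    waitTimeBetweenCheckingTaskReadyToStartInMillis: 1000
    delaySendProcessTaskToExecuteInMillis: 100
    maxSendProbeAmount: 3
    howLongTaskInputShouldBeOnQueueWithoutResultItInMillis: 60000
    howLongTaskResultShouldBeOnQueueWithoutDownloadItInMillis: 60000
    exceptionStringToRepeat: ".*TimeoutException.*"
    maxAcceptExceptionAmount: 3
```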

Every queue consists of innerQueuesAmount inner queues. At any given moment exactly one inner queue is active for writing; the QUEUE entry point (provided by a native microservice) puts asynchronous requests (messages) from clients into it. The remaining inner queues are open in read mode for message processing (routing and execution on the target microservices by delegators).
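The rotation of inner queues can be sketched as follows. This is a minimal illustration, not JLupin code: the InnerQueueRing class and its methods are hypothetical, and only innerQueuesAmount corresponds to a real queues.yml parameter.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch of the inner-queue mechanism: one queue is open
// for writing, the rest are readable for processing by delegators.
public class InnerQueueRing {
    private final List<ConcurrentLinkedQueue<String>> innerQueues = new ArrayList<>();
    private int writeIndex = 0; // the single inner queue currently open for writing

    public InnerQueueRing(int innerQueuesAmount) {
        for (int i = 0; i < innerQueuesAmount; i++) {
            innerQueues.add(new ConcurrentLinkedQueue<>());
        }
    }

    // The QUEUE entry point puts incoming messages into the active inner queue.
    public void write(String message) {
        innerQueues.get(writeIndex).add(message);
    }

    // Called by the Queue Manager when it switches the writable queue.
    public void rotate() {
        writeIndex = (writeIndex + 1) % innerQueues.size();
    }

    // Drains all non-active inner queues, i.e. those open in read mode.
    public List<String> drainReadable() {
        List<String> out = new ArrayList<>();
        for (int i = 0; i < innerQueues.size(); i++) {
            if (i == writeIndex) continue; // skip the queue open for writing
            String m;
            while ((m = innerQueues.get(i).poll()) != null) out.add(m);
        }
        return out;
    }
}
```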

Figure 2. JLupin Reactive Queues - initial queue filling.

After each waitTimeBetweenCheckingTaskReadyToStartInMillis period (in milliseconds) the Queue Manager opens the next inner queue for writing (and closes it for message processing), while the previous one is switched to read mode and its messages start being processed. Messages are processed by threadAmount threads, no sooner than after delaySendProcessTaskToExecuteInMillis milliseconds. On each thread the message handler (the delegator) tries to send the message to the target microservice in up to maxSendProbeAmount attempts; if none of them succeeds, the message is discarded.
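The delegator's send loop described above can be sketched as a bounded retry. The Delegator class and the sendProbe callback are hypothetical names introduced for illustration; only maxSendProbeAmount refers to a real queues.yml parameter.

```java
import java.util.function.Predicate;

// Hypothetical sketch of the delegator's send loop: up to
// maxSendProbeAmount attempts, after which the message is discarded.
public class Delegator {
    // Returns true if a probe succeeded, false if the message is discarded.
    public static boolean trySend(String message, int maxSendProbeAmount,
                                  Predicate<String> sendProbe) {
        for (int attempt = 1; attempt <= maxSendProbeAmount; attempt++) {
            if (sendProbe.test(message)) {
                return true; // delivered to the target microservice
            }
        }
        return false; // all probes failed: the message is discarded
    }
}
```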

Figure 3. JLupin Reactive Queues - initial queue filling

Queues are also capable of receiving and storing responses to messages previously sent to microservices. A queue waits howLongTaskInputShouldBeOnQueueWithoutResultItInMillis milliseconds for the response; if this time is exceeded, the message is discarded. If the result comes with an exception (identified through a regular expression given by the exceptionStringToRepeat string), the associated request will be repeated up to maxAcceptExceptionAmount times. After receiving the answer, the queue keeps it for howLongTaskResultShouldBeOnQueueWithoutDownloadItInMillis milliseconds, waiting for the client to retrieve it.
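The exception-driven repeat rule can be sketched like this. The ResultPolicy class and shouldRepeat method are hypothetical; exceptionStringToRepeat and maxAcceptExceptionAmount are the queues.yml parameters from the paragraph above.

```java
import java.util.regex.Pattern;

// Hypothetical sketch: a request is repeated only while the exception text
// matches exceptionStringToRepeat and the repeat budget is not used up.
public class ResultPolicy {
    private final Pattern exceptionStringToRepeat;
    private final int maxAcceptExceptionAmount;

    public ResultPolicy(String exceptionStringToRepeat, int maxAcceptExceptionAmount) {
        this.exceptionStringToRepeat = Pattern.compile(exceptionStringToRepeat);
        this.maxAcceptExceptionAmount = maxAcceptExceptionAmount;
    }

    // Should the request be repeated for this exception, given repeats so far?
    public boolean shouldRepeat(String exceptionText, int repeatsSoFar) {
        return repeatsSoFar < maxAcceptExceptionAmount
                && exceptionStringToRepeat.matcher(exceptionText).find();
    }
}
```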

Reactive data flows

Let's assume that the web application WebApp1 makes an asynchronous request to the application microservice app_A using the SIMPLE queue located on the queues_1 queue microservice. It uses the reactive communication schema, as shown on the following diagram:

Figure 4. JLupin Reactive Queues - data flows

The whole process has the following stages:

  1. The web application WebApp1 sends an asynchronous request to a service located on app_A. It uses the SIMPLE queue on queues_1 and registers a callback function for the response. The communication is done through QUEUE entry points on the Main Server.
  2. WebApp1 receives confirmation from queues_1 (through the Main Server) that the request has been accepted for processing by the SIMPLE queue, after which the thread associated with the request is released. WebApp1 threads are not engaged in response handling until the response is ready on the queue. The response handling by WebApp1 is described in the following points (7, 8).
  3. The SIMPLE queue handles the request: through the delegator it tries up to repeatsAmountByDelegator times to find an available instance of the app_A microservice that should process the request. If a probe does not succeed, the queue delegator waits timeToWaitBetweenRepeatProbeInMillisByDelegator milliseconds before trying again. Once it succeeds, the delegator sends the request to the target microservice using the QUEUE entry point on the Main Server.
  4. The app_A microservice sends confirmation to the queues_1 microservice that the request has been accepted for processing (the communication between a queue and a target application is also asynchronous).
  5. After the request is processed, the delegator located on the QUEUE entry point of the target application microservice tries up to repeatsAmount times to find an available instance of the SIMPLE queue on the queues_1 microservice and send the response. If a probe does not succeed, the delegator waits timeToWaitBetweenRepeatProbeInMillis milliseconds before trying again. Once it succeeds, the delegator sends the response to the queue using the QUEUE entry point on the Main Server. These parameters are defined in the target application microservice's configuration file; see this chapter to get more information.
  6. The queues_1 microservice confirms successful reception of the response.
  7. If a callback function has been registered by the WebApp1 microservice while sending the request, the process of sending the response back to the client is performed. It engages a new thread from the thread pool, which executes the callback function. The response waits howLongTaskResultShouldBeOnQueueWithoutDownloadingItInMillis milliseconds to be sent to the client; after this time, the response is discarded by the garbage collection process.
  8. The confirmation of receiving the response is sent by the client (WebApp1) to queues_1 microservice.
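Steps 3 and 5 share the same "find an available instance, or wait and retry" pattern, parameterized by repeatsAmountByDelegator / timeToWaitBetweenRepeatProbeInMillisByDelegator on the queue side and repeatsAmount / timeToWaitBetweenRepeatProbeInMillis on the application side. A minimal sketch of that pattern, with a hypothetical InstanceFinder class and lookup callback:

```java
import java.util.Optional;
import java.util.function.Supplier;

// Hypothetical sketch of the instance-lookup retry used in steps 3 and 5:
// probe for an available instance, waiting between failed probes.
public class InstanceFinder {
    public static Optional<String> findInstance(int repeatsAmount,
                                                long timeToWaitBetweenRepeatProbeInMillis,
                                                Supplier<Optional<String>> lookup) {
        for (int probe = 1; probe <= repeatsAmount; probe++) {
            Optional<String> instance = lookup.get();
            if (instance.isPresent()) {
                return instance; // an available instance was found
            }
            if (probe < repeatsAmount) {
                try {
                    Thread.sleep(timeToWaitBetweenRepeatProbeInMillis);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    return Optional.empty();
                }
            }
        }
        return Optional.empty(); // no instance found after all probes
    }
}
```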

Of course, this reactive flow also works in a distributed architecture, with many nodes hosting different sets of components. For example, web applications, queues and applications may be located on different sets of nodes (at least two), as shown on the picture below:

Figure 5. JLupin Reactive Queues - data flows in distributed configuration