Single node / single instance

Overview

The plan assumes that a single instance of JLupin Platform runs on a node (operating system). Such a configuration is suitable for:

  • non-production environments
  • non-critical production environments.

Please be advised that a single node deployment on production should run on virtualized infrastructure with HA/DR mechanisms, or a secondary (passive) node should be taken into consideration. JLupin provides very effective mechanisms that keep services up & running and allow them to be managed completely online (ex. zero downtime deployment) in the case of a single node deployment, but they don't protect against hardware failure.

This deployment plan is shown in the following figure (with an example node name and its IP address):

Figure 1. JLupin Platform single node / single instance deployment plan.

Linux

Follow the instructions below:

  1. Prepare a directory for JLupin software (ex. /opt/jlupin)
  2. Add a dedicated user (group) for JLupin software (ex. groupadd jlapp && useradd -g jlapp jlapp) and log in as that user.
  3. Unzip the package into the prepared directory.
  4. (optionally) Change the node (instance) name (ex. node1_1) in the JLupin Platform runtime configuration:
ZONE:
  name: default
MAIN_SERVER:
  name: node1_1         # <-- !CHANGE!
  location: DC1
[...]
NODE_PEERS:
  node1_1:              # <-- !CHANGE! (must be the same as MAIN_SERVER->name)
    ip: '127.0.0.1'
    jlrmcPort: 9090
    queuePort: 9095
    transmissionPort: 9096
    informationPort: 9097
  5. Start JLupin Platform: /opt/jlupin/platform/start/start.sh
  6. Run the JLupin Platform CLI Console to control your JLupin environment by executing /opt/jlupin/platform/start/control.sh (run this way, it works in interactive mode)

If you would like to control JLupin through systemd units (applicable to RHEL-like Linux distributions such as CentOS, Fedora and, of course, RHEL), follow the instructions below:

  1. Create the jlupin.service unit file in the /usr/lib/systemd/system directory:
[Unit]
Description=JLupin Platform 1.5
After=network.target

[Service]
Type=forking
User=jlapp
ExecStart=/opt/jlupin/platform/start/start.sh
ExecStop=/opt/jlupin/platform/start/control.sh node shutdown

[Install]
WantedBy=multi-user.target

Change the paths according to your environment if they differ from these examples.

  2. Enable the service: systemctl enable jlupin
  3. Set JAVA_HOME explicitly, according to your environment, in /opt/jlupin/platform/start/configuration/setenv (for JLupin Platform)
  4. Set JAVA_HOME explicitly, according to your environment, in /opt/jlupin/platform/start/control/configuration/setenv (for the JLupin Platform CLI Console)
  5. Enjoy starting / stopping JLupin Platform by executing systemctl start jlupin / systemctl stop jlupin
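If you prefer to keep environment settings with the unit itself, JAVA_HOME can also be supplied by systemd through a drop-in file, provided the setenv scripts do not unconditionally override it. This is a sketch; the drop-in path and the JDK location are example values, adjust them to your environment:

```ini
# /etc/systemd/system/jlupin.service.d/java.conf -- hypothetical drop-in file
[Service]
Environment="JAVA_HOME=/usr/lib/jvm/java"
```

After creating the drop-in, reload systemd with systemctl daemon-reload before restarting the service.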

Windows

On Windows, the process is also very simple:

  1. Prepare a directory for JLupin software (ex. c:\\Program Files\\JLupin)
  2. Add a dedicated user for JLupin software (ex. JLupin) and log in as that user.
  3. Unzip the package into the prepared directory.
  4. Start JLupin Platform: c:\\Program Files\\JLupin\\platform\\start\\start.cmd
  5. Run the JLupin Platform CLI Console to control your JLupin environment by executing c:\\Program Files\\JLupin\\platform\\start\\control.cmd (run this way, it works in interactive mode)

This way of starting / stopping JLupin Platform is suitable for development purposes; for production, QA and UAT environments we advise installing JLupin as a Windows service by:

  1. Setting the name of the JLupin Windows service in c:\\Program Files\\JLupin\\platform\\start\\configuration\\setenv (SERVICE_NAME variable)
  2. Executing c:\\Program Files\\JLupin\\platform\\start\\configuration\\controll\\sbin\\installwin.cmd to install JLupin as a Windows service

Now, you can enjoy operating JLupin Platform as a Windows service through the Windows administration panel or the scripts available in c:\\Program Files\\JLupin\\platform\\start\\configuration\\controll\\sbin\\.

SSL

For security reasons, we strongly advise changing the default set of certificates and keys distributed with JLupin Platform. Each instance of JLP has a server certificate and an associated private key, located in $JLUPIN_HOME/platform/server-resources/ssl/server, and a set of client certificates that are authorized to connect to the instance of JLupin Platform, located in $JLUPIN_HOME/platform/server-resources/ssl/client.

Although each instance of JLupin Platform may have its own server certificate, this approach would be very hard to manage and maintain. We advise broadening the scope so that one server certificate is shared by each business domain running on JLupin Platform (for example ebank).

  • Generate a CSR and a private key
openssl req -out serverX509Certificate.csr  -new -newkey rsa:2048 -nodes -keyout serverPrivateKey.pk
Generating a 2048 bit RSA private key
....................................................................................................................+++
..........+++
writing new private key to 'serverPrivateKey.pk'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:PL
State or Province Name (full name) []:MALOPOLSKIE
Locality Name (eg, city) [Default City]:KRAKOW
Organization Name (eg, company) [Default Company Ltd]:JLUPIN
Organizational Unit Name (eg, section) []:R&D
Common Name (eg, your name or your server's hostname) []:ebank
Email Address []:admin@jlupin.com

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
  • Generate a self-signed certificate
openssl req -x509 -sha256 -days 365 -in serverX509Certificate.csr -key serverPrivateKey.pk -out serverX509Certificate.crt
  • Copy the certificate and the private key to the appropriate JLP directory: $JLUPIN_HOME/platform/server-resources/ssl/server

  • Update the server certificate on the client side (CLI Console, Web Console, Maven Plugin)
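The two openssl steps above can also be run non-interactively, together with a quick check that the resulting certificate really matches the private key. This is a sketch; the -subj value and file names mirror the interactive example, adjust them to your environment:

```shell
# Generate the private key and CSR without interactive prompts
openssl req -new -newkey rsa:2048 -nodes \
  -keyout serverPrivateKey.pk -out serverX509Certificate.csr \
  -subj "/C=PL/ST=MALOPOLSKIE/L=KRAKOW/O=JLUPIN/OU=R&D/CN=ebank"

# Self-sign the CSR, as in the example above
openssl req -x509 -sha256 -days 365 -in serverX509Certificate.csr \
  -key serverPrivateKey.pk -out serverX509Certificate.crt

# The two digests must be identical -- otherwise the certificate
# and the private key do not form a matching pair
openssl x509 -noout -modulus -in serverX509Certificate.crt | openssl md5
openssl rsa  -noout -modulus -in serverPrivateKey.pk       | openssl md5
```

Comparing the modulus digests is a generic OpenSSL technique for verifying a key pair; it is independent of JLupin itself.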


Single node / multiple instances

Overview

The plan assumes that multiple instances of JLupin Platform run on a single node (operating system). Such a configuration is suitable for:

  • non-production environments, to simulate a multi-node configuration on a single node or when different business domains share the same resources
  • production environments, where different business domains share the same resources

Please be advised that a single node deployment on production should run on virtualized infrastructure with HA/DR mechanisms, or a secondary (passive) node should be taken into consideration. JLupin provides very effective mechanisms that keep services up & running and allow them to be managed completely online (ex. zero downtime deployment) in the case of a single node deployment, but they don't protect against hardware failure.

This deployment plan is shown in the following figure (with an example node name and its IP address):

Figure 2. JLupin Platform single node / multiple instances deployment plan.

Configuration

The first instance (node1_1) should be prepared in the same way as in the single node / single instance plan.

The second one should be installed in an individual directory (ex. $JLUPIN_HOME/platform2 or %JLUPIN_HOME%\platform2); then follow the instructions below:

  1. Change JMX_PORT and DEBUG_PORT in the JLupin Platform initial configuration:
[...]
## Debug mode options
DEBUG_PORT=13998 # <-- !CHANGE! (ex. add 1000)
DEBUG_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=n"

## JMX configuration
JMX_ENABLED=yes
JMX_PORT=19010 # <-- !CHANGE! (ex. add 10000)
JMX_OPTS="-Dcom.sun.management.jmxremote.local.only=false -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false"
  2. Change the instance name, the ports of the logical servers and the microservice port offset in the JLupin Platform runtime configuration:
ZONE:
  name: default
MAIN_SERVER:
  name: node1_2         # <-- !CHANGE!
  location: DC1
SERVERS:
  JLRMC:
    port: 19090         # <-- !CHANGE! (ex. add 10000)
    readTimeout: 480000
    [...]
  ELASTIC_HTTP:
    port: 18082          # <-- !CHANGE! (ex. add 10000)
    readTimeout: 480000
    [...]
  QUEUE:
    port: 19095          # <-- !CHANGE! (ex. add 10000)
    readTimeout: 480000
    [...]
  TRANSMISSION:
    port: 19096          # <-- !CHANGE! (ex. add 10000)
    readTimeout: 480000
    [...]
  INFORMATION:
    port: 19097          # <-- !CHANGE! (ex. add 10000)
    readTimeout: 480000
    [...]
  INFORMATION_HTTP:
    port: 19098          # <-- !CHANGE! (ex. add 10000)
    readTimeout: 480000
    [...]
  PROCESS_MANAGER:
    isCheckAvailableOSMemoryOnMicroservicesStart: true
    microservicesPortOffset: 30000    # <-- !CHANGE! (ex. add 10000)
  [...]
  NODE_PEERS:
    node1_2:                    # <-- !CHANGE!
      ip: '127.0.0.1'
      jlrmcPort: 19090          # <-- !CHANGE! (ex. add 10000)
      queuePort: 19095          # <-- !CHANGE! (ex. add 10000)
      transmissionPort: 19096   # <-- !CHANGE! (ex. add 10000)
      informationPort: 19097    # <-- !CHANGE! (ex. add 10000)
  3. Change the ports of the virtual servers in the JLupin Edge Balancer configuration:
  • edge.conf:
[...]
http {

  include mime.types;
  include dict.conf;


  # JLupin Module Initialization: BEGIN

  init_by_lua_block {
    require "jlupin"

    -- Discovery process --
    discoveryHost = "127.0.0.1"
    discoveryPort = "19098"              -- <-- !CHANGE! (should be set to INFORMATION_HTTP port)
    discoveryConnectionTimeout = 5000
    [...]

    -- Module initialization --
    JLupinInitModule()
  }

  init_worker_by_lua_block {
    JLupinInitStartDiscoveryTimer()
    JLupinInitStartBalancerTimer()
  }

  # JLupin Module Initialization: END

  ##############################################################################
  # Data endpoints
  ##############################################################################
  include edge_servers/*.conf;

  ##############################################################################
  # Administration endpoint
  ##############################################################################
  server {
    listen  18888;              # <-- !CHANGE! (ex. add 10000)
    server_name edge_admin;
    set $server_type 'admin';
    include servers/admin.conf;
  }

  ##############################################################################
  # Discovery endpoint
  ##############################################################################
  server {
    listen  18889;              # <-- !CHANGE! (ex. add 10000)
    server_name edge_discovery;
    set $server_type 'admin';
    include servers/discover.conf;
  }
}
  • edge_servers/edge8000.conf
server {
  listen  18000;              # <-- !CHANGE! (ex. add 10000)
  server_name edge18000;      # <-- !CHANGE!
  set $server_type 'data';
  include servers/data.conf;
}

and change the file name, if you want.

  • edge_servers/edge8001.conf
server {
  listen  18001;              # <-- !CHANGE! (ex. add 10000)
  server_name edge18001;      # <-- !CHANGE!
  set $server_type 'data';
  include servers/data.conf;
}

and change the file name, if you want.

  • Apply the port offset for all custom servers, if defined.

Now, the second instance is ready to start.
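The per-instance port convention used throughout the steps above (add 10000 to every default port, 1000 to the debug port) can be sketched as simple shell arithmetic; the base ports are taken from the configuration fragments shown earlier:

```shell
# Sketch of the "add 10000" convention for the second instance's ports
OFFSET=10000
for base in 9090 8082 9095 9096 9097 9098 8888 8889 8000 8001; do
  echo "port $base -> $((base + OFFSET))"
done

# The debug port uses a smaller offset in the example above (add 1000)
echo "debug 12998 -> $((12998 + 1000))"
```

A third instance would simply apply the offset twice (ex. 29090 for JLRMC), keeping every instance's ports disjoint on the shared node.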

Note that the second instance has a TRANSMISSION port different from the default. This interface is used by a couple of management tools, whose configuration should be changed accordingly.

Independent instances

If you follow the instructions above you will set up two independent instances, which means that:

  • Each Edge Balancer provides only the set of services from the instance it is connected to (discoveryHost in edge.conf)
  • Microservices cannot invoke services from the second instance using JLupin Software Load Balancers

as shown in the following figure.

Figure 3. JLupin Platform single node / multiple independent instances.

Independent instances are useful when different business domains share the same resources. In practice it means that your OPS team gave you one server, but you need to run your microservices in two independent environments :)

Clustered instances

Two instances sharing one node can also be configured to work as a cluster. In that case all services, regardless of their location (instance), are accessible on each node, as shown in the following figure:

Figure 4. JLupin Platform single node / multiple clustered instances.

In order to cluster two instances sharing the same node you need to:

@node1_1

  • Change discoveryPeersDefaultAdminPort in edge.conf to the discovery port of the second instance; in this example it is:
    [...]
    -- Set default discovery virtual server and its protocol for peers here if you have chosen "auto" for peers discovery (discoveryPeersSource)
    discoveryPeersDefaultAdminPort = "18889"    -- <-- !CHANGE!
    discoveryPeersDefaultProtocol = "http"
    [...]
  • Perform the following command in JLupin CLI Console:
> node peer add node1_2 10.0.0.1 19090 19095 19096 19097

@node1_2

  • Perform the following command in JLupin CLI Console:
> node peer add node1_1 10.0.0.1

Now you have a cluster! :)

Of course, in a clustered environment you can run multiple instances of microservices (along with single-instance microservices) and perform load balancing and failover between them. An example of such a configuration is presented in the following figure.

Figure 5. JLupin Platform single node / multiple clustered instances (multiple instances of microservices).