Container Deployments
The Logi Analytics Platform and Logi applications can be bundled into and executed from container environments, such as Docker. Some of our customers have had success deploying their Logi applications in this manner.
Logi Professional Services staff may be able to assist you with such a deployment, for a fee. However, we don't recommend one container technology over another, and we don't certify that Logi applications will work in a container.
Docker Instructions
A best practice for Docker is to separate the Tomcat container and Logi Scheduler into their own nodes behind a load balancer (for information about load balancing, see Load Balancing Configuration).
Copy the Logi Info app to the Linux host (follow the configuration steps in the Deployment Checklist below for details):
Must have security configured with users, and rdSecureKey sharing configured for multiple nodes
Must have data connections set up for the production configuration
Must have rdError sharing set up for multiple nodes (complete with a custom error page)
Must have an OEM license
Must have Scheduler connections set up to work behind a load-balanced endpoint
Pull the Tomcat image from Docker Hub
Run the Tomcat container and enter a shell to install the Scheduler:
docker run -it tomcat bash
Install the Scheduler
From a second shell, commit the changes to the Tomcat Docker image as tomcat
Create a Dockerfile to build an image that includes the Logi app and Scheduler content:
Transfer the Info app to the container
Transfer a script that starts both the Scheduler and Tomcat (see the logiStart.sh sketch after the Dockerfile Example below)
Run the start-up script(s)
Expose the ports required by Tomcat and the Scheduler
Build a new image (you must do this every time you make a change to the Info app):
docker build -t tomcat .
Run Tomcat
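Taken together, the command-line portion of the steps above might look like the following. This is only a sketch: the container ID is a placeholder, and the flags on the final docker run command (detached mode, publishing port 8080) are assumptions.
# Pull the official Tomcat image and open a shell inside a running container
docker pull tomcat
docker run -it tomcat bash
# After installing the Scheduler inside the container, commit it from a second shell
docker ps                          # note the container ID
docker commit <container-id> tomcat
# Build the image from the Dockerfile example below and run it
docker build -t tomcat .
docker run -d -p 8080:8080 tomcat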
At this point, you will have a container with your Logi app working on port 8080. The following steps show you how to use Docker Compose to stand up multiple instances of Info and load-balance them with Nginx:
Write a docker-compose.yml file (see the Docker-Compose Example below)
Run Docker Compose:
docker-compose up -d
Scale Info up:
docker-compose scale app=2
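To verify the result, you can list the running services and send a request through the proxy. This check is a sketch; the InfoGo context path is taken from the Dockerfile example below, so adjust it to your application's name.
docker-compose ps                  # should show two app containers plus the nginx container
curl -I http://localhost/InfoGo/   # Nginx on port 80 proxies the request to one of the Tomcat instances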
Useful Links
The following links are resources for implementing Docker:
How to maintain Session Persistence (Sticky Session) in Docker Swarm
Swarm 1.12 Routing data to specific container sticky session
Nginx:
Multiple executables in container script:
Bind Mount:
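For reference, the sticky-session behavior described in the links above can also be achieved with a hand-written Nginx configuration using the ip_hash directive. This is a minimal sketch, not content from those links; the upstream server names and ports are assumptions based on the Compose example below.
# nginx.conf (sketch): ip_hash pins each client IP to the same backend container
upstream logi_app {
    ip_hash;
    server app_1:8080;
    server app_2:8080;
}
server {
    listen 80;
    location / {
        proxy_pass http://logi_app;
        proxy_set_header Host $host;
    }
}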
Dockerfile Example
Below is a file that accompanies the directions mentioned in the Docker-compose-scale-with-sticky-sessions link above. This example is not meant to be run; it is simply a guide for creating your own:
# Start from the official Tomcat image
FROM tomcat
MAINTAINER Author Name <david.abraham@logianalytics.com>
# Copy the Logi Info application and the start-up script into the image
ADD /InfoGo /usr/local/tomcat/webapps/InfoGo
ADD logiStart.sh /usr/local/tomcat/logiStart.sh
# Start the Scheduler and Tomcat via the start-up script
#CMD ./usr/local/tomcat/logiStart.sh
CMD ["/usr/local/tomcat/logiStart.sh"]
# Ports used by the Scheduler (56982) and Tomcat (8080)
EXPOSE 56982
EXPOSE 8080
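The Dockerfile above copies a logiStart.sh script into the image. A minimal sketch of such a script is shown below; the Scheduler installation path and start command are assumptions and depend on how the Scheduler was installed in your image.
#!/bin/bash
# logiStart.sh (sketch): start the Logi Scheduler in the background,
# then run Tomcat in the foreground so the container stays alive
/opt/LogiScheduler/scheduler.sh start &     # assumed Scheduler install path and start script
exec /usr/local/tomcat/bin/catalina.sh run  # standard Tomcat foreground start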
Docker-Compose Example
Below is an example of a docker-compose file:
app:
  image: tomcatdiscovery
  environment:
    - VIRTUAL_HOST=localhost
    - VIRTUAL_PORT=8080
    - USE_IP_HASH=1
  volumes:
    - /home/logise/Docker/share:/usr/local/tomcat/webapps/InfoGo/share
nginx:
  image: tpcwang/nginx-proxy
  ports:
    - "80:80"
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
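In this file, VIRTUAL_HOST and VIRTUAL_PORT tell the nginx-proxy container how to route traffic to the app containers, and USE_IP_HASH appears to enable ip_hash-based sticky sessions. The shared volume lets all scaled Info instances use the same share folder, and mounting docker.sock read-only lets the proxy discover app containers as they are started and stopped.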