Why would you use a Kubernetes deployment for FME Server?

Liz Sanderson

Introduction

FME Server has been available on Kubernetes since 2019, and with growing awareness of containerized and serverless compute services, more customers are using it for their FME Server deployments or inquiring about it.
Kubernetes is still a new technology for many people, so this article explains the benefits of using Kubernetes for FME Server.

What is Kubernetes?

Kubernetes is an open source container orchestration tool, which automates the deployment, scaling and management of containerized applications. If you’re not familiar with containers, check out our blog article from when we introduced FME Server for Docker.


Kubernetes is a complex technology, so if you aren’t already familiar with Kubernetes or you’re just getting started, we recommend that you familiarize yourself with the Kubernetes Concepts before trying to deploy FME Server using Kubernetes.
The Safe Software blog has a post from when FME Server on Kubernetes was in tech preview, with links to some educational resources at the bottom.

Kubernetes solves several issues that arise with manual container management and deployment. Like Docker Compose, Kubernetes takes a declarative approach: you describe how FME Server should be installed and configured in YAML files. We have already created these YAML files so that Kubernetes deploys FME Server with the correct containers, services, networking, resources and so on.

 

What is Helm?

Helm is a package manager for Kubernetes applications, and it is the tool we use to install, upgrade and manage FME Server on Kubernetes. The YAML files mentioned above are grouped together into a Helm chart. Our Helm charts are available on GitHub.

Any variables or parameters that an FME Server administrator wishes to change are defined in a values.yaml file and applied using Helm. This file contains the configuration values for the chart and the desired state of FME Server. The list of supported parameters and their default values can be found on our GitHub.

The FME Server administrator can save their values.yaml file, which makes deploying FME Server with Kubernetes (and Helm) a quicker, easier and repeatable process.
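As a sketch of this workflow, a values.yaml might look like the fragment below. The parameter names here are illustrative only; the actual keys and defaults are documented in the chart's README on GitHub.

```yaml
# Illustrative values.yaml -- key names are examples, not the chart's
# actual parameters; consult the Helm chart documentation for those.
fmeserver:
  image:
    tag: "2021.0"
  engines:
    replicas: 2            # number of engine pods to start with
deployment:
  hostname: fmeserver.example.com

# Applied with Helm, for example:
#   helm install fmeserver <chart> -f values.yaml
```

Because the file captures the full desired state, re-running the same command against the same file reproduces the same deployment.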

 

What are the benefits of using Kubernetes?

The Kubernetes documentation has a good overview of what Kubernetes is and what it can do. Some of those features can be beneficial to an FME Server installation:

Service discovery and load balancing

Kubernetes can load balance traffic across a deployment so that it’s stable. If you have multiple FME Server core pods running (which has the core and web application server containers inside one pod), then network traffic will be distributed between those pods to avoid one pod getting overwhelmed. Kubernetes will only send traffic to pods that are running and ready.
Service discovery makes scaled FME Server deployments easier. In a traditional deployment, adding distributed components (most likely engines) may require additional networking or firewall configuration to ensure that all of the FME Server components can still communicate. With Kubernetes, when you add more nodes (hosts) or scale pods (FME Server engines or cores), it takes care of all of the networking and communication between pods for you. This benefits FME Server users who distribute or scale their engines.
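To illustrate how little configuration scaling involves, adding engines is a matter of changing a replica count and re-applying the values file; Kubernetes wires up the networking between the new pods and the core automatically. The key name below is illustrative.

```yaml
# Illustrative fragment of values.yaml -- scale engines by raising the
# replica count; no firewall or networking changes are needed.
fmeserver:
  engines:
    replicas: 4   # previously 2; Kubernetes starts two more engine pods

# Re-applied with, for example:
#   helm upgrade fmeserver <chart> -f values.yaml
```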
 

Storage Orchestration

Kubernetes can automatically mount a storage system of your choice, such as local storage, public cloud provider storage and more. This is where the FME Server System Share will reside. You need to set up your storage provider before installing FME Server.
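On a managed cluster this usually means pointing the chart at a storage class supplied by your cloud provider, which Kubernetes then uses to provision a persistent volume for the System Share. The keys below are an illustrative sketch, not the chart's actual parameter names.

```yaml
# Illustrative storage fragment for values.yaml. The storage class must
# exist in the cluster before FME Server is installed.
storage:
  fmeserver:
    class: efs                 # e.g. an EFS-backed class on AWS EKS
    size: 10Gi
    accessMode: ReadWriteMany  # the System Share is shared between
                               # core and engine pods
```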
 

Automated rollouts and rollbacks

In Kubernetes you describe the desired state for your deployed containers, and Kubernetes changes the actual state to the desired state at a controlled rate. For example, you can have Kubernetes create new containers for your deployment, remove existing containers and adopt all their resources into the new containers. This is useful for managing FME Server engines (scaling, queues, etc.) and minor version upgrades (bug fixes), but should not be used for major FME Server upgrades. Going from 2021.0 to 2021.1 is considered a major upgrade and is therefore not supported by helm upgrade. Instead, we recommend launching the new version of FME Server in a new namespace and swapping the ingress hostname once the new version is ready.
For minor FME Server upgrades, the helm upgrade command will work. For example, going from 2021.0.1 to 2021.0.2 is supported.
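The two upgrade paths can be sketched as follows; release names, namespaces and key names here are hypothetical examples.

```yaml
# Minor upgrade (e.g. 2021.0.1 -> 2021.0.2): bump the image tag in
# values.yaml and upgrade the existing release in place.
fmeserver:
  image:
    tag: "2021.0.2"
#   helm upgrade fmeserver <chart> -f values.yaml

# Major upgrade (e.g. 2021.0 -> 2021.1): do NOT helm upgrade. Install
# the new version alongside the old one in its own namespace, then
# swap the ingress hostname once it is verified:
#   helm install fmeserver <chart> -n fme-2021-1 \
#     --create-namespace -f values-2021-1.yaml
```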
 

Automatic bin packing

You provide Kubernetes with a cluster of nodes that it can use to run containerized tasks. You tell Kubernetes how much CPU and memory (RAM) each container needs. Kubernetes will fit containers onto your nodes to make the best use of your resources. For FME Server, we recommend setting this on your engines so that when Kubernetes schedules engine pods it will only put them on a node that has enough free resources. You will need to make sure that you have enough nodes, or big enough nodes that have enough resources to run your FME Server deployment, otherwise pods will get stuck in a pending state until Kubernetes can successfully schedule them.
The benefit of this is making sure that your containers have sufficient resources to be able to complete their work.
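In Kubernetes terms, this is done with resource requests and limits on the engine containers: the request is what the scheduler reserves when placing the pod, and the limit is the most the container may consume. The values and key names below are an illustrative sketch.

```yaml
# Illustrative engine resource fragment for values.yaml. The scheduler
# will only place an engine pod on a node with the requested CPU and
# memory still free.
fmeserver:
  engines:
    resources:
      requests:
        cpu: "500m"     # half a CPU core reserved per engine
        memory: "1Gi"
      limits:
        cpu: "1"        # hard ceiling; the engine cannot exceed this
        memory: "2Gi"
```

If the cluster has no node with enough free capacity, the engine pod stays Pending until one does.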

 

Self healing

Kubernetes restarts containers that fail, replaces containers, kills containers that don’t respond to your user-defined health check, and doesn’t advertise them to clients until they’re ready to serve. In the FME Server Helm Chart, we have already defined the liveness and readiness probes for the pods. 
This makes FME Server running on Kubernetes fault tolerant, as any pods or containers that get into a bad state will get restarted or replaced. If an engine pod went down, Kubernetes wouldn’t advertise that pod as available to the core, so you don’t have to worry about jobs running on unhealthy containers. Failed jobs in FME Server are re-queued, and will get processed on the next available engine.
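For context, the Helm chart's probes follow the standard Kubernetes pattern shown below. This is only a generic illustration of liveness and readiness probes, not the chart's actual probe definitions; the path and port are hypothetical.

```yaml
# Generic illustration of Kubernetes health probes in a pod spec.
livenessProbe:          # restart the container if this keeps failing
  httpGet:
    path: /fmeserver    # hypothetical endpoint
    port: 8080
  initialDelaySeconds: 60
  periodSeconds: 10
readinessProbe:         # stop routing traffic until this passes again
  tcpSocket:
    port: 8080
  periodSeconds: 5
```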
If Kubernetes is running as a service on a public cloud provider (for example, AWS EKS), nodes in the Kubernetes cluster are provided through auto-scaling groups, allowing you to easily scale nodes. This also means that if a node goes down or is accidentally removed, the auto-scaling group starts a new one to replace it. Kubernetes schedules any missing pods onto the replacement node as soon as it is available, minimizing downtime for the application.
This makes Kubernetes a good solution if you’re looking to deploy a large-scale, highly available FME Server.
 

Who should be using Kubernetes?

As Kubernetes has a steeper learning curve than Docker (and Docker Compose) we do not recommend this deployment to users who don’t already have experience with managing Kubernetes deployments.
If you want to get started with Kubernetes, we recommend taking some time to learn the technology before deploying FME Server with it. There are many resources and training courses available online.

FME Server containers are built on Ubuntu images, so this is not a good deployment option for anyone who needs Windows-based format support (e.g. Esri).

Organizations with in-house Kubernetes expertise that want to take advantage of these benefits are best suited to this deployment type.

 

Where can you deploy Kubernetes?

One of the benefits of Kubernetes is that it can be used anywhere. Many cloud providers have support for Kubernetes, such as AWS, Azure and Google Cloud Platform; however, there may be differences in deployment between them. One example is setting up the volumes for FME Server to use. Our documentation has instructions for the major cloud service providers.

It is also possible to deploy Kubernetes on-premises (or locally for testing), and we have had success using Minikube and kind.

FME Server on Kubernetes has no special licensing requirements; licensing is done through the FME Server Web UI as usual. The easily scalable nature of Kubernetes makes it well suited to Dynamic Engines.

FME Server pricing can be found here.
Documentation on licensing can be found here.
