Introduction
Are you looking to deploy FME Flow on Kubernetes?
This Kubernetes FAQ will answer some of the common questions we receive while users decide whether Kubernetes is an appropriate deployment option for their needs.
If you have any questions about deploying FME Flow with Kubernetes, please add a comment below or contact us.
FAQ
- Are there any significant differences when using an FME Flow deployment on Linux (Kubernetes)?
- What is the best way to integrate ArcPy / ArcGIS Pro with FME Flow on Kubernetes?
- Is it possible to use EKS (AWS Elastic Kubernetes Service) and combine Linux containers with Windows EC2 instances (for Esri compatibility)?
- Is it possible to add extra Python components?
- How can I include JDBC/ODBC drivers for database formats?
- How can an Oracle client be deployed so that an FME Flow Engine can connect to an Oracle database?
- Is it possible to manipulate the Java keystore (for example, adding extra internal CA certificates)?
- How can FME Flow Kubernetes deployments remain licensed if deployments are deleted and created?
- How can I give the FME Flow engine containers access to SMB shares to read/write data?
- Can I use an alternative ingress controller to NGINX?
Are there any significant differences when using an FME Flow deployment on Linux (Kubernetes)?
The primary concern is format support for the Linux-based FME Flow pods. Formats that require Windows-based libraries or clients (for example, Esri formats) are typically not supported. However, you can leverage Remote Engine Services hosted on a Windows VM, together with Queue Control, to route Esri-format jobs to a Windows engine service.
Some formats, such as Oracle (discussed below in this FAQ), may be supported once the required ancillary files, libraries, or drivers for Linux are provided. If third-party libraries, JDBC drivers, and so on need to be used, you may have to build your own FME Flow engine image. Some database formats (such as SQL Server) require the JDBC version of the corresponding transformer, which may require workspace modification.
Please consult the FME Transformer Gallery and All Applications & Formats Supported by FME | Safe Software to determine Linux compatibility for your desired formats.
What is the best way to integrate ArcPy / ArcGIS Pro with FME Flow on Kubernetes?
Integrating ArcPy, ArcGIS Pro, or ArcGIS Server requires Windows libraries, and our Kubernetes pods are Linux-based. The only viable solution is to use the Remote Engine Service, as mentioned above.
Is it possible to use EKS (AWS Elastic Kubernetes Service) and combine Linux containers with Windows EC2 instances (for Esri compatibility)?
Setting up a mixed Windows-Linux FME Flow deployment in Kubernetes is not advised. Although it's technically feasible, it involves considerable complexity, including the need to adjust our Helm charts, pods and specifically configure access through numerous ports. Such a setup presents significant deployment challenges and falls beyond the scope of Safe Software's technical support offerings.
Is it possible to add extra Python components?
If you have a custom Python module you need to use on FME Flow that is not part of the Python standard library, you can upload additional Python modules through the FME Flow web user interface. Please see our documentation on Importing Custom Python Modules to FME Flow.
How can I include JDBC/ODBC drivers for database formats?
You can upload the JDBC driver .jar file to the FME Flow System Share, as covered in Getting Started with JDBC.
How can an Oracle client be deployed so that an FME Flow Engine can connect to an Oracle database?
This is possible by adding a few additional files to the engine container/pod.
There are a couple of ways that you could do this with Kubernetes:
- You could use a Kubernetes ConfigMap that contains the files you need in the container, then mount them into the pod as a specific directory or file. You can read more in Configure a Pod to Use a ConfigMap | Kubernetes. This requires a manual change to the Helm chart to mount these files into the engine pod.
- You could build your own version of the container based on our engine container, adding your files after pulling our engine image from the repository. The associated Dockerfile could look something like this:
FROM safesoftware/fmeserver-engine:2020.1.3-20201002
ADD /local/file /file/in/container
ADD /another/file /another/file/in/container
You would then build a container image from this Dockerfile and push it to a Docker registry. Next, modify the Helm chart to use this container instead of our official one, or pass in the details for the custom engine image in values.yaml:
engines:
  - name: "standard-group"
    image: "<image_tag>"
    registry: "<customer_registry>"
    namespace: "<customer_namespace>"
    engines: 2
    type: "STANDARD"
    labels: {}
    affinity: {}
    nodeSelector: {}
    tolerations: []
    resources:
      requests:
        memory: 512Mi
        cpu: 200m
For a full list of parameters, please see our GitHub repo.
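As a sketch of the first (ConfigMap) option above, the manifest fragments might look like the following. All names here (oracle-client-files, oracle-config, the mount path, and the file contents) are placeholders, and the volume/volumeMount entries would need to be added to the engine pod spec through a modified Helm chart. Note also that ConfigMaps are limited to roughly 1 MiB, so full Oracle Instant Client libraries are better baked into a custom image (the second option); ConfigMaps suit small configuration files such as tnsnames.ora:

# Hypothetical ConfigMap holding small Oracle client configuration files
apiVersion: v1
kind: ConfigMap
metadata:
  name: oracle-client-files        # placeholder name
data:
  tnsnames.ora: |
    # contents of your tnsnames.ora would go here
---
# Fragment of the engine pod spec (added via a modified Helm chart)
spec:
  containers:
    - name: engine
      volumeMounts:
        - name: oracle-config
          mountPath: /etc/oracle   # placeholder mount path
  volumes:
    - name: oracle-config
      configMap:
        name: oracle-client-files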
Is it possible to manipulate the Java keystore (for example, adding extra internal CA certificates)?
Similar to deploying the Oracle client, you can add files to the web container/pod. You may also need to execute commands inside the container after the files have been added. To deploy FME Flow to a Kubernetes cluster with a trusted certificate, please follow Deploying with Kubernetes and a Trusted Certificate.
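As a hedged sketch, one way to combine both steps (adding a file and running a command against it) is to extend the web container image with your certificate. The base image tag, certificate filename, and keystore path below are placeholders and assumptions; verify the actual keystore location inside your FME Flow web image before using this:

FROM safesoftware/fmeflow-web:<tag>          # placeholder: the web image tag you deploy
# Copy your internal CA certificate into the image
COPY internal-ca.crt /tmp/internal-ca.crt
# Import it into the JVM's default truststore (the default cacerts password is "changeit")
RUN keytool -importcert -noprompt -trustcacerts \
      -alias internal-ca \
      -file /tmp/internal-ca.crt \
      -keystore "$JAVA_HOME/lib/security/cacerts" \
      -storepass changeit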
How can FME Flow Kubernetes deployments remain licensed if deployments are deleted and created?
FME Flow licensing requires that the FME Flow System Share (resources) remain persistent for the software to stay licensed. If you fully dismantle the deployment, re-licensing will be necessary. If you want to streamline this as part of an existing CI/CD workflow, licensing can be automated using the FME Flow REST API.
How can I give the FME Flow engine containers access to SMB shares to read/write data?
By default, we create a user named 'fmeserver' with a uid of '1363' and a group named 'fmeserver' with a gid of '1363'. When creating the StorageClass that you will use to mount your share into your pods, you should specify the mountOptions like so:
mountOptions:
  - uid=1363
  - gid=1363
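For example, assuming the open-source SMB CSI driver (smb.csi.k8s.io) is installed in the cluster, a StorageClass carrying these mount options might look like the following sketch. The StorageClass name, share path, and credentials Secret name are all placeholders:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fmeflow-smb                # placeholder name
provisioner: smb.csi.k8s.io        # assumes the SMB CSI driver is installed
parameters:
  source: //smb-server/share       # placeholder UNC path to your share
  csi.storage.k8s.io/node-stage-secret-name: smb-creds        # placeholder Secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
mountOptions:
  - uid=1363     # matches the fmeserver user in the engine containers
  - gid=1363     # matches the fmeserver group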
Can I use an alternative ingress controller to NGINX?
There shouldn't be any issues using an alternative ingress controller. However, this will require modification to our Helm charts, as they are currently hardcoded for NGINX.