How to tail FME Flow log files and forward into Datadog

Liz Sanderson

Introduction

This article discusses integrating FME Flow (formerly FME Server) with Datadog. The example assumes the desired logs are stored on-premises. However, the steps would be similar if you ran FME Flow on a virtual machine.

Datadog has several log file integration options.

  1. Custom log forwarder: Write your own forwarder that sends logs to Datadog over TCP or HTTP.

  2. Application log collection: Configure FME Flow at the code level to generate logs and send them directly to Datadog.

  3. Tail files: Using the Datadog Agent, you can tail FME Flow logs and forward them to Datadog.

This article will walk you through how to tail FME Flow logs and get them into Datadog. Although there isn't a native integration between FME Flow and Datadog, tailing files works extremely well.

Process Walkthrough

Setting up the Datadog Agent

Install the Datadog Agent on your server and complete the initial setup. Then enable log collection by opening datadog.yaml (found under C:\ProgramData\Datadog, or in the Agent GUI under Settings) and setting logs_enabled: true.
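For reference, this is a single top-level key in datadog.yaml (log collection is disabled by default):

logs_enabled: true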

Now the Agent is set up and ready to send logs to Datadog. The next step is to configure which FME Flow log files to send. Start by navigating to Logs -> Log Configuration -> Add a Log Source in the Datadog UI. Select Server, then Custom Files.

Based on your input, the UI will walk you through the changes to make in the Agent’s configuration directory. For example, to generate the necessary YAML for tailing fmeserver.log, fill in these values:

  • Path: C:/ProgramData/Safe Software/FMEFlow/resources/logs/core/current/fmeserver.log
  • Service: fmeserverlog
  • Source: fmeserver

As the UI states, you will create a new folder in the Agent’s conf.d directory called fmeserver.d. In this folder, create a new conf.yaml file, copy/paste the generated YAML from the Datadog UI, and save the file. The final path to this new file will be: C:\ProgramData\Datadog\conf.d\fmeserver.d\conf.yaml.
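The generated YAML should look roughly like the following. This is a sketch of Datadog’s standard custom-file log configuration, using the values from the list above:

logs:
  - type: file
    path: "C:/ProgramData/Safe Software/FMEFlow/resources/logs/core/current/fmeserver.log"
    service: fmeserverlog
    source: fmeserver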

If you also want to tail job logs, you can use wildcards in the path to match multiple job log files. To do this, follow the same steps as above with the following values:

  • Path: C:/ProgramData/Safe Software/FMEFlow/resources/logs/engine/current/jobs/*/*.log
  • Service: fmeserverlog
  • Source: fmeserver_jobs

In this case, the new folder you will create in the Agent’s directory will be called fmeserver_jobs.d.
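The resulting C:\ProgramData\Datadog\conf.d\fmeserver_jobs.d\conf.yaml should look roughly like this sketch, again following Datadog’s custom-file format:

logs:
  - type: file
    path: "C:/ProgramData/Safe Software/FMEFlow/resources/logs/engine/current/jobs/*/*.log"
    service: fmeserverlog
    source: fmeserver_jobs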

You will need to restart the Agent to get Datadog to pick up these changes.

Creating Log Pipelines

Next, we will need to create a pipeline for each type of log file (e.g., fmeserver.log and the job logs). The Datadog documentation explains how to use a Grok Parser to extract the information we need. The parser uses rules that we define to extract this information and assign it to reserved Datadog attributes. The main ones we are concerned with are status, message, and date.

Navigate to Logs -> Log Configuration -> Pipelines and select Add a new pipeline. Each pipeline will contain processors that remap our attributes.

Creating Server Log Pipeline:

  1. Pipeline:
    1. Filter: source:fmeserver
    2. Name: fmeserver.log
  2. Grok Parser
    1. Name the processor: Grok Parser
    2. Define parsing rules:
parseFMEServerLog %{date("EEE-dd-MMM-yyyy hh:mm:ss.SSS a", "<Timezone_Offset>"):date}\s+%{notSpace:status}\s+%{data:message}

Because FME Flow log files use the local timezone of the host server (i.e., not UTC), you will need to replace <Timezone_Offset> with the appropriate timezone offset for your server. See this section of the documentation for supported values.
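To illustrate, the rule above would parse a line shaped like this hypothetical example (the message text is invented), assigning the timestamp to date, INFORM to status, and the remainder to message:

Tue-05-Mar-2024 01:23:45.678 PM INFORM   Sample FME Flow Core message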

  3. Status Remapper
    1. Name the processor: Status Remapper
    2. Set status attribute(s): status
  4. Message Remapper
    1. Name the processor: Message Remapper
    2. Set message attribute(s): message
  5. Date Remapper
    1. Name the processor: Date Remapper
    2. Set date attribute(s): date

Creating Job Log Pipeline:

  1. Pipeline
    1. Filter: source:fmeserver_jobs
    2. Name: Job Logs
  2. Grok Parser
    1. Name the processor: Grok Parser
    2. Define parsing rules:
noSpaceAfterStatus %{date("yyyy-MM-dd HH:mm:ss", "<Timezone_Offset>"):date}\|\s+%{number:cpu_time}\|\s+%{number:system_time}\|%{word:status}\|%{data:message}

spaceAfterStatus %{date("yyyy-MM-dd HH:mm:ss", "<Timezone_Offset>"):date}\|\s+%{number:cpu_time}\|\s+%{number:system_time}\|%{word:status}\s*\|%{data:message}

As before, update <Timezone_Offset> with a supported timezone offset value.
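To illustrate, these rules would parse a line shaped like this hypothetical example (the message text is invented), with 0.3 as cpu_time, 0.1 as system_time, and INFORM as status:

2024-03-05 13:23:45|   0.3|   0.1|INFORM|Sample job log message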

  3. Status Remapper - same as server log pipeline
  4. Message Remapper - same as server log pipeline
  5. Date Remapper - same as server log pipeline

When you are done, each of your two pipelines should contain a Grok Parser followed by the Status, Message, and Date Remappers.

Exploring Logs

At this point, our log files should be visible within the Datadog UI. Navigate to Logs -> Search & Analytics -> Explorer. In a new tab, log in to FME Flow and run a workspace. After some time, we should start to see records from both the server log and the associated job log.

We can filter our logs from here by different attributes (time, status, source, etc.). This general template can be applied to tail additional logs from FME Flow and forward them into Datadog.
