FME Flow Troubleshooting: Log Files

Liz Sanderson

Full Guide: FME Flow Troubleshooting Guide

Log files are very helpful when debugging FME Flow (formerly FME Server). This page answers common questions about managing and troubleshooting log files. For how to use log files as a troubleshooting tool, please see FME Flow Debugging Toolbox: Troubleshooting with Logs.

In 2023, FME Server underwent a name change and is now known as FME Flow. Since this article discusses features present in previous versions of FME, it will refer to both names interchangeably, using the appropriate product name based on the year the feature was introduced. For more information on the rebranding, see our website.
 


Introduction 

FME Flow records a log for nearly every process, including jobs, automations, connections, the database, and web user interface activity. These logs are the most powerful tool available for troubleshooting issues on FME Flow. 
 

If you are not experiencing an issue with the FME Flow logs themselves, but want to find out where to find them and how to use them, please see our debugging toolbox article Troubleshooting with Logs.
 

Common Issues 

 "Can I set a custom log file location?"

Changing the log file location by editing the configuration files is not recommended; the location is specified at installation and cannot be changed, as noted in our documentation.
However, you can copy the logs to another location while keeping the original log files in place.
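If you do want a copy of the logs somewhere else (for archiving or for another monitoring tool, for example), a scheduled script or workspace can mirror the log directory instead. Below is a minimal sketch; both paths are assumptions you would replace with your own System Share and destination.

import shutil

# Both paths are examples only; adjust them to your own System Share and archive location.
source = r"C:\FMEFlowShare\resources\logs"   # System Share path, as used elsewhere in this article
destination = r"D:\fme_log_archive\logs"     # hypothetical archive location

# Mirror the log directory (dirs_exist_ok requires Python 3.8+)
shutil.copytree(source, destination, dirs_exist_ok=True)
print("Copied logs to " + destination)

You could run a script like this on a schedule outside of FME, or implement the same idea as a workspace triggered by an FME Flow schedule.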
 

 "Where are python errors logged in FME Flow?"

You might have a workspace that fails in FME Flow with "statusMessage= FME_END_PYTHON failure" but shows no error in the job log, with the last job log entries similar to the following:
INFORM|Translation was SUCCESSFUL with 0 warning(s) (56983 feature(s) output)
INFORM|FME Session Duration: 26.5 seconds. (CPU: 19.8s user, 2.9s system)

Check the <ServerName>_fmeprocessmonitorengine.log file in the resources\logs\engine\ folder for the Python error details, for example:
Fri-10-Mar-2017 03:25:42 PM   INFORM   Thread-24   FME2016-2_Engine1   Traceback (most recent call last):
Fri-10-Mar-2017 03:25:42 PM   INFORM   Thread-24   FME2016-2_Engine1     File "<string>", line 5, in MF_Include_1489177542141
Fri-10-Mar-2017 03:25:42 PM   INFORM   Thread-24   FME2016-2_Engine1     File "<string>", line 3, in ParamFunc
Fri-10-Mar-2017 03:25:42 PM   INFORM   Thread-24   FME2016-2_Engine1     File "C:\FMEFlowShare\resources\engine\Transformers\Tools.py", line 10, in <module>
Fri-10-Mar-2017 03:25:42 PM   INFORM   Thread-24   FME2016-2_Engine1       import cx_Oracle
Fri-10-Mar-2017 03:25:42 PM   INFORM   Thread-24   FME2016-2_Engine1   ImportError: No module named cx_Oracle
Fri-10-Mar-2017 03:25:42 PM   INFORM   FME2016-2_Engine1   393656 : Process "FME2016-2_Engine1" being restarted.
Fri-10-Mar-2017 03:25:42 PM   INFORM   FME2016-2_Engine1   393566 : Process "FME2016-2_Engine1" waiting for process output listener threads to terminate…

You can use Python's traceback module to include a detailed error message in the job log itself.
Here is an example shutdown script:
import traceback
import fme

try:
    # Your code here
    test = 1 / 0
except Exception:
    # Re-open the FME log file in append mode and write the full traceback to it
    with open(fme.logFileName, 'a') as fmelog:
        fmelog.write("An error occurred in the shutdown script:\n")
        fmelog.write("-" * 75 + '\n')
        fmelog.write(traceback.format_exc())
        fmelog.write("-" * 75 + '\n')
    raise  # Re-raise the original exception so the job is still reported as failed

The traceback will then appear at the end of the job log.

 

 "Can I change the logging level for jobs on FME Flow?"

In the message.properties files, you can change the DEBUG_LEVEL to SUPER_VERBOSE to increase the logging level and capture more information in the logs. If you are worried about excessive disk use, make sure the FME Flow cleanup tasks are enabled, and perhaps set them to run more frequently.

 

 "Where can I find the sub-workspace logs?"

If you publish the log file parameter in the child workspace, you can create a separate log file for that workspace by defining its location and name in the WorkspaceRunner.
 

 "Why do my FME Flow job logs have errors relating to a factory?"

If you create a workspace in a newer version of FME Form (formerly FME Desktop) and run it on an older version of FME Flow, you might see an error similar to:
ERROR |The clause 'FACTORY_NAME Router and Unexpected Input Remover COMMAND_PARM_EVALUATION SINGLE_PASS MULTI_READER_KEYWORD ARCGISFEATURES_1' within 'FACTORY_DEF * RoutingFactory FACTORY_NAME Router and Unexpected Input Remover COMMAND_PARM_EVALUATION SINGLE_PASS MULTI_READER_KEYWORD ARCGISFEATURES_1 INPUT FEATURE_TYPE * ROUTE ARCGISFEATURES ARCGISFEATURES_1::CURRENT TO FME_GENERIC ::CURRENT ALIAS_GEOMETRY MERGE_INPUT Yes OUTPUT ROUTED FEATURE_TYPE *' is incorrect. It must look like: FACTORY_NAME <name>

Unfortunately, workspaces created in a newer version of FME will most likely not work on an older version of FME Flow. In general, you will be okay as long as both versions are within the same major release.
 

 "Why are my job logs being deleted daily?"

It could be the internal resourcechecker task being triggered because there is too little available disk space on your server. Check logs/service/current/fmecleanup.log for critical messages.
To modify the thresholds, look in fmeServerConfig.txt under the "Cleanup Watch" headings, specifically (an illustrative excerpt follows this list):
  • CLEANUP_WARN_MINDISKSIZE_GB
  • CLEANUP_WARN_MINDISKSIZE_PCT
  • CLEANUP_CRITICAL_MINDISKSIZE_GB
  • CLEANUP_CRITICAL_MINDISKSIZE_PCT
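For illustration, the relevant entries in fmeServerConfig.txt look something like the following. The values shown are examples only, not recommended settings, and the exact layout may differ between FME Flow versions:

CLEANUP_WARN_MINDISKSIZE_GB=20
CLEANUP_WARN_MINDISKSIZE_PCT=10
CLEANUP_CRITICAL_MINDISKSIZE_GB=10
CLEANUP_CRITICAL_MINDISKSIZE_PCT=5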
You can also configure notification topics to, for example, send you an email if the available disk space drops below the WARN or CRITICAL levels.

Also, check whether you are backing up your data and where those backups are stored on your system.

To ensure that logs and other files do not consume too much disk storage over time, FME Flow is configured by default to delete select files based on certain conditions, including their age and whether they have been auto-archived. For more information, please see Cleaning Up FME Flow Logs and Other Files.

imageimage"Why are my jobs being moved into the 'old' folder?"

Log files are further sub-categorized into either a current or an old directory. They are auto-archived from the current to the old directory when either of the following occurs:
  • A new log file of the same name is created.
  • The size of the file has reached 5000 lines of text.

You can control how log files are auto-archived and other properties of log files by editing the messagelogger.properties file. For more information, see Message Logger Properties.
 

 "Is it possible to email the job log or job statistics from FME Flow?"

When you register a workspace with FME Flow, you can choose to post a message to a topic on job success or failure; this message contains the job ID.


You can create another workspace that subscribes to the topics posted above and receives this message as JSON. When you publish the log emailer workspace to FME Flow, register it with the Notification Service and send the Notification Data to the JSON reader.


An FMEFlowLogFileRetriever will use the job ID to return the log contents inside an attribute. To extract the job statistics/summary at the end of the log, use a StringSearcher to pull out the information you need, and then use the StringSearcher result in the body of the email sent to your user (a sample pattern is sketched below).
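As an illustration of the kind of pattern the StringSearcher could use, here is a minimal Python sketch that pulls the translation summary out of log text. The sample line is taken from the job log excerpt earlier in this article; in the workspace, the text would come from the attribute populated by the FMEFlowLogFileRetriever.

import re

# Sample job log line; in the workspace this text comes from the log attribute.
log_text = "INFORM|Translation was SUCCESSFUL with 0 warning(s) (56983 feature(s) output)"

pattern = r"Translation was (\w+) with (\d+) warning\(s\) \((\d+) feature\(s\) output\)"
match = re.search(pattern, log_text)
if match:
    status, warnings, features = match.groups()
    print(status, warnings, features)   # e.g. SUCCESSFUL 0 56983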
Please see the attached workspace template (built for FME 2016.0): joblogtoemail.fmwt
Alternatively, you could use the Emailer transformer in FME 2016.1+.
The job status information can also be acquired via the REST API using the Transformation Manager endpoint (/transformations/jobs/id/<jobid>/result).
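As an illustration, the same endpoint can be called from Python with the requests library. The host name, token, and job ID below are placeholders, and the base path may differ between REST API versions of FME Flow:

import requests

host = "https://myfmeflowhost"   # placeholder host
token = "my-api-token"           # placeholder FME Flow API token
job_id = 1000                    # placeholder job ID

response = requests.get(
    host + "/fmerest/transformations/jobs/id/" + str(job_id) + "/result",
    headers={
        "Authorization": "fmetoken token=" + token,
        "Accept": "application/json",
    },
)
response.raise_for_status()
print(response.json())           # job status, timings, and messages as JSON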
 

 "Can I turn off the logging info only for one workspace on FME Flow?"

You can turn off the logging information in FME Workbench (Tools > FME Options > Translation) and save the workspace.
Then upload it to FME Flow; the workspace will carry the information about what to log.
 

 "Do any log files show which python dll is being used?"

When an FME Flow Engine processes its first job (after starting or restarting), it records this information in the job log:
8   INFORM   0.0   0.0   2016-12-20 14:52:22   Using FME's provided Python interpreter from `C:\Program Files\FMEFlow\Server\fme\fmepython27\python27.dll'
9   INFORM   0.0   0.0   2016-12-20 14:52:22   Python version 2.7 loaded successfully
10  INFORM   0.0   0.1   2016-12-20 14:52:22   FME_BEGIN_PYTHON: evaluating python script from string...
11  INFORM   0.0   0.1   2016-12-20 14:52:22   FME_BEGIN_PYTHON: python script execution complete.
 
This information will be displayed for each FME Flow Engine you have licensed.
Subsequent jobs run on the same FME Flow Engine will not post the Python interpreter information seen on lines 8 and 9 until the engine restarts.
When an FME Flow Engine starts up, it reads this information and stores it. This is why, if you change the Python interpreter for an FME Flow Engine, you must restart the Windows services for the change to take effect.
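If you need to confirm from within a particular job which interpreter it is using, a small startup script can write the details to the translation log. This is only a minimal sketch, and it assumes the fmeobjects module is available to the engine's interpreter:

import sys
import fmeobjects

# Write the interpreter details to the translation log so they show up in the job log.
logger = fmeobjects.FMELogFile()
logger.logMessageString("Python executable: " + str(sys.executable), fmeobjects.FME_INFORM)
logger.logMessageString("Python version: " + sys.version.replace("\n", " "), fmeobjects.FME_INFORM)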
You can control how many successful or failed translations are required before an FME Flow Engine restarts by changing the values of MAX_TRANSACTION_RESULT_SUCCESSES and MAX_TRANSACTION_RESULT_FAILURES in fmeServerConfig.txt.
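For illustration only, the entries look something like this; the values shown are examples, not defaults, and the exact layout in your fmeServerConfig.txt may differ:

MAX_TRANSACTION_RESULT_SUCCESSES=100
MAX_TRANSACTION_RESULT_FAILURES=10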
 

 "Can I write the FME Flow log files to a database?"

The log information shown on the FME Flow job web page is written to the FME Flow database.
You can access that database (PostgreSQL by default) using an FME workspace and write the information out in the required format.
You can also use the Text File reader in FME Workbench to read a log file and write its text lines to a database if required.
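As a simple illustration of the text-reader approach outside of Workbench, the sketch below reads a job log line by line and writes the lines to a database table. The log path is a placeholder, and sqlite3 is used only to keep the example self-contained; swap in the driver for your target database (for example psycopg2 for PostgreSQL).

import sqlite3

# Placeholder path; point this at a real job log under your System Share.
log_path = r"C:\FMEFlowShare\resources\logs\engine\current\jobs\job_1000.log"
job_id = 1000

conn = sqlite3.connect("fme_logs.db")
conn.execute("CREATE TABLE IF NOT EXISTS job_log (job_id INTEGER, line_no INTEGER, line TEXT)")

with open(log_path, encoding="utf-8", errors="replace") as log:
    rows = [(job_id, i, line.rstrip("\n")) for i, line in enumerate(log, start=1)]

conn.executemany("INSERT INTO job_log VALUES (?, ?, ?)", rows)
conn.commit()
conn.close()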
 

 "How are child workspaces logged in FME Server 2017?"

There was a change in 2017.0: the FMEServerJobSubmitter (with Wait for Jobs to Complete set to Yes and Submit Jobs set to In Parallel) calls jobs in a different way. The idea is to allow FME Server to run multiple jobs simultaneously (assuming there are multiple engines). You will see each child job as a separate job, with its own job ID and its own log file.

 

 "Can I download FME Flow log files in FME Workbench?"

If you have multiple jobs on FME Flow and need to download the log file for each job, you can use the FMEFlowLogFileRetriever transformer.
 

 "Has anyone used Kibana to monitor FME Log files?"

You can pull certain FME Flow logs into an Elasticsearch database and use Kibana to monitor/analyze the activities in the logs. 


You can create a 'Pipeline Configuration File' (please see the attachment) in Logstash to extract the required information from a CSV file. Elasticsearch can then be used to create indices that Kibana uses for data discovery, visualization, and dashboards. This was tested with Elasticsearch 2.2.0, Logstash 2.2.0, and Kibana 4.4.0.
The process may differ slightly with newer versions of Elasticsearch.
You can follow the Elasticsearch documentation for building out the process.
 

Additional Resources 

 

Are you still experiencing issues?

Please consider posting to the FME Community Q&A if you are still experiencing issues that are not addressed in this article. There are also different support channels available.
 

Have ideas on how to improve this?

You can add ideas or product suggestions to our Ideas Exchange.
