Single Edits PostgreSQL: Push data from the Database to an Application in Real-Time with Webhooks

Liz Sanderson

FME Version

  • FME 2023.0


With FME Flow (formerly FME Server), you can manage PostgreSQL database changes using triggers and Automations to push data in real-time. It’s possible for database triggers to send jobs directly to workspaces on FME Flow using workspace webhooks. However, using this method misses out on the additional functionality that Automations has to offer, like webhook triggers, multi-workspace chaining, email notifications, and much more. By combining the power of FME Flow with PostgreSQL database triggers, the options are endless when it comes to applying these procedures to your organization’s workflows.

So why should you care about database triggers? Imagine you have a central internal database with external application datasets that need to be synchronized. It would require a lot of time and manual work to update the application dataset every time a change occurs in the central database. When using database triggers with FME Flow, Automations will do all the work for you in an efficient and streamlined process.



Requirements

  • PostgreSQL 12
    • PostGIS extension
    • PL/Python (Python 3)
  • ArcGIS Online account (minimum Creator license). If you do not have a licensed AGOL account, you can use a shapefile instead for Parts 5 and 6.



How It Works

  1. Pothole data will be loaded into a PostGIS database using a simple FME Flow App that sends job requests to a workspace on FME Flow.
  2. A trigger function in the database will detect any edits and use a supplementary function to send a JSON-formatted message to an Automation webhook trigger. 
  3. The Automation contains a message-handling workspace to parse out the ID of the edited record, database action that was executed, and date when the edit was made. The result is written into an ArcGIS Online feature service, which acts as a real-time change log.

Note: The workspaces, Automation and SQL scripts used throughout this article are meant to be relatively basic to illustrate the potential of FME Flow for your database applications. The code snippets and scripts can be customized to fit your environment. While a spatial database is being used in this article, the same methods can be applied to non-spatial data.


Step-By-Step Instructions

This tutorial was tested with FME Workbench 2023.0, FME Flow 2023.0.1, PostgreSQL 12.4, and Python 3.7 (via PL/Python). There may be some differences in the user interface, terminology, and functionality if you are using different versions, but the fundamentals are mostly the same. Note that we will be using the PostgreSQL plpython3u extension for a Python 3 interpreter instead of plpythonu (Python 2).


Part 1: Set up PostgreSQL and Python

The first step is to install PostgreSQL and use it as the central database. For this demonstration, we will be using PostgreSQL 12. After installing PostgreSQL, open Stack Builder to install PostGIS and PL/Python. To learn more about the Stack Builder utility, please read the documentation.

Select PostGIS and the EDB Language Pack. While you can install Python from a different source, the EDB Language Pack is one of the most straightforward options. For details on PL/Python and PostgreSQL version compatibility across operating systems, consult the PostgreSQL documentation. It is crucial that you run the Command Prompt as an administrator when setting up your system environment variables; otherwise, PostgreSQL won't be able to locate the Python interpreter.

Using pgAdmin or a similar tool, create a new database and run CREATE EXTENSION postgis to spatially enable the database. Run CREATE EXTENSION plpython3u to enable PL/Python.
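Run from the query tool while connected to the new database, the two commands look like this:

```sql
-- Spatially enable the database, then enable the Python 3 procedural language.
CREATE EXTENSION postgis;
CREATE EXTENSION plpython3u;
```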


Note: If you encounter the error “The specified module could not be found” when attempting to add plpython3u to your database, you may need to verify that your environment variables and Python have been set up properly. The error message is slightly misleading; what it actually means is “Cannot find dependency DLL python3x.dll in PATH”.


Part 2: Set up the Automation

At this point, we want to create an Automation, which will receive the incoming JSON-formatted messages from the PostgreSQL database trigger. The message will be passed into our logger workspace.

Open FME Flow and create a new Automation. Add a Webhook Trigger to the canvas. Save the Automation to generate a webhook URL, which will be used in the database trigger function.



Part 3: Create Database Trigger and Request Functions

Download and unzip the SQL script, then open it in a text editor. Copy the CREATE TABLE statement and run it in the PostgreSQL query tool to create the potholes table. This table holds the id of the reported pothole, the date it was reported, the reporter's name, the pothole size, and the location. Note that the location/geometry column uses EPSG:4326 (LL84) as the coordinate system.

CREATE TABLE single_pothole (
  id serial PRIMARY KEY,
  date DATE,
  name VARCHAR(255),
  size CHAR(10),
  geom GEOMETRY(Geometry,4326)
);

In step 4 of the script, we are initiating the HTTP request function that will be called by the trigger function (step 5). The send_request function uses PL/Python to perform the following:

  • Takes the FME Flow URL and custom message as arguments
  • Imports the Python libraries required to parse the URL and make a web request
  • Parses the URL and extracts the necessary components
  • Sends a POST request to FME Flow with the custom message as a JSON-formatted content payload (the header specifies the payload type, which in our case is application/json)
  • Gets the response back from FME Flow and returns it as a string
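Outside the database, the core of that logic boils down to a few lines of standard-library Python. The sketch below is illustrative rather than the script's exact code (function and variable names are assumptions); inside PostgreSQL, the same body runs as a plpython3u function with the URL and message arriving as arguments:

```python
from urllib import request
from urllib.parse import urlparse

def make_webhook_request(url, message):
    """Build a POST request carrying `message` as the payload.

    `message` is assumed to already be a JSON-formatted string,
    as produced by the database trigger function.
    """
    parts = urlparse(url)  # extract scheme, host, and path components
    if parts.scheme not in ("http", "https"):
        raise ValueError("webhook URL must be http(s)")
    return request.Request(
        url,
        data=message.encode("utf-8"),
        headers={"Content-Type": "application/json"},  # payload type
        method="POST",
    )

def send_request(url, message):
    """POST the message to FME Flow and return the response as a string."""
    with request.urlopen(make_webhook_request(url, message)) as resp:
        return resp.read().decode("utf-8")
```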

Before running step 5, you need to replace the URL parameter on line 66 with the webhook URL from the previous section. This part of the script sets up the trigger function using PL/pgSQL to perform the following:

  • Declares the custom message and Well-Known Text point variables that will be used in the function
  • Stores the geometry value of an inserted or updated record in wkt_point, converting it to WKT format with the PostGIS ST_AsText function
  • Uses if/else statements and the TG_OP trigger procedure to check which action (INSERT, UPDATE or DELETE) was fired and generates a custom message accordingly. Note that the DELETE action will use the old geometry value because no new record exists.
  • Creates custom JSON-formatted messages containing the point ID, action type, and the old or new geometry value as WKT (depending on the action)
  • Calls the send_request function to send the JSON message to the FME Flow webhook trigger 

Run steps 4, 5 and 6 in the query tool. Step 6 of the script binds single_function to the single_pothole table. The trigger will be called the moment after a row is inserted, updated or deleted. Triggers and functions can be viewed and managed in pgAdmin under Schemas > public > Trigger Functions, and Schemas > public > Tables > [Table Name] > Triggers (bound trigger functions).
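Stripped to its essentials, the trigger function and its binding follow a standard PL/pgSQL pattern. The sketch below is a simplified stand-in for the script's steps 5 and 6, not its exact code (the real function builds richer messages):

```sql
-- Simplified trigger function: build a JSON message and hand it to send_request.
CREATE OR REPLACE FUNCTION single_function() RETURNS trigger AS $$
DECLARE
  msg TEXT;
  wkt_point TEXT;
BEGIN
  IF (TG_OP = 'DELETE') THEN
    wkt_point := ST_AsText(OLD.geom);  -- no NEW row exists on delete
    msg := json_build_object('id', OLD.id, 'action', TG_OP,
                             'geom', wkt_point)::text;
  ELSE
    wkt_point := ST_AsText(NEW.geom);
    msg := json_build_object('id', NEW.id, 'action', TG_OP,
                             'geom', wkt_point)::text;
  END IF;
  -- Replace the URL with your Automation's webhook URL.
  PERFORM send_request('https://<your-fme-flow>/webhook-url', msg);
  RETURN NULL;  -- AFTER triggers ignore the return value
END;
$$ LANGUAGE plpgsql;

-- Step 6: bind the function so it fires after every row-level edit.
CREATE TRIGGER single_trigger
  AFTER INSERT OR UPDATE OR DELETE ON single_pothole
  FOR EACH ROW EXECUTE FUNCTION single_function();
```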


Part 4: Set up the Data Loader Workspace

For demonstration purposes, we will be using a simple FME Flow App to submit jobs to a data loader workspace that performs various actions on our PostGIS database. The Flow App will be configured later.

Open the postgis_single_loader.fmw workspace in FME Workbench.

This workspace will reside on FME Flow to receive and handle the Flow App job requests and perform example actions on the database. Edit the PostGIS writer to connect to your own database. In the CSV reader, set the path to the sample CSV file (Potholes.csv).

Note: While we are using an FME workspace to load the data, the database trigger will still run if you edit single_pothole by any other method (e.g. SQL, ArcGIS).

The source CSV and destination PostGIS database published parameters can be removed. When you are ready, publish the loader workspace to FME Flow in a new repository named ‘PostGIS-Single’ as a Job Submitter service. Include the database connection and sample Potholes.csv file (upload to Shared Resource folder > Data).


Part 5: Create ArcGIS Online Change Log Feature Service

For this section, we will be using ArcGIS Online to demonstrate a synchronized external application that monitors changes in the database without needing direct access to the database. Log in to your ArcGIS Online account and create a new hosted point feature layer named PotholeChangeLog. In the layer’s Data > Fields, create three new fields:

  • id (integer)
  • RowAction (string)
  • DateEdited (date)

You may find it useful to create a simple ArcGIS Online web app to display the feature service. 


Part 6: Set up the ArcGIS Online Logger Workspace

Open the postgis_single_logger.fmw workspace in FME Workbench. 


Edit the ArcGIS Online Feature Service writer parameters to connect to your ArcGIS Online account. Make sure the output geometry is set to arcgisonline_point and the layer name matches the feature service layer we created in Part 5. You may need to manually change the user attribute types, particularly 'id' to esriFieldTypeInteger (NOT esriFieldTypeOID) and 'DateEdited' to esriFieldTypeDate.

Note: If you don’t have a licensed ArcGIS Online account, please use the Esri Shapefile writer instead. Enable the shapefile writer and disable the AGOL writer. For the writer destination, enter $(FME_SHAREDRESOURCE_DATA) to write to the Data folder in Resources. The change log shapefile will be on FME Flow and can be downloaded to view in FME Data Inspector.

The logger workspace will be placed after the webhook trigger in our Automation to perform the following:

  • Gets the webhook trigger’s output attribute (formerly output key) values (point id, database action, and WKT geometry)
  • Converts and replaces the webhook.geom value with a valid WKT point geometry
  • Sets the coordinate system to LL84
  • Logs the current date and time and converts it to a compatible format that ArcGIS Online can read (we are using ISO datetime)
  • Removes and renames attributes to match the AGOL feature service layer
  • Writes the point feature with attributes into the feature service
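Outside FME, the attribute shaping the logger workspace performs can be sketched in plain Python. The helper below is illustrative only (the attribute names follow the webhook output attributes and the AGOL fields from Part 5; it is not part of the workspace):

```python
from datetime import datetime, timezone

def to_change_log_record(webhook_id, webhook_action, webhook_geom):
    """Shape a webhook message into the fields the change log layer expects."""
    # The trigger sends geometry as WKT already, e.g. 'POINT(-123.1 49.2)';
    # here we only validate the prefix rather than fully parse it.
    if not webhook_geom.startswith("POINT"):
        raise ValueError("expected a WKT point")
    return {
        "id": int(webhook_id),            # esriFieldTypeInteger
        "RowAction": webhook_action,      # string field
        # ISO 8601 datetime, a format ArcGIS Online can read
        "DateEdited": datetime.now(timezone.utc).isoformat(timespec="seconds"),
        "geometry": webhook_geom,         # WKT point, interpreted as LL84
    }
```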

When you are finished configuring the logger workspace, publish it to FME Flow as a Job Submitter in the same repository as the data loader workspace along with your ArcGIS Online web connection.


Part 7: Set up the FME Flow App

The following actions are needed to configure the Flow App:

  1. In FME Flow, go to Flow Apps > Create Workspace App.
  2. Name the app ‘postgis-single-edit-flow-app’, and enter ‘PostGIS Single Edit Triggers’ for the title.
  3. Select the postgis_single_loader.fmw workspace to run from the app.
  4. Click OK when you are done configuring the app. Open the Flow App URL.

Use the drop-down menu to select an example to run:

  1. Inserts a new pothole record into the PostGIS database. (id=1)
  2. Inserts three new pothole records into the database.
  3. Updates the record with id=3 from a ‘Medium’ pothole size to ‘Large’.
  4. Deletes the record that was inserted in step 1 (and deletes all duplicates if Example 1 was run multiple times).

While the loader workspace and Flow App are not particularly practical in their current states, imagine a scenario where non-FME users make edits through a Flow App without needing direct access to FME Flow or the database.


Part 8: Complete the Automation and Run the Examples

Open the Automation created in Part 2. Add a Run Workspace Action to the canvas and select the postgis_single_logger.fmw workspace. As stated earlier, we want to use output attributes to pass the custom JSON message from the webhook trigger to the logger workspace.

In the webhook trigger, open the Output Attributes panel. Under Webhook Attributes, create three new attributes: id, action and geom. These attributes are parsed from the JSON sent by the PostgreSQL trigger function and are passed into the logger workspace. Optionally, you can add a Log Action to the webhook trigger and set it to log the webhook message content. Click Apply, save the Automation and click Start Automation.
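If you enable the message content log action, the incoming payload will look something like this (values are illustrative):

```json
{"id": 1, "action": "INSERT", "geom": "POINT(-123.1 49.2)"}
```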


Open the FME Flow App and run the examples in order. You will be redirected to a job status page that shows whether the job is running and whether it completed or failed. Check the PostGIS database and Automation log to make sure the edits went through successfully. A job submission will appear in the logs for each edit made to the database. If you added the message content log action to the Automation, the JSON-formatted message sent from the database trigger will also appear in the logs.

Check your ArcGIS Online feature service/shapefile to verify that the edited records were created. Keep in mind that we’re reading the same records from the sample CSV file, so there will likely be overlapping points in your feature service.

Output viewed in Esri ArcGIS Online with Topographic basemap.

Now that you’re equipped with the fundamentals of PostgreSQL database triggers and how to use them with FME Flow, spend less time doing manual work and let FME Flow do the work for you!


Additional Resources

Getting Started with Automations

Bulk edits PostgreSQL: Push data from the database to an application in real-time

Tutorial: Update Databases Using an FME Server App

Building Integrations with the FME Server Automation Writer

