Batch Processing Method 3: Batch Deploy

Liz Sanderson

Introduction

Batch Deploy is available under the Run menu in FME Workbench. It supports both immediate batch execution and the creation of batch files, and it lets you use the current workspace to process a large number of source datasets, producing a separate output for each.

Batch Deploy operates as a wizard. You specify the input and output datasets plus other relevant settings, such as a suffix for the output file names. You can choose to read an entire directory of data files, including subdirectories, or select individual files. Batch Deploy even lets you exclude selected source or destination datasets from batching, and it supports appending to an existing destination dataset (where that format supports appending).

If the process is not executed immediately, Batch Deploy creates two files: a *.tcl file containing the batch process settings, and a *.bat file used to start the process.
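Whichever route you take, the end result is one translation run per batched source dataset. For illustration only, here is a minimal hand-written Python sketch of an equivalent loop. It assumes fme.exe is on the PATH and a workspace that publishes parameters named SourceDataset_SHAPE and DestDataset_MIF; the workspace name and parameter names are hypothetical and depend on your own workspace, not something Batch Deploy generates for you.

    import glob
    import subprocess
    from pathlib import Path

    WORKSPACE = "convert_shapes.fmw"   # hypothetical workspace name

    # One translation run per source Shape dataset, each writing to its
    # own output folder named after the input file.
    for shp in glob.glob(r"C:\data\shapes\*.shp"):
        out_dir = Path(r"C:\data\output") / Path(shp).stem
        subprocess.run(
            ["fme.exe", WORKSPACE,
             "--SourceDataset_SHAPE", shp,        # varies each iteration
             "--DestDataset_MIF", str(out_dir)],  # separate output per iteration
            check=True,
        )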

Q) What's the point of having multiple source readers but only batching one of them? Why would I do that?

A) Previously (before FME 2005), a workspace with multiple source readers could not be batched at all, so this improvement was made to permit batching in example scenarios such as these:

  1. You have multiple Shape datasets and a single CSV file that lists the changes to be made to features in those Shape datasets. You can now batch process all the Shape datasets, applying the changes listed in the single (non-batched) CSV file.
  2. You have multiple DGN files with links to attributes held in an Oracle database. You can now convert these DGNs to MIF/MID, matching them to the Oracle database records, by batching the DGNs but not the Oracle database (of which there is only one, common to all the DGNs). A sketch of the resulting argument pattern follows this list.
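As a concrete illustration of scenario 1 (the workspace and parameter names below are hypothetical and depend on how your workspace publishes them), each iteration changes only the Shape dataset, while the CSV dataset is passed unchanged:

    import subprocess

    # One iteration of scenario 1: the Shape reader is batched (its dataset
    # changes every run); the CSV reader is not batched (same file every run).
    subprocess.run(
        ["fme.exe", "apply_changes.fmw",                             # hypothetical workspace
         "--SourceDataset_SHAPE", r"C:\data\shapes\parcels_01.shp",  # batched reader: varies
         "--SourceDataset_CSV", r"C:\data\changes.csv"],             # non-batched reader: fixed
        check=True,
    )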



Q) What's the point of having multiple destination writers but not batching all of them? Why would I do that?

A) Previously (before FME 2005), a workspace with multiple destination writers could not be batched at all, so this improvement was made to permit batching in an example scenario such as this:

  1. You have multiple Shape datasets that you wish to convert to both MIF/MID and GeoDatabase. You need a separate MIF/MID file for each source Shape dataset, but only a single GeoDatabase to which all the data is written. Therefore you batch the MIF/MID destination but turn off batching for the GeoDatabase. Provided that the 'Delete GeoDatabase' setting is set to NO, the data from each iteration is added to the GeoDatabase instead of replacing it (whereas if you also batched the GeoDatabase, you would get a separate GeoDatabase for each iteration). The sketch after this list shows the resulting argument pattern.
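Viewed from the writer side, this is the same pattern (again with hypothetical workspace and parameter names): the MIF/MID destination gets a new value on every run, while the GeoDatabase destination is passed unchanged so that each run appends to it.

    import subprocess

    # One iteration of the scenario above: the MIF/MID writer is batched
    # (new folder per run); the GeoDatabase writer is not batched, so every
    # run adds to the same GeoDatabase (with its delete setting turned off).
    subprocess.run(
        ["fme.exe", "shape_to_mif_and_gdb.fmw",                      # hypothetical workspace
         "--SourceDataset_SHAPE", r"C:\data\shapes\parcels_01.shp",  # batched reader: varies
         "--DestDataset_MIF", r"C:\data\mif\parcels_01",             # batched writer: varies
         "--DestDataset_GEODATABASE", r"C:\data\master.gdb"],        # non-batched writer: fixed
        check=True,
    )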
