Data Flow
One central problem with many ESF workspaces is that they are not structured with an understanding of how data flow works in FME. In FME, splitting a data flow copies the features, and combining flows appends them. If you want records to actually merge attributes or features together, you need to use combining transformers such as the Aggregator, FeatureMerger, or Joiner. Any documents referenced throughout this article can be found in the ESFTemplates.zip attached to this article under Files.
Establishing Relationships with Common Keys
A very common problem is the lack of cross-referencing using the appropriate IDs. For example, every opening needs a unique esf_opening_id, and every opening definition is a child of an opening and thus must contain both an esf_opening_id and an esf_opening_definition_id. If an opening definition cut block polygon is sent to the ESF writer without a reference to a parent opening that shares its esf_opening_id, the opening definition polygon will be rejected by the ESF validator.
It is possible that some records will share a repeating key. For example, several cut blocks may be part of the same opening, in which case the composite key might be cut block plus cutting permit. In the workspace we need to use an Aggregator so that these multiple cut blocks are grouped into one aggregate feature associated with one opening definition and one opening.
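This composite-key grouping can be sketched in plain Python. The record and attribute names below are illustrative, not the actual ESF schema names; in the workspace the same effect comes from an Aggregator with its Group By parameter set to the key attributes.

```python
from collections import defaultdict

# Hypothetical cut block records; attribute names are illustrative only.
cut_blocks = [
    {"cut_block": "A", "cutting_permit": "CP01", "area_ha": 12.5},
    {"cut_block": "A", "cutting_permit": "CP01", "area_ha": 3.1},
    {"cut_block": "B", "cutting_permit": "CP01", "area_ha": 7.8},
]

# Group on the composite key (cut_block, cutting_permit), as an Aggregator
# would with Group By set to those two attributes.
groups = defaultdict(list)
for rec in cut_blocks:
    groups[(rec["cut_block"], rec["cutting_permit"])].append(rec)

# Each group becomes one aggregate feature tied to a single opening
# definition and opening.
for key, members in groups.items():
    print(key, "->", len(members), "record(s)")
```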
Harvest Application Example
For example, in the FTA_HA example from the ESF tutorial (HarvestApplication_wCP_submission.xml):
The harvest application has:
esf_harvest_application_id hawcp-1
esf_legal_description_id legal-desc-1
Cutblock A has:
esf_harvest_application_id hawcp-1
esf_legal_description_id legal-desc-3
Cutblock B has:
esf_harvest_application_id hawcp-1
esf_legal_description_id legal-desc-2
This means that cutblocks A and B both belong to harvest application hawcp-1, which defines the relationship between the cutblock objects and the harvest application object. Also, all of the legal description IDs need to be defined as individual records in the legal description feature table.
Lack of Compliance with Feature Specification
Often features are rejected by the ESF writer because they don't comply with the associated feature representation documents (e.g. 2esf_results_fme_feature_representation.doc). You may need to use FME's attribute-processing transformers to parse the data into a compliant form. For example, you might use a StringReplacer to remove the spaces from a telephone number, since the ESF results feature representation specification states that the telephone number must be decimal(10,0). Similarly, a date field containing a '.' character must have that character removed.
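The cleanup a StringReplacer performs on those two fields amounts to simple string substitution, which can be sketched as follows (the sample values are made up):

```python
import re

def clean_phone(value):
    # Keep only digits so the value fits the decimal(10,0) telephone field.
    return re.sub(r"\D", "", value)

def clean_date(value):
    # Strip stray '.' characters from date strings.
    return value.replace(".", "")

print(clean_phone("(250) 555-1234"))  # 2505551234
print(clean_date("2010.03.15"))       # 20100315
```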
Viewing / Testing Output
Try opening the XML output file with the BC MoF Electronic Submission Framework - ESF (XSD-Driven) reader, with the parameter Validate Dataset set to Yes. This performs the same schema check that the MOF website does when you submit online.
Trial and Error Approach
Often you need to do some parsing and reformatting to get the metadata into a form ESF will accept. Note that this is often a trial-and-error process. Run the workspace, generate the XML, and try loading it with the FME Data Inspector with the reader parameter Validate Dataset set to Yes. When it fails, note the row number and error message, open the XML in a text editor, note which field the reader is complaining about, go to esf_results_feature_representation_4rev11.doc, look up that field and feature type, and see what the requirements are for it. This is how you might find that date and phone number values are incorrectly formatted, of the wrong type, or empty when a value is required, for example:
XML Parser error: 'Error in input dataset:'file:///C:/RT/31827/results/RESULTS_OpeningDefinition.xml' line:53 column:18 message:Datatype error: Type:InvalidDatatypeValueException, Message:Value is not in enumeration .'
Line 53 shows <rst:actionCode/>, which means there is a record where actionCode is blank.
You may also need to load the output with the FME Data Inspector with the reader parameter Validate Dataset set to No, in order to inspect your output and see why it might be failing.
Extracting and Managing Attributes
It can be handy to use a FeatureJoiner to extract field values from a spreadsheet or table. The trick is to find a common key that relates your geometry features and table records so you can retrieve OpeningDefinition attributes.
Use an AttributeRenamer to rename the attributes to the exact names that ESF is expecting. This way you don't need to manually connect them at the output.
Use NullAttributeReplacers to handle records with null values that would otherwise cause the translation to fail. These nulls may exist because there are records in the geometry file that can't find a match in the spreadsheet. Those records will ultimately need to be matched and populated to avoid losing them.
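The FeatureJoiner / AttributeRenamer / NullAttributeReplacer chain described above can be sketched with plain dictionaries. OPENING_ID, the table columns, and the renamed target attributes are all assumed for illustration, not taken from the ESF schema:

```python
# Geometry features and a lookup table sharing the common key OPENING_ID.
geometry_features = [
    {"OPENING_ID": "100", "geometry": "POLYGON(...)"},
    {"OPENING_ID": "200", "geometry": "POLYGON(...)"},
]
table_records = {
    "100": {"licensee": "ACME Forestry", "phone": "2505551234"},
    # "200" has no row in the spreadsheet, so its attributes come up null.
}
# AttributeRenamer role: map source names to the names ESF expects.
rename = {"licensee": "licenseeName", "phone": "telephoneNumber"}

joined = []
for feat in geometry_features:
    attrs = table_records.get(feat["OPENING_ID"], {})
    merged = dict(feat)
    for src, dst in rename.items():
        # NullAttributeReplacer role: substitute a placeholder where the
        # join found nothing, so the translation doesn't fail outright.
        merged[dst] = attrs.get(src) or "UNKNOWN"
    joined.append(merged)
```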
Grouping Features
For Results submissions, you typically need to group the features by opening_id using the Aggregator transformer. You can then branch your data flow: use a GeometryRemover to send the attribute-only data to the Opening feature type, while the other branch passes the aggregated polygons with their attributes to the OpeningDefinition feature type.
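The group-then-branch pattern can be sketched as follows. The attribute names and geometry placeholders are illustrative; in the workspace the two branches are wires out of the Aggregator, with a GeometryRemover on the Opening branch:

```python
# Input features sharing an opening_id.
features = [
    {"opening_id": "O1", "geometry": "POLY_A"},
    {"opening_id": "O1", "geometry": "POLY_B"},
    {"opening_id": "O2", "geometry": "POLY_C"},
]

# Aggregator role: one aggregate feature per opening_id.
aggregates = {}
for f in features:
    agg = aggregates.setdefault(f["opening_id"],
                                {"opening_id": f["opening_id"], "geometry": []})
    agg["geometry"].append(f["geometry"])

# Branch 1 (GeometryRemover): attribute-only records -> Opening feature type.
openings = [{"opening_id": a["opening_id"]} for a in aggregates.values()]
# Branch 2: aggregates keep their geometry -> OpeningDefinition feature type.
opening_definitions = list(aggregates.values())
```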
Additional Resources
For additional diagnostic ideas, try going through: 7ESF_PROBLEMS_R2.doc which is included with ESFTemplates.zip located under Files.