How to Create and Explore Augmented Reality Models for City Infrastructure

Liz Sanderson

FME Version

  • FME 2021.2

Introduction

In this tutorial, we are going to learn how to create 3D FMEAR files representing city infrastructure from a regular 2D dataset and how to view them at their true location with the FME AR app.
Intro2.png
Note the difference between the name of the format, which is “FMEAR”, a single word, and the app name “FME AR”, two words.

FMEAR files are generated with FME Desktop, and the first part of the tutorial guides you through the process.

The second part of the tutorial explains how to use the FME AR app for augmenting the real world with the data prepared in the first part.

Geospatial data in Augmented Reality adds an interesting challenge to the process of learning how to use AR. Traditionally, when we visualize data on our screens, neither our own location nor the location of the data matters. Due to the nature of geospatially-aware Augmented Reality, viewing a particular dataset only makes sense in the area where it belongs. Because of this, you may want to treat this tutorial as a general guideline and adjust the workflow to your own data, so that once you have completed the tutorial, you can view your data in the context of your location in the world.

The tutorial assumes that you have a good understanding of how FME Desktop works.

The tutorial uses the open data published by the City of Coquitlam, BC, Canada. 

 

The FME AR mobile app is still in development, and the articles in this tutorial series may not reflect its current state. The app should not be used in production. The FME AR mobile app for Android has been deprecated; see FME AR Mobile App on Android Deprecation.

 

Video

 

How to Create FMEAR Files

Reading the data

The data used in this tutorial consists of several shapefiles with many different feature types representing all kinds of underground infrastructure such as cables, mains, and manholes, and two more layers - streetlights and road centerlines.

When we create AR scenes using city data, we rarely want to transform the whole dataset into an AR format. Instead, we need a smaller scene to explore - maybe an intersection or a stretch of road. A straightforward way to do this is to specify a search envelope right on the reader and clip the source data to the area of interest:
ReadAndClip.png
This reads only the data within the search envelope. If we send the clipped data to FME Data Inspector, we should see the following (note that the orthoimage is not part of the dataset; it was added to Data Inspector separately as a background map):
SourceData.png
This is the subset of the source data, a typical 2D dataset we can find across many organizations, and which we want to transform into a 3D AR model. Note that the data coming with this tutorial is already clipped to these extents.
 

How to Transform Point Features

There are two ways we can transform the point features into 3D geometries - by changing the geometry of a point or by replacing it with an existing 3D model.

Changing point geometry

This method is useful when we want a simple representation of a 3D object - a cylinder (for example, a pole or a manhole cover) or a box (e.g. an electrical box). If attributes describing the geometric properties of an object are available, we can use them to build the new geometry. In our dataset, the point feature type Storm Catch Basin has a diameter attribute called “BARREL_DIA”, which we will use as a parameter in the 2DEllipseReplacer transformer. The attribute value is expressed in millimeters, and since AR datasets always use meters, we need to convert it to meters. The “Axis” parameters require radii, which is why we also need to divide the diameter by 2.
2DPointReplacer.png
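The unit conversion entered into the 2DEllipseReplacer parameters can be sketched in plain Python (the function name is illustrative, not part of FME):

```python
# Hypothetical sketch of the conversion done in the 2DEllipseReplacer
# parameters: BARREL_DIA is stored in millimeters, while the "Axis"
# parameters expect radii in meters (AR datasets always use meters).
def barrel_radius_m(barrel_dia_mm: float) -> float:
    """Convert a diameter in millimeters to a radius in meters."""
    return barrel_dia_mm / 1000.0 / 2.0

# Example: a 600 mm catch basin barrel becomes an ellipse
# with a 0.3 m radius.
print(barrel_radius_m(600))  # 0.3
```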
With our next step, we will slightly extrude the ellipse (the Extruder transformer) to get a 3D geometry. For poles, we can use the actual height attribute if available.
Extruder.png
The last step is setting the appearance of the new geometry. Note that each AppearanceSetter must use a unique appearance name. Skipping the naming or using duplicate names may lead to features receiving wrong appearances.
AppearanceSetter.png
If a point represents a rectangular feature, we can use 2DBoxReplacer followed by Extruder.

Now our point features are ready for saving into AR format.

 

Replacing point geometries with 3D models

If we want to use a 3D model instead of a point (a fire hydrant, a street light, etc.), we should bring the model into the workflow with a reader for one of the 3D formats, such as OBJ, FBX, or SketchUp.

As with almost any kind of transformation in FME, there are several ways we can pass a 3D geometry to a point. An advanced and very powerful method is adding the geometry to a Shared Item run-time library (SharedItemAdder), and then telling the points to use this geometry (SharedItemIDSetter). Here, we will use a simpler method, which includes extracting a 3D geometry into an attribute and merging it with the point feature.

For our tutorial, we will use a SketchUp model of a fire hydrant. Let’s inspect it first in Data Inspector:
Hydrant.png
We begin with a simple step - removing unnecessary attributes. A SketchUp file carries over 120 format attributes, which are not exposed on the reader but are present on the features. They may slow down the transformation process and are usually not needed. We can use AttributeRemover, BulkAttributeRemover, or AttributeKeeper to get rid of them.

In our next step, we need to make sure our model is properly centered, scaled, and rotated. The Feature Information window indicates that the model’s units are inches. The AR apps use meters, which means we need to scale the model with the Scaler transformer. We also see that the model is not centered in the XY plane, and along the Z-axis, elevation 0 falls in the middle of the model. We want the center of the model to be precisely at 0,0 in XY, and because the model needs to stand on the ground, its bottom should be moved to elevation 0 as well.

We can calculate the size of the model along the X and Y axes by extracting its bounds (BoundsExtractor) and subtracting the minimum value from the maximum. Then, we offset (Offsetter) the model by the negative minimum coordinate value minus one-half of the model size. This brings the center of the model to 0:

-min_coord - (max_coord - min_coord)/2

For the Z-axis, we simply move the model up by the negative minimum z value.

An easy alternative for centering the model is the ModelCenterer transformer from FME Hub.
ModelPreparation.png
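The centering arithmetic above can be sketched as follows (plain Python; the function name is illustrative, and we assume the bounds have already been extracted with BoundsExtractor):

```python
# A minimal sketch of the centering math described above. Given the model
# bounds, compute the offsets that center the model at 0,0 in XY and put
# its bottom at elevation 0.
def center_offsets(xmin, xmax, ymin, ymax, zmin):
    """Return the (dx, dy, dz) offsets for the Offsetter step."""
    dx = -xmin - (xmax - xmin) / 2.0  # -min_coord - (max_coord - min_coord)/2
    dy = -ymin - (ymax - ymin) / 2.0
    dz = -zmin                        # lift the bottom of the model to z = 0
    return dx, dy, dz

# Example: a model spanning x 10..14, y 20..26, z -1..1
# is shifted by (-12, -23, 1).
print(center_offsets(10, 14, 20, 26, -1))
```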
Now we have a 3D model that can be used instead of the point features. We extract the geometry of the fire hydrant model into an attribute (GeometryExtractor) and pass this attribute to the fire hydrant points (FeatureMerger). Then we extract the coordinates of the points (CoordinateExtractor) and replace the point geometries with the hydrant geometry (GeometryReplacer). At this moment, all hydrants are still sitting at coordinates 0,0,0, so we use the extracted point coordinates to move them to the original hydrant locations (Offsetter).
ModelPreparation2.png

Placing oriented models

Sometimes, we need to orient 3D models representing point features along linear features. For example, a water valve should follow the pipe it is installed on. In this case, we need to pass the points representing valves and the lines depicting pipes through the NeighborFinder transformer, which finds the pipe (Candidate port) closest to each valve (Base port). This transformer also adds the angle of the candidate to the base feature. Once we change the geometry of the valves from points to 3D models, we can use the angle to rotate the model (Rotator) and place it along the pipe.
WaterValveWorkspace.png
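What NeighborFinder computes for us here can be illustrated with a small Python sketch: find the closest line segment to a point and report that segment's angle. All names are illustrative, not FME API:

```python
import math

# Pipes are modeled as simple segments ((x1, y1), (x2, y2)); valves are
# (x, y) points. This is only an illustration of the geometry involved.
def point_segment_dist(px, py, seg):
    """Distance from a point to a line segment."""
    (x1, y1), (x2, y2) = seg
    dx, dy = x2 - x1, y2 - y1
    if dx == 0 and dy == 0:
        return math.hypot(px - x1, py - y1)
    # Parameter of the closest point on the segment, clamped to [0, 1].
    t = max(0.0, min(1.0, ((px - x1) * dx + (py - y1) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (x1 + t * dx), py - (y1 + t * dy))

def closest_pipe_angle(valve, pipes):
    """Angle (degrees) of the pipe nearest to the valve point."""
    seg = min(pipes, key=lambda s: point_segment_dist(valve[0], valve[1], s))
    (x1, y1), (x2, y2) = seg
    return math.degrees(math.atan2(y2 - y1, x2 - x1))

pipes = [((0, 0), (10, 0)),   # horizontal pipe
         ((0, 5), (0, 15))]   # vertical pipe
# The valve at (2, 4) is closest to the vertical pipe, so the
# valve model would be rotated by 90 degrees.
print(closest_pipe_angle((2, 4), pipes))
```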

After this, the water valves are placed along the pipes:
WaterValve.png

3D models usually come with appearances already set and don’t need further styling. If styling is necessary, however, we can use AppearanceSetter as described above. Note that setting appearances on complex hierarchical geometries may require several AppearanceSetters addressing particular parts of the geometry with the “Geometry XQuery” parameter.
 

Transforming linear features

We will try two types of line transformations. Let’s begin with making 3D pipes. PipeReplacer turns linear features into pipes, which are created as fme_pipe geometry. Not all formats or format implementations support this geometry type. For better compatibility, we can convert fme_pipe into more common geometry types such as fme_brep_solid (GeometryCoercer, or PipeEvaluator, which has control over stroking tolerance) or fme_mesh (Triangulator), which is a very efficient way of representing 3D features.

PipeReplacer.png


For electrical cables, let’s generate boxes with a rectangular cross-section. We can buffer the lines and then simply extrude the resulting area.
cableBoxes.png
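The buffer-then-extrude idea can be sketched for a single straight segment: the flat buffer is a rectangle whose corners are offset perpendicular to the line, and the Extruder then pulls this area up. The function below is illustrative only:

```python
import math

# Sketch of "buffer the line" for one segment: a rectangle of the given
# half-width, built by offsetting both endpoints along the unit normal.
# The Extruder step would then extrude this area into a box.
def buffer_segment(p1, p2, half_width):
    (x1, y1), (x2, y2) = p1, p2
    length = math.hypot(x2 - x1, y2 - y1)
    nx, ny = -(y2 - y1) / length, (x2 - x1) / length  # unit normal
    return [
        (x1 + nx * half_width, y1 + ny * half_width),
        (x2 + nx * half_width, y2 + ny * half_width),
        (x2 - nx * half_width, y2 - ny * half_width),
        (x1 - nx * half_width, y1 - ny * half_width),
    ]

# A 10 m cable run buffered into a 0.2 m wide rectangle
# (corners at y = +0.1 and y = -0.1).
print(buffer_segment((0, 0), (10, 0), 0.1))
```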

Then we style the 3D objects created from lines with AppearanceSetter, as described above.
 

Labels

Adding labels as 3D objects can be useful for annotating real-world objects such as buildings, street lights, bus shelters - anything we can see on the streets. Streets themselves can be labeled, too, which can be quite helpful for an observer to understand their location. We will use road centerlines to generate simple label objects. Labels are made from faces (fme_face geometry) with raster textures depicting street names.

Our road feature type contains four segments of road. With LengthCalculator and Snipper, we keep only a two-meter section cut from the center of each segment. Then, 3DForcer and Extruder create face geometries that hover 1.4 meters above the ground.
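The Snipper step can be approximated in a few lines of Python: measure the line, then keep the two-meter piece centered on its midpoint. This is a rough sketch assuming 2D polylines; the function names are illustrative:

```python
import math

# Rough sketch of the LengthCalculator + Snipper combination: keep only
# a `keep`-meter section centered on the middle of each polyline.
def _length(line):
    return sum(math.dist(a, b) for a, b in zip(line, line[1:]))

def _point_at(line, d):
    """Point at distance d along the polyline."""
    for a, b in zip(line, line[1:]):
        seg = math.dist(a, b)
        if d <= seg:
            t = d / seg
            return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
        d -= seg
    return line[-1]

def center_snip(line, keep=2.0):
    """Endpoints of the centered `keep`-meter section of the line."""
    mid = _length(line) / 2.0
    return _point_at(line, mid - keep / 2.0), _point_at(line, mid + keep / 2.0)

# A 10 m straight road keeps only the piece from 4 m to 6 m.
print(center_snip([(0, 0), (10, 0)]))
```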

In a parallel stream, each segment is replaced with a point (VertexCreator), for which MapnikRasterizer creates a raster label using the FULLNAME attribute representing the full name of a street.
Labelling1.png

With Offsetter in “Polar Coordinate” mode, we rotate the label by 180 degrees and move it slightly away from the original label. This way, we make sure that the street name is visible from both directions and looks correct even if the renderer chooses to show faces with a single side.
Labels.png
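The polar-coordinate offset used for the mirrored label copy boils down to shifting a point by a distance along a direction. A hedged sketch, with hypothetical names rather than the actual Offsetter parameter names:

```python
import math

# Illustration of the "Polar Coordinate" Offsetter step: the mirror copy
# of a label is rotated by 180 degrees and pushed a small distance away,
# so the street name reads correctly from both travel directions.
def polar_offset(x, y, distance, angle_deg):
    """Shift a point by `distance` meters in the direction `angle_deg`."""
    a = math.radians(angle_deg)
    return x + distance * math.cos(a), y + distance * math.sin(a)

label_angle = 0.0                    # original label faces along 0 degrees
mirror_angle = label_angle + 180.0   # the copy faces the other way
# Offset the copy 0.1 m perpendicular to the road so the faces don't overlap.
x, y = polar_offset(100.0, 50.0, 0.1, label_angle + 90.0)
print(round(x, 3), round(y, 3))  # 100.0 50.1
```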
Now all our features are ready for the FMEAR writer. 
 

Anchor

An anchor is a special object in FMEAR files that places an AR model at its correct location in the world. The anchor feature “knows” its latitude and longitude as well as its coordinates in the model space. When the geographic coordinates are matched to the location on the planet and the compass readings are correct, the model is georeferenced.

The anchor object must have three special attributes: 

Attribute Name           Attribute Value
fmear_location_feature   anchor
fmear_anchor_latitude    The latitude of the anchor point in LL84, for example: 49.177957
fmear_anchor_longitude   The longitude of the anchor point in LL84, for example: -122.843120


Anchors with no geometry are assumed to have coordinates 0,0 in the model space. However, it makes sense to use a real object’s coordinates for the anchor, because this helps to visually identify the model location and adjust its position if necessary.

We will use the fire hydrant point as the anchor. CoordinateExtractor adds the hydrant’s coordinates to the feature as attributes, and AttributeReprojector converts them from the original coordinate system to LL84. Finally, an AttributeCreator creates the fmear_location_feature attribute.
anchor.png
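Put together, the anchor feature carries attributes like the following (a sketch using the example coordinates from the table above; in the workspace these values come from CoordinateExtractor and AttributeReprojector, not hard-coded literals):

```python
# The three special attributes the FMEAR writer looks for on the anchor
# feature, filled with the example LL84 coordinates from the table above.
anchor_attributes = {
    "fmear_location_feature": "anchor",
    "fmear_anchor_latitude": 49.177957,
    "fmear_anchor_longitude": -122.843120,
}

# Basic sanity checks: the anchor must hold valid LL84 coordinates.
assert -90 <= anchor_attributes["fmear_anchor_latitude"] <= 90
assert -180 <= anchor_attributes["fmear_anchor_longitude"] <= 180
```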

Writing FMEAR Files

When we add the FMEAR writer, we should make sure that the “Initial model scaling” parameter is set to "Full Scale", so that the model visualized in the FME AR app has a scale of 1:1, that is, real-world size.

Taking the small screens of mobile devices into account, we don't have to replicate the full structure of the original dataset. We can simplify it by combining all feature types related to water into a “Water” feature type, anything related to drainage into “Storm”, and so on. Alternatively, we may want to separate objects we can see, such as manhole covers, from those hidden under the surface, such as pipes and cables.

The anchor can go to any feature type. Once the writer detects the first anchor feature, it saves it into the FMEAR file header but does not write it into the specified feature type. Only the first anchor is taken into account; any additional anchors are ignored.

Finally, it is convenient to write the output to some cloud storage such as Google Drive or Dropbox so that the results become available for viewing on the cloud-connected mobile device without additional manipulations. Alternatively, we can send the model to the mobile device using email or other methods, and save it locally there in the file system.

FullWorkspace.png 
Once we save our AR file on our mobile device, we are ready to go out and explore it in the real world.
 

Placing, Adjusting, and Exploring AR Models

As mentioned in the introduction, a geospatial Augmented Reality dataset only makes sense in its true context - the location where it really belongs. On the other hand, nothing prevents us from visualizing this data anywhere in the world, so you can use the AR file that comes with this tutorial. If you live in the Lower Mainland of British Columbia, you can check this dataset against its real location in Coquitlam, at the intersection of Eagleridge Drive and Creekside Drive/Burnside Place.

On the mobile device, we open the FME AR app (available in the App Store and Google Play) and, through the File Browser, navigate to the model location on the device or in the cloud. Once we tap on it, FME AR opens the model. We should make sure we don’t move or rotate the device while it is reading the model, because this may confuse the sensors - the GPS and the compass - and the model will be placed incorrectly. Once the model is loaded, the device begins searching for a surface on which the model can be placed. We need to point the camera towards such a surface, making sure the phone does not lock onto the wrong one. The app has an option to visualize detected planes on the screen, which may be helpful during the model placement step.

Since we use the fire hydrant as our anchor, it makes sense to load the model while standing close to this object. 

Once the model is loaded, the augmented hydrant should appear in close proximity to the real hydrant. Due to the limitations of the GPS and compass chips inside mobile devices, we can often expect the features in our model to be off by a few meters from the real object locations. The rotation of the model can also be off by a few degrees. We can adjust the model by moving and rotating it on the screen until it matches reality. It is also possible to connect mobile phones to high-accuracy GNSS receivers, which allows very accurate model placement.

Now we can walk around and explore the scene. Depending on the quality of your data, you may notice some mismatches between different layers. If only some features are off while the position of the others is correct, you may need to check the source data. This makes Augmented Reality an excellent quality control tool.

Keep in mind that the device needs to keep tracking the surface, so make sure it is always in sight of the camera. If you explore the scene on the street, always be aware of your surroundings, and ideally, wear a safety vest.

 

Conclusion

We went through the main steps of AR model creation and usage. There is a lot more we can do to make the models visually richer and more informative - more labels, color-coded geometries, feature summaries, and, with some effort, animations. We get even more power if our workspace is hosted as a service on FME Cloud: we can then create an AR model on demand for a requested location, with no need to generate the model ahead of time.

Augmented Reality is an emerging and quickly developing technology, and we hope that with this tutorial, you will be able to create your own AR scenes.
