In the previous note, we covered some steps for cleaning up geometry before loading radar data; the main idea was to save time by not pushing a large amount of compromised data through the loading process. This note deals with the possibilities for editing geometry after the radar data has been loaded, and explains why, in general, positioning needs to be very good in 3D projects.

Loading GPR data

When we load radar data, we must specify the interpolation distance the software will use internally. As mentioned before, it’s usually of little use to make this distance shorter than half the channel spacing.
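
To make the rule of thumb concrete, here is a minimal Python sketch; the function name and the 0.5 factor simply mirror the guideline above and are not tied to any particular software.

```python
def min_useful_bin(channel_spacing: float) -> float:
    """Finest interpolation distance worth using, in the same unit as the
    channel spacing. The cross-line sampling is fixed by the array geometry,
    so binning finer than half the channel spacing rarely adds information."""
    return 0.5 * channel_spacing

# Example: an array with 11 cm channel spacing
print(min_useful_bin(0.11))  # -> 0.055, i.e. ~5.5 cm is the finest useful bin
```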

Another factor is the memory needed for managing the data, and the interpolation distance directly dictates that. The figures below show a small project with 55 MB of raw data. During data collection, the point distance was 2 cm and the channel spacing 11 cm. The images show the data interpolated to 2 cm, 4 cm, and 10 cm bins. As can be seen, any one of those settings would be perfectly adequate for locating the utilities. However, the disk space needed to accommodate all post-processing steps up to the stage shown differs considerably: 0.25 GB for 10 cm binning versus 3.8 GB for 2 cm binning. So, in this case, interpolating to 2 cm left us with roughly 70 times the original data size, yet the visible benefit is negligible. Interpolation with 10 cm binning seems the suitable choice, given that it requires only about 5 times the disk space of the raw data.
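
The driving factor behind those numbers is that the bin count grows quadratically as the bin size shrinks. The sketch below is a rough, illustrative estimate only; the site dimensions, trace length, and single-volume assumption are made up for the example, and real processing chains also store intermediate volumes.

```python
def grid_size_bytes(area_m2: float, bin_m: float,
                    samples_per_trace: int, bytes_per_sample: int = 4) -> float:
    """Rough size of one gridded 3D volume: the number of bins grows with
    1/bin^2, so halving the bin size roughly quadruples the data volume."""
    n_bins = area_m2 / (bin_m * bin_m)
    return n_bins * samples_per_trace * bytes_per_sample

# Illustrative comparison for a hypothetical 50 m x 20 m site, 512 samples/trace:
for bin_m in (0.02, 0.04, 0.10):
    gb = grid_size_bytes(50 * 20, bin_m, 512) / 1e9
    print(f"{bin_m * 100:.0f} cm bin: ~{gb:.1f} GB per volume")
```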

Do we need any fancy filtering when importing the data? No: DC adjustment, de-wow, or a bandpass, combined with a threshold and compensation for the Rx-Tx distance, is all that's needed, assuming, of course, that the raw data is of good quality.
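
For illustration, here is a minimal numpy/scipy sketch of the three trace filters mentioned; the window length, filter order, and corner frequencies are placeholders that would need tuning to the antenna and sampling rate.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def dc_adjust(trace: np.ndarray) -> np.ndarray:
    """Remove the constant offset (here simply the trace mean)."""
    return trace - trace.mean()

def dewow(trace: np.ndarray, window: int = 51) -> np.ndarray:
    """Subtract a running mean to suppress the low-frequency 'wow'."""
    kernel = np.ones(window) / window
    return trace - np.convolve(trace, kernel, mode="same")

def bandpass(trace: np.ndarray, fs: float, lo: float, hi: float) -> np.ndarray:
    """Zero-phase Butterworth bandpass between lo and hi (in Hz)."""
    b, a = butter(4, [lo, hi], btype="band", fs=fs)
    return filtfilt(b, a, trace)
```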

Correcting bad positions

Figure 3 below shows a section from a survey conducted with a vehicle-mounted array. Not even the most erratic driver could produce the track A-B-C as shown. This type of positioning error is typical when variations between RTK fix and float solutions are let through: when the fix is lost, the output from the GPS jumps unpredictably.
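
One simple way to flag such jumps automatically is to check the speed implied by consecutive GPS points. The helper below is a hypothetical sketch, and the speed threshold is an assumption to be set per survey.

```python
import numpy as np

def flag_position_jumps(x, y, t, max_speed=10.0):
    """Flag GPS points whose implied speed from the previous point is
    implausibly high, a typical symptom of a lost RTK fix.
    x, y in metres, t in seconds; max_speed (m/s) is survey-specific."""
    x, y, t = map(np.asarray, (x, y, t))
    step = np.hypot(np.diff(x), np.diff(y))
    dt = np.maximum(np.diff(t), 1e-6)        # guard against zero time steps
    bad = step / dt > max_speed
    return np.concatenate(([False], bad))    # the first point has no predecessor
```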

We don't know whether the positioning before B and after C is good, but the anomaly at A tells us we're not entirely lost: it runs continuously across two swaths, so at least the relative position between those two swaths is good at that point.

Figure 4 shows the result of the corrective actions, with somewhat higher gain applied to the data. We now have a continuous anomaly at B and can be confident the correction moved things in the right direction. What we did here was mainly to delete the positioning points between B and C, leaving the odometer wheel as the only positioning source between those points.
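
The same idea can be sketched in code: drop the flagged positions and re-interpolate the gap against the odometer distance. This assumes a monotonically increasing odometer reading is stored per trace; in practice this correction is done interactively in the processing software, not in a script.

```python
import numpy as np

def bridge_with_odometer(x, y, odo, keep):
    """Drop flagged positions and fill the gap by interpolating the kept
    coordinates against the odometer distance, mimicking the 'odometer only
    between B and C' correction described above. odo must be increasing."""
    x, y, odo, keep = map(np.asarray, (x, y, odo, keep))
    xi = np.interp(odo, odo[keep], x[keep])
    yi = np.interp(odo, odo[keep], y[keep])
    return xi, yi

# For example: keep = ~flag_position_jumps(x, y, t) from the earlier sketch
```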

So can we now conclude that it's possible to fix bad positioning? No, we should not think in that direction. Minor errors can be corrected, but if the positioning is bad throughout a project, fixing it becomes prohibitively time-consuming. Recall the project shown in a previous note, with more than 70,000 positioning points; correcting a large share of those by hand would be impossible.

In practice, we're limited to deleting some visibly wrong positioning points and moving a few others, provided we have anomalies to guide us. Having said all this, we should also mention that it's often commercially acceptable to live with some minor errors, and the ability to correct parts of the geometry may not always be worth the effort.

Takeaway

Interpolation distances are often chosen too short in the belief that this will enhance the data, while in fact the channel spacing is the limiting parameter. We're not saying that one should always interpolate to the channel spacing, only that one should not overestimate the ability of the seemingly higher point density along the swath to enhance the final images. We haven't seen any significant benefit from interpolating to less than half the channel spacing. We've also seen that modern, interactive software makes it possible to correct some positioning errors, although we warn against over-optimistic views of this ability. For large datasets it's impossible, and even when it is possible, it relies heavily on having visible anomalies to work from.

In our next note, we’ll cover some of the processing we do before the interpretation and export stages.
