Having covered several useful visualization and interpretation tools in previous notes, we have not yet discussed ‘thick-slice’ processing. In this note, we therefore present a novel method for visualizing 3D-GPR data more effectively by applying ‘thick-slice’ processing. To the best of our knowledge, this method has not been commercially available until now. The original concept was presented to us by Mark Grasmueck et al. of the University of Miami. While Mark remains the innovator, we have adapted it as a feature within Condor.
We continually seek effective methods for managing huge datasets without loss of resolution or detail, methods that preserve depth/position awareness and accuracy and, finally, lend themselves to precise picking of targets and export to CAD environments. Further, our clients ask for such functionality packaged in a user-friendly environment, without the need for extensive parameter tweaking. OspreyView, as we call it, meets all these conditions.
A first example of what we’re presenting now is shown in Figure 1. The top image shows our approach to ‘thick-slice’ processing, while the bottom view shows a traditional thin slice. Although both images are good, at least five utilities are clearly visible in the OspreyView image that are absent from the bottom one, or discernible only with some imagination. Before delving further into details, let us briefly review some concepts in thick-slice processing. Firstly, a depth slice is a horizontal cut through your dataset, showing the data’s amplitudes after processing is applied. The resulting image (C-scan) has some significant characteristics. It gives an excellent overview that is very cumbersome to achieve through single-line profiling, and it preserves the full resolution of the GPR system and data collection. The depth localization of targets is precise, and moving through the data is swift. Figure 2 (top image) shows an example of such basic visualization.
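For readers who like to see the idea in code, the following is a minimal sketch of what a ‘thin-slice’ (C-scan) extraction amounts to. It assumes the processed 3D-GPR volume is held as a NumPy array indexed (depth sample, y, x) with a uniform depth-sample interval; the function and variable names are purely illustrative and not taken from Condor.

```python
# Minimal thin-slice (C-scan) sketch: pull the horizontal amplitude map
# closest to a requested depth from a 3D volume indexed (depth, y, x).
import numpy as np

def thin_slice(volume: np.ndarray, depth_m: float, dz_m: float) -> np.ndarray:
    """Return the horizontal amplitude map (C-scan) nearest to depth_m."""
    idx = int(round(depth_m / dz_m))              # nearest depth sample
    idx = min(max(idx, 0), volume.shape[0] - 1)   # clamp to a valid index
    return volume[idx]                            # 2D amplitude map at that depth

# Example: a synthetic 3 m cube sampled every 0.02 m on a 200 x 300 grid
cube = np.random.randn(150, 200, 300)
cscan = thin_slice(cube, depth_m=1.3, dz_m=0.02)
print(cscan.shape)  # (200, 300)
```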
As good as this type of ‘thin-slice’ visualization is, the obvious drawback is that it only shows what the GPR detects at the time/depth cut through by a given slice. There is no effective visualization of dipping targets, nor is it possible to see targets at entirely different depths. Scrolling is needed, which adds time, but more importantly, it does not give the operator a complete overview.
To overcome this drawback and get a better overview, it’s common practice to add more processing by averaging slices over a specific depth range. In Figure 2, the middle and bottom images show the effect of such averaging over 4 and 8 slices, respectively. While the averaging applied to the middle image works, it does not work on the bottom image. The severe degradation of that data is due to the averaging spanning close to a full wavelength of the radar signal. This method always reduces resolution, and even if the 4-slice averaging seems to work, a careful look will reveal loss of information. A Hilbert transform is often applied before averaging to avoid the cancellation caused by the GPR signal’s bi-polar nature. However, this approach further reduces resolution and therefore isn’t what we want.
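To make the averaging discussion concrete, here is a short sketch of thick-slice averaging over a depth window, with and without a Hilbert envelope. It assumes the same (depth, y, x) volume layout as the sketch above and uses SciPy’s Hilbert transform; the names and window choices are assumptions for illustration only.

```python
# Thick-slice averaging sketch: plain averaging lets the bi-polar GPR wavelet
# cancel itself; averaging the Hilbert envelope avoids that cancellation at
# the cost of some vertical resolution.
import numpy as np
from scipy.signal import hilbert

def averaged_slice(volume, i0, i1, use_envelope=False):
    """Average depth samples i0..i1 (inclusive) into a single 2D map."""
    block = volume[i0:i1 + 1]
    if use_envelope:
        # Envelope along the depth axis removes the wavelet's sign changes.
        block = np.abs(hilbert(block, axis=0))
    return block.mean(axis=0)

cube = np.random.randn(150, 200, 300)
avg4 = averaged_slice(cube, 60, 63)                          # ~4-slice average
avg8_env = averaged_slice(cube, 58, 65, use_envelope=True)   # 8-slice envelope average
```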
That said, animations of moderately averaged slices work well. Set up at a suitable frame rate, they provide an effective way of visualizing all present targets, and this approach is unlikely to go away soon. Although the full resolution is preserved, it does not easily lend itself to precise target picking for export to CAD/GIS environments – a critical task for our clients. Regardless, it remains a meaningful way to visualize 3D data and to bring attention to what can be achieved with GPR.
We could go on to discuss amplitude thresholding or other means of simplifying and improving visualization, but that is beyond the scope of this document. The objective is ‘thick-slice’ processing, so let’s jump to OspreyView. Figure 3 below shows a second comparison between this new method (bottom) and a typical ‘thin slice’ (top). The two data examples reveal utility lines beneath a busy road intersection, centred at approx. 1.3 m depth.
In this example, the operator set OspreyView to see through from 0.2 to 1.9 m, revealing almost everything available in this data. This extensive range may blur some subtle details, but it gives an excellent overview.
Given the example in Figure 3, the advantage of OspreyView should be obvious to anyone working with 3D-GPR data, even though only the bird’s-eye view through the ground is shown. This bird’s-eye view is not the final delivery our clients require, but having a clear overview of the scene is essential. We will talk about precision later.
How does OspreyView work?
In simple terms, the software applies a colour matrix that encodes both depth and signal return strength, so that both can be read directly from the image. This approach differs from the 1-dimensional colour palettes commonly used throughout the GPR industry.
As shown in Figure 4, the default colour matrix is set so that targets at the centre of the selected depth range appear greenish, while shallower or deeper targets appear yellowish or bluish, respectively. This scheme preserves a feeling of depth in the images (as per the inventor’s concept). The colour matrix can easily be altered for those wanting something else.
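To illustrate the general idea of a two-dimensional colour matrix, here is a minimal sketch: a colour is looked up from a matrix indexed by relative depth and return strength, rather than from a 1-dimensional palette. This is our own illustration of the concept, not the Condor/OspreyView code; the yellow–green–blue ramp, the strongest-return rule, and the normalisation are assumptions.

```python
# Sketch of a 2D colour matrix: rows encode relative depth within the selected
# window (shallow=yellow, centre=green, deep=blue), columns encode return
# strength (weak=dark, strong=bright).
import numpy as np

def build_colour_matrix(n_depth=64, n_amp=64):
    d = np.linspace(0.0, 1.0, n_depth)[:, None]    # relative depth, 0..1
    a = np.linspace(0.0, 1.0, n_amp)[None, :]      # normalised strength, 0..1
    yellow, green, blue = (1.0, 1.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)
    rgb = np.empty((n_depth, n_amp, 3))
    for c in range(3):
        upper = yellow[c] + 2.0 * d * (green[c] - yellow[c])        # 0.0..0.5
        lower = green[c] + 2.0 * (d - 0.5) * (blue[c] - green[c])   # 0.5..1.0
        hue = np.where(d < 0.5, upper, lower)
        rgb[..., c] = hue * a                      # strength scales brightness
    return rgb

def osprey_like_image(volume, i0, i1, matrix):
    """Colour each pixel by the depth and strength of its strongest
       return within the selected depth window [i0, i1]."""
    window = np.abs(volume[i0:i1 + 1])             # magnitude within the window
    k = window.argmax(axis=0)                      # depth index of strongest return
    strength = window.max(axis=0)
    rel_depth = k / max(i1 - i0, 1)                # 0 = top of window, 1 = bottom
    amp = strength / strength.max()                # crude contrast normalisation
    di = (rel_depth * (matrix.shape[0] - 1)).astype(int)
    ai = (amp * (matrix.shape[1] - 1)).astype(int)
    return matrix[di, ai]                          # (ny, nx, 3) RGB image

cm = build_colour_matrix()
cube = np.random.randn(150, 200, 300)
img = osprey_like_image(cube, i0=10, i1=95, matrix=cm)
print(img.shape)  # (200, 300, 3)
```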
User-friendliness is an often-overlooked feature. We think it is essential and strive to make that a hallmark of the ImpulseRadar brand – why make things complicated when they do not need to be? OspreyView is controlled by two sliders, one for the depth range, and one for contrast, as shown in Figure 4, above. The view itself is activated via a checkbox, so switching between the regular view and OspreyView is quick and couldn’t be simpler.
Going back to what our clients need to put in their delivery reports, i.e. precise target locations, Figure 5, above, shows how OspreyView helps extract just that. Precision interpretation revolves around the Ribbon-Box function presented in earlier notes, and OspreyView blends seamlessly with this function. The result is that precise picking of targets is now an even easier task, given OspreyView’s superior colour-coded depth overview.
OspreyView is not only a way to colour the top view; it also holds depth information. This means that when we pick a line along a coloured utility in the top view, the vertices of those picks are placed at the precise depth indicated by that colour. In short, we can now make precise depth picks from the top view! For this to work, the data must be reasonably clean; on the other hand, any misaligned picks can easily be adjusted in a side view.
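The principle can be sketched as follows: once each pixel of the top view carries the depth of its strongest return within the selected window, a pick at a given pixel simply inherits that depth as its vertex elevation. The simple strongest-return rule and all names below are our assumptions for illustration, not the Condor picking code.

```python
# Sketch of depth picking from the top view: a pick at (row, col) inherits the
# per-pixel depth of the strongest return within the selected window.
import numpy as np

def depth_of_strongest_return(volume, i0, i1, dz_m):
    """Per-pixel depth (metres) of the strongest return within window [i0, i1]."""
    window = np.abs(volume[i0:i1 + 1])
    k = window.argmax(axis=0)            # sample offset within the window
    return (i0 + k) * dz_m               # absolute depth in metres

def pick_vertex(depth_map, row, col, x0=0.0, y0=0.0, dx=0.05, dy=0.05):
    """Turn a top-view pick at (row, col) into an (x, y, z) vertex for CAD/GIS export."""
    return (x0 + col * dx, y0 + row * dy, float(depth_map[row, col]))

cube = np.random.randn(150, 200, 300)
depths = depth_of_strongest_return(cube, i0=10, i1=95, dz_m=0.02)
print(pick_vertex(depths, row=120, col=45))
```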
We have also found that this way of visualizing data helps us find subtle targets and anomalies that are otherwise challenging to detect. They are not invisible in the single-slice view, but they easily pass unnoticed if visible in only one depth slice. Figure 6, below, illustrates this perfectly. The top image is an ordinary slice in which some of the human-made structures are barely visible; they can be detected, weakly, in only one or two slices. Contrast this with OspreyView, where they are clearly visible.
The usefulness is not limited to ideal data; in fact, we have found it useful in every project to which it has been applied. Obviously, it does not help when a position is erroneous or when the soil renders GPR useless, but we have found it helpful even when the going gets tough.
Summary
OspreyView is the first commercial application of a novel method of visualizing 3D-GPR data. It is available in our CONDOR software, and the advantages are summarized as follows:
- Gives the user an instant, clear overview of the targets beneath, within an arbitrarily variable time window, without further processing.
- Completely preserves the resolution of the original data, both in depth and spatially.
- Is very fast; flipping between the traditional slice view and OspreyView is instant.
- Supports precise picking of targets, including correct depth, from the top view alone, without interfering with picking in other views.
- Does not require separate processing instances and needs no extra disk space.
- Helps detect faint objects, minimizing the risk of missed targets.
- Has a straightforward and intuitive user interface, with no parameter tweaking needed.