There are two fundamentally different ways to accomplish this. You can use either:
- krig_2d, or
- interp_data
Generally, if you are working with surface data such as X, Y, Z points in ASCII format, krig_2d is the easiest and best approach for a number of reasons. We recommend formatting your “surface” data in .APDV format, since the file then needs only 4 columns of numeric information: the optional “Boring ID” and “Ground Surface Elevation” columns can be omitted. Additionally, the third column of the file, which is normally the Z coordinate, can be either the actual Z coordinate or simply ZERO (0.0). Therefore your Surface Input Data can be as simple as:
|… (32 rows continue)|
Make sure to choose Linear Processing (not Log Processing). If we krige this “surface” it gives us:
However, this is not what we want to do!
We need to read the point data that needs new Z coordinates! If you already have these points in any C Tech data format (.APDV, .GEO, .GMF, .ELF), you’re ready to go, but if not, the easiest format by far is .GMF. We would have used GMF format above, except that krig_2d won’t read it. GMF requires only one header line (which only needs to be the word “surface”), and each line that follows must contain 3 numbers: the X, Y & Z coordinates. The three numbers can be separated by commas, spaces OR tabs. Since you don’t know your Z coordinate, you can just enter ZERO (0.0) for Z, but don’t leave it blank. If you already have other (non-zero) Z coordinates, that is OK; they won’t matter.
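As a concrete illustration, here is a short Python sketch that writes such a GMF file. The point coordinates and the output file name are hypothetical; only the file layout (a one-line “surface” header, then X, Y, Z per line) comes from the format described above.

```python
# A minimal GMF file: one header line containing the word "surface",
# then one line per point with X, Y, Z separated by spaces.
# Z is written as 0.0 because we don't know it yet.
# The coordinates and file name below are made-up examples.
points = [
    (100.0, 200.0),
    (150.0, 250.0),
    (175.0, 210.0),
]

with open("points_for_z.gmf", "w") as f:
    f.write("surface\n")
    for x, y in points:
        f.write(f"{x} {y} 0.0\n")
```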
However, if you have a Boring or Sample ID that you want to (somehow) retain with each sample, one thing that will matter is sorting. I recommend that you sort your data by X, or even by X and then Y if you suspect that you’ll have multiple points at the same X coordinate. Don’t worry about the GMF format: it allows a fourth column of alphanumeric IDs (though my example below doesn’t include them). Below are my points to receive new Z coordinates:
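The sorting step can be done in a spreadsheet, but a short Python sketch shows the idea: sort by X, then by Y, so ties in X still get a deterministic order. The sample IDs and coordinates here are hypothetical.

```python
# Sort points by X, then by Y, so rows with the same X coordinate
# still get a deterministic order and sample IDs can be matched
# back up later. IDs and coordinates are made-up examples.
rows = [
    ("MW-3", 150.0, 250.0),
    ("MW-1", 100.0, 200.0),
    ("MW-2", 100.0, 150.0),
]

rows.sort(key=lambda r: (r[1], r[2]))  # by X, then Y

for sample_id, x, y in rows:
    print(sample_id, x, y)
```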
Now comes the clever part. We read our new points with file_statistics and pipe that data into the “external grid” port of krig_2d. Our application looks like this:
Though you don’t “have to”, I recommend you turn OFF all of the Auxiliary Kriging Data, to keep the output of write_coordinates as simple as possible.
The output of write_coordinates when written as a .APDV file (recommended) is:
In the .APDV file above, the 5 columns are:
- X
- Y
- Z (it stayed zero, but we’ll ignore this)
- Topo (Z) as DATA (this is our REAL Z)
- Top Data (this was created by file_statistics when it read our GMF file; it is all zeros since the Z coordinates in our GMF file were all zeros)
If we delete the 3rd and 5th columns and change the 3-line header to a one-line header containing just “surface”, we have a GMF file again, now with the correct Z coordinates.
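That cleanup is easy to script. Below is a Python sketch, assuming a whitespace-delimited 5-column output file; the file names and values are hypothetical, and the demo writes a tiny fake input first so the conversion is self-contained.

```python
# Demo input: a tiny fake write_coordinates output with a 3-line
# header and 5 whitespace-delimited columns
# (X, Y, old Z, kriged Topo Z, Top Data). Values are made up.
fake_apdv = """\
header line 1
header line 2
header line 3
100.0 200.0 0.0 12.5 0.0
150.0 250.0 0.0 14.1 0.0
"""
with open("kriged_output.apdv", "w") as f:
    f.write(fake_apdv)

# Convert back to GMF: keep X, Y and the kriged Topo Z
# (columns 1, 2 and 4), drop the zero Z and Top Data columns
# (columns 3 and 5), and use a one-line "surface" header.
with open("kriged_output.apdv") as f:
    data_lines = f.read().splitlines()[3:]  # skip the 3-line header

with open("points_with_z.gmf", "w") as f:
    f.write("surface\n")
    for line in data_lines:
        cols = line.split()
        f.write(f"{cols[0]} {cols[1]} {cols[3]}\n")
```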
I double checked and EVS didn’t scramble the points. Re-sorting isn’t necessary in this case, but I’ve seen times when it is. When the file is sorted correctly, adding back your Sample IDs is easy.
The second method using the interp_data module is trickier and more prone to problems. Here are the issues:
- interp_data only works when the points to be interpolated fit INSIDE of the surface (or volume) that serves as the source of data.
- This means that if your points fall outside of the X-Y extents of your surface by one-millionth of a meter, you will not get any value assigned and the point will not pass through interp_data.
- This also means that when using a surface as the data source, the points must be exactly ON the surface. The only way to ensure this is to put all points at Z=0.0 and to make the surface FLAT at Z=0.0.
- Therefore we must interpolate the surface elevation DATA, not the surface Z coordinates.
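Given those constraints, a quick pre-check can flag points that would be silently dropped. The extents and points below are hypothetical, and a bounding box is only a first check: an irregular surface boundary can still exclude points that pass it.

```python
# Flag any points whose X-Y location falls outside the surface's
# bounding box, since interp_data will simply drop those points.
# The extents and point coordinates are made-up examples.
xmin, xmax = 100.0, 500.0
ymin, ymax = 200.0, 800.0

points = [(150.0, 300.0), (99.999999, 250.0), (400.0, 801.0)]

inside = [(x, y) for (x, y) in points
          if xmin <= x <= xmax and ymin <= y <= ymax]
outside = [p for p in points if p not in inside]
print(f"{len(inside)} inside; {len(outside)} would be dropped")
```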
In this example, we’ll use all of the same data as above, and our application is shown below:
It is a bit easier to see how the points relate to the surface in this example, but we could have achieved this with a second krig_2d module in our earlier application. As discussed, all Z-Scales are set to 0.0 to make EVERYTHING FLAT. In this simple case, the results will be virtually identical to those of our earlier example, but our application is a bit more complex, and if our data were a bit more spread out, we could have problems.